
TPO Reading Passages

 

Contents

TPO1

GROUNDWATER

The Origins of Theater

Timberline Vegetation on Mountains

TPO2

THE ORIGINS OF CETACEANS

DESERT FORMATION

Early Cinema

TPO3

ARCHITECTURE

The Long-Term Stability of Ecosystems

Depletion of the Ogallala Aquifer

TPO4

Cave Art in Europe

Deer Populations of the Puget Sound

Petroleum Resources

TPO5

Minerals and Plants

The Origin of the Pacific Island People

The Cambrian Explosion

TPO6

Powering the Industrial Revolution

William Smith

Infantile Amnesia

TPO7

The Geologic History of the Mediterranean

Ancient Rome and Greece

Agriculture, Iron, and the Bantu Peoples

TPO8

The Rise of Teotihuacán

Extinction of the Dinosaurs

Running Water on Mars?

TPO9

Colonizing the Americas via the Northwest Coast

Reflection in Teaching

The Arrival of Plant Life in Hawaii

TPO10

Chinese Pottery

Variations in the Climate

Seventeenth-Century European Economic Growth

TPO11

Ancient Egyptian Sculpture

Orientation and Navigation

Begging by Nestlings

TPO12

Which Hand Did They Use?

Transition to Sound in Film

Water in the Desert

TPO13

Types of Social Groups

Biological Clocks

Methods of Studying Infant Perception

TPO14

Children and Advertising

Maya Water Problems

Pastoralism in Ancient Inner Eurasia

TPO15

Glacier Formation

A Warm-Blooded Turtle

Mass Extinction

TPO16

Trade and the Ancient Middle East

Development of the Periodic Table

Planets in Our Solar System

TPO17

Europe’s Early Sea Trade with Asia

Animal Signals in the Rain Forest

Symbiotic Relationships

TPO18

Industrialization in the Netherlands and Scandinavia

The Mystery of Yawning

Lightning

TPO19

The Roman Army’s Impact on Britain

Succession, Climax, and Ecosystems

Discovering the Ice Ages

TPO20

Westward Migration

Early Settlement in Southwest Asia

Fossil Preservation

TPO21

Geothermal Energy

The Origins of Agriculture

Autobiographical Memory

TPO22

Spartina

The Birth of Photography

The Allende Meteorite

TPO23

Urban Climates

Seventeenth-Century Dutch Agriculture

Rock Art of the Australian Aborigines

TPO24

Lake Water

Moving into Pueblos

Breathing During Sleep

TPO25

The Surface of Mars

The Decline of Venetian Shipping

The Evolutionary Origin of Plants

TPO26

Energy and the Industrial Revolution

Survival of Plants and Animals in Desert Conditions

Sumer and the First Cities of the Ancient Near East

TPO27

Crafts in the Ancient Near East

The Formation of Volcanic Islands

Predator-Prey Cycles

TPO28

Groundwater

Early Saharan Pastoralists

Buck Rubs and Buck Scrapes

TPO29

Characteristics of Roman Pottery

Competition

The History of Waterpower

TPO30

Role of Play in Development

The Pace of Evolutionary Change

The Invention of the Mechanical Clock

TPO31

Speciation in Geographically Isolated Populations

Early Childhood Education

Savanna Formation

TPO32

Plant Colonization

Siam, 1851–1910

Distributions of Tropical Bee Colonies

TPO33

The First Civilizations

Railroads and Commercial Agriculture in Nineteenth-Century United States

Extinction Episodes of the Past

TPO extra 1

POPULATION AND CLIMATE

EUROPE IN THE TWELFTH CENTURY

WHAT IS A COMMUNITY?

TPO extra 2

HABITATS AND CHIPMUNK SPECIES

CETACEAN INTELLIGENCE

A MODEL OF URBAN EXPANSION

TPO34

Islamic Art and the Book

The Development of Steam Power

Protection of Plants by Insects

 

TPO1

TPO1: GROUNDWATER

 

Groundwater is the word used to describe water that saturates the ground, filling all the available spaces. By far the most abundant type of groundwater is meteoric water; this is the groundwater that circulates as part of the water cycle. Ordinary meteoric water is water that has soaked into the ground from the surface, from precipitation (rain and snow) and from lakes and streams. There it remains, sometimes for long periods, before emerging at the surface again. At first thought it seems incredible that there can be enough space in the “solid” ground underfoot to hold all this water.   

The necessary space is there, however, in many forms. The commonest spaces are those among the particles—sand grains and tiny pebbles—of loose, unconsolidated sand and gravel. Beds of this material, out of sight beneath the soil, are common. They are found wherever fast rivers carrying loads of coarse sediment once flowed. For example, as the great ice sheets that covered North America during the last ice age steadily melted away, huge volumes of water flowed from them. The water was always laden with pebbles, gravel, and sand, known as glacial outwash, that was deposited as the flow slowed down.   

The same thing happens to this day, though on a smaller scale, wherever a sediment-laden river or stream emerges from a mountain valley onto relatively flat land, dropping its load as the current slows: the water usually spreads out fanwise, depositing the sediment in the form of a smooth, fan-shaped slope. Sediments are also dropped where a river slows on entering a lake or the sea; the deposited sediments are on a lake floor or the seafloor at first but will be located inland at some future date, when the sea level falls or the land rises. Such beds are sometimes thousands of meters thick.

In lowland country almost any spot on the ground may overlie what was once the bed of a river that has since become buried by soil; if they are now below the water’s upper surface (the water table), the gravels and sands of the former riverbed, and its sandbars, will be saturated with groundwater.  

So much for unconsolidated sediments. Consolidated (or cemented) sediments, too, contain millions of minute water-holding pores. This is because the gaps among the original grains are often not totally plugged with cementing chemicals; also, parts of the original grains may become dissolved by percolating groundwater, either while consolidation is taking place or at any time afterwards. The result is that sandstone, for example, can be as porous as the loose sand from which it was formed.

Thus a proportion of the total volume of any sediment, loose or cemented, consists of empty space. Most crystalline rocks are much more solid; a common exception is basalt, a form of solidified volcanic lava, which is sometimes full of tiny bubbles that make it very porous.  

The proportion of empty space in a rock is known as its porosity. But note that porosity is not the same as permeability, which measures the ease with which water can flow through a material; this depends on the sizes of the individual cavities and the crevices linking them.  
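
The distinction drawn here between porosity and permeability can be restated as a small sketch. The sample volumes below are invented purely for illustration; the point is that porosity is a volume fraction, and that two materials with the same fraction can still differ in permeability.

# A minimal sketch of the porosity definition above; the sample volumes
# are hypothetical, not measurements of any real rock.

def porosity(void_volume_cm3, total_volume_cm3):
    # Porosity = the proportion of a sample's total volume that is empty space.
    return void_volume_cm3 / total_volume_cm3

loose_sand = porosity(void_volume_cm3=40.0, total_volume_cm3=100.0)
sandstone = porosity(void_volume_cm3=40.0, total_volume_cm3=100.0)

# Equal porosity does not imply equal permeability: permeability depends on
# the sizes of the individual cavities and the crevices linking them.
print(f"porosity of each sample: {loose_sand:.0%}")  # 40%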

Much of the water in a sample of water-saturated sediment or rock will drain from it if the sample is put in a suitable dry place. But some will remain, clinging to all solid surfaces. It is held there by the force of surface tension without which water would drain instantly from any wet surface, leaving it totally dry. The total volume of water in the saturated sample must therefore be thought of as consisting of water that can, and water that cannot, drain away.   

The relative amount of these two kinds of water varies greatly from one kind of rock or sediment to another, even though their porosities may be the same. What happens depends on pore size. If the pores are large, the water in them will exist as drops too heavy for surface tension to hold, and it will drain away; but if the pores are small enough, the water in them will exist as thin films, too light to overcome the force of surface tension holding them in place; then the water will be firmly held. 

 

TPO1: The Origins of Theater

In seeking to describe the origins of theater, one must rely primarily on speculation, since there is little concrete evidence on which to draw. The most widely accepted theory, championed by anthropologists in the late nineteenth and early twentieth centuries, envisions theater as emerging out of myth and ritual. The process perceived by these anthropologists may be summarized briefly. During the early stages of its development, a society becomes aware of forces that appear to influence or control its food supply and well-being. Having little understanding of natural causes, it attributes both desirable and undesirable occurrences to supernatural or magical forces, and it searches for means to win the favor of these forces. Perceiving an apparent connection between certain actions performed by the group and the result it desires, the group repeats, refines and formalizes those actions into fixed ceremonies, or rituals. 

Stories (myths) may then grow up around a ritual. Frequently the myths include representatives of those supernatural forces that the rites celebrate or hope to influence. Performers may wear costumes and masks to represent the mythical characters or supernatural forces in the rituals or in accompanying celebrations. As a people becomes more sophisticated, its conceptions of supernatural forces and causal relationships may change. As a result, it may abandon or modify some rites. But the myths that have grown up around the rites may continue as part of the group’s oral tradition and may even come to be acted out under conditions divorced from these rites. When this occurs, the first step has been taken toward theater as an autonomous activity, and thereafter entertainment and aesthetic values may gradually replace the former mystical and socially efficacious concerns.   

Although origin in ritual has long been the most popular, it is by no means the only theory about how the theater came into being. Storytelling has been proposed as one alternative. Under this theory, relating and listening to stories are seen as fundamental human pleasures. Thus, the recalling of an event (a hunt, battle, or other feat) is elaborated through the narrator’s pantomime and impersonation and eventually through each role being assumed by a different person.  

A closely related theory sees theater as evolving out of dances that are primarily pantomimic, rhythmical, or gymnastic, or from imitations of animal noises and sounds. Admiration for the performer’s skill, virtuosity, and grace is seen as motivation for elaborating the activities into fully realized theatrical performances.

In addition to exploring the possible antecedents of theater, scholars have also theorized about the motives that led people to develop theater. Why did theater develop, and why was it valued after it ceased to fulfill the function of ritual? Most answers fall back on the theories about the human mind and basic human needs. One, set forth by Aristotle in the fourth century B.C., sees humans as naturally imitative—as taking pleasure in imitating persons, things, and actions and in seeing such imitations. Another, advanced in the twentieth century, suggests that humans have a gift for fantasy, through which they seek to reshape reality into more satisfying forms than those encountered in daily life. Thus, fantasy or fiction (of which drama is one form) permits people to objectify their anxieties and fears, confront them, and fulfill their hopes in fiction if not fact. The theater, then, is one tool whereby people define and understand their world or escape from unpleasant realities. 

But neither the human imitative instinct nor a penchant for fantasy by itself leads to an autonomous theater. Therefore, additional explanations are needed. One necessary condition seems to be a somewhat detached view of human problems. For example, one sign of this condition is the appearance of the comic vision, since comedy requires sufficient detachment to view some deviations from social norms as ridiculous rather than as serious threats to the welfare of the entire group. Another condition that contributes to the development of autonomous theater is the emergence of the aesthetic sense. For example, some early societies ceased to consider certain rites essential to their well-being and abandoned them; nevertheless, they retained as parts of their oral tradition the myths that had grown up around the rites and admired them for their artistic qualities rather than for their religious usefulness.

    

TPO1: Timberline Vegetation on Mountains

    The transition from forest to treeless tundra on a mountain slope is often a dramatic one. Within a vertical distance of just a few tens of meters, trees disappear as a life-form and are replaced by low shrubs, herbs, and grasses. This zone of rapid transition is called the upper timberline or tree line. In many semiarid areas there is also a lower timberline where the forest passes into steppe or desert at its lower edge, usually because of a lack of moisture.

    The upper timberline, like the snow line, is highest in the tropics and lowest in the polar regions. It ranges from sea level in the polar regions to 4,500 meters in the dry subtropics and 3,500–4,500 meters in the moist tropics. Timberline trees are normally evergreens, suggesting that these have some advantage over deciduous trees (those that lose their leaves) in the extreme environments of the upper timberline. There are some areas, however, where broadleaf deciduous trees form the timberline. Species of birch, for example, may occur at the timberline in parts of the Himalayas.

    At the upper timberline the trees begin to become twisted and deformed. This is particularly true for trees in the middle and upper latitudes, which tend to attain greater heights on ridges, whereas in the tropics the trees reach their greater heights in the valleys. This is because middle- and upper-latitude timberlines are strongly influenced by the duration and depth of the snow cover. As the snow is deeper and lasts longer in the valleys, trees tend to attain greater heights on the ridges, even though they are more exposed to high-velocity winds and poor, thin soils there. In the tropics, the valleys appear to be more favorable because they are less prone to dry out, they have less frost, and they have deeper soils.

    There is still no universally agreed-on explanation for why there should be such a dramatic cessation of tree growth at the upper timberline. Various environmental factors may play a role. Too much snow, for example, can smother trees, and avalanches and snow creep can damage or destroy them. Late-lying snow reduces the effective growing season to the point where seedlings cannot establish themselves. Wind velocity also increases with altitude and may cause serious stress for trees, as is made evident by the deformed shapes at high altitudes. Some scientists have proposed that the presence of increasing levels of ultraviolet light with elevation may play a role, while browsing and grazing animals like the ibex may be another contributing factor. Probably the most important environmental factor is temperature, for if the growing season is too short and temperatures are too low, tree shoots and buds cannot mature sufficiently to survive the winter months.

    Above the tree line there is a zone that is generally called alpine tundra. Immediately adjacent to the timberline, the tundra consists of a fairly complete cover of low-lying shrubs, herbs, and grasses, while higher up the number and diversity of species decrease until there is much bare ground with occasional mosses and lichens and some prostrate cushion plants. Some plants can even survive in favorable microhabitats above the snow line. The highest plants in the world occur at around 6,100 meters on Makalu in the Himalayas. At this great height, rocks, warmed by the sun, melt small snowdrifts.

    The most striking characteristic of the plants of the alpine zone is their low growth form. This enables them to avoid the worst rigors of high winds and permits them to make use of the higher temperatures immediately adjacent to the ground surface. In an area where low temperatures are limiting to life, the importance of the additional heat near the surface is crucial. The low growth form can also permit the plants to take advantage of the insulation provided by a winter snow cover. In the equatorial mountains the low growth form is less prevalent.

TPO2

TPO2: THE ORIGINS OF CETACEANS

Paragraph 1: How did it come about that farming developed independently in a number of world centers (the Southeast Asian mainland, Southwest Asia, Central America, lowland and highland South America, and equatorial Africa) at more or less the same time? Agriculture developed slowly among populations that had an extensive knowledge of plants and animals. Changing from hunting and gathering to agriculture had no immediate advantages. To start with, it forced the population to abandon the nomad’s life and become sedentary, to develop methods of storage and, often, systems of irrigation. While hunter-gatherers always had the option of moving elsewhere when the resources were exhausted, this became more difficult with farming. Furthermore, as the archaeological record shows, the state of health of agriculturalists was worse than that of their contemporary hunter-gatherers.

Paragraph 2: Traditionally, it was believed that the transition to agriculture was the result of a worldwide population crisis. It was argued that once hunter-gatherers had occupied the whole world, the population started to grow everywhere and food became scarce; agriculture would have been a solution to this problem. We know, however, that contemporary hunter-gatherer societies control their population in a variety of ways. The idea of a world population crisis is therefore unlikely, although population pressure might have arisen in some areas.

Paragraph 3: Climatic changes at the end of the glacial period 13,000 years ago have been proposed to account for the emergence of farming. The temperature increased dramatically in a short period of time (years rather than centuries), allowing for a growth of the hunting-gathering population due to the abundance of resources. There were, however, fluctuations in the climatic conditions, with the consequence that wet conditions were followed by dry ones, so that the availability of plants and animals oscillated brusquely.

Paragraph 4: It would appear that the instability of the climatic conditions led populations that had originally been nomadic to settle down and develop a sedentary style of life, which led in turn to population growth and to the need to increase the amount of food available. Farming originated in these conditions. Later on, it became very difficult to change because of the significant expansion of these populations. It could be argued, however, that these conditions are not sufficient to explain the origins of agriculture. Earth had experienced previous periods of climatic change, and yet agriculture had not been developed.


Paragraph 5: It is archaeologist Steven Mithen’s thesis, brilliantly developed in his book The Prehistory of the Mind (1996), that approximately 40,000 years ago the human mind developed cognitive fluidity, that is, the integration of the specializations of the mind: technical, natural history (geared to understanding the behavior and distribution of natural resources), social intelligence, and the linguistic capacity. Cognitive fluidity explains the appearance of art, religion, and sophisticated speech. Once humans possessed such a mind, they were able to find an imaginative solution to a situation of severe economic crisis such as the farming dilemma described earlier. Mithen proposes the existence of four mental elements to account for the emergence of farming: (1) the ability to develop tools that could be used intensively to harvest and process plant resources; (2) the tendency to use plants and animals as the medium to acquire social prestige and power; (3) the tendency to develop “social relationships” with animals structurally similar to those developed with people—specifically, the ability to think of animals as people (anthropomorphism) and of people as animals (totemism); and (4) the tendency to manipulate plants and animals.

Paragraph 6: The fact that some societies domesticated animals and plants, discovered the use of metal tools, became literate, and developed a state should not make us forget that others developed pastoralism or horticulture (vegetable gardening) but remained illiterate and at low levels of productivity; a few entered the modern period as hunting and gathering societies. It is anthropologically important to inquire into the conditions that made some societies adopt agriculture while others remained hunter-gatherers or horticulturalists. However, it should be kept in mind that many societies that knew of agriculture more or less consciously avoided it. Whether Mithen’s explanation is satisfactory is open to contention, and some authors have recently emphasized the importance of other factors.

TPO2: DESERT FORMATION

The deserts, which already occupy approximately a fourth of the Earth’s land surface, have in recent decades been increasing at an alarming pace. The expansion of desertlike conditions into areas where they did not previously exist is called desertification. It has been estimated that an additional one-fourth of the Earth’s land surface is threatened by this process.

Desertification is accomplished primarily through the loss of stabilizing natural vegetation and the subsequent accelerated erosion of the soil by wind and water. In some cases the loose soil is blown completely away, leaving a stony surface. In other cases, the finer particles may be removed, while the sand-sized particles are accumulated to form mobile hills or ridges of sand.

Even in the areas that retain a soil cover, the reduction of vegetation typically results in the loss of the soil’s ability to absorb substantial quantities of water. The impact of raindrops on the loose soil tends to transfer fine clay particles into the tiniest soil spaces, sealing them and producing a surface that allows very little water penetration. Water absorption is greatly reduced; consequently runoff is increased, resulting in accelerated erosion rates. The gradual drying of the soil caused by its diminished ability to absorb water results in the further loss of vegetation, so that a cycle of progressive surface deterioration is established.
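
The cycle described in this paragraph is a positive feedback loop, which a toy iteration can make concrete. Every starting value and coefficient below is invented solely to show the direction of the loop; none has any empirical basis.

# Toy model of the surface-deterioration cycle above. All numbers are
# made up for illustration only.

vegetation = 0.7   # relative cover, after an initial loss of vegetation
absorption = 1.0   # relative ability of the soil to absorb water

for year in range(1, 6):
    # Less vegetation -> fine particles seal soil spaces -> less water absorbed.
    absorption = 0.8 * vegetation + 0.2 * absorption
    # Less absorption -> drier soil -> further loss of vegetation.
    vegetation *= 0.9 + 0.1 * absorption
    print(f"year {year}: vegetation={vegetation:.2f}, absorption={absorption:.2f}")

# Both quantities drift downward together: a cycle of progressive
# surface deterioration.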

In some regions, the increase in desert areas is occurring largely as the result of a trend toward drier climatic conditions. Continued gradual global warming has produced an increase in aridity for some areas over the past few thousand years. The process may be accelerated in subsequent decades if global warming resulting from air pollution seriously increases.

There is little doubt, however, that desertification in most areas results primarily from human activities rather than natural processes. The semiarid lands bordering the deserts exist in a delicate ecological balance and are limited in their potential to adjust to increased environmental pressures. Expanding populations are subjecting the land to increasing pressures to provide them with food and fuel. In wet periods, the land may be able to respond to these stresses. During the dry periods that are common phenomena along the desert margins, though, the pressure on the land is often far in excess of its diminished capacity, and desertification results.

Four specific activities have been identified as major contributors to the desertification processes: overcultivation, overgrazing, firewood gathering, and overirrigation. The cultivation of crops has expanded into progressively drier regions as population densities have grown. These regions are especially likely to have periods of severe dryness, so that crop failures are common. Since the raising of most crops necessitates the prior removal of the natural vegetation, crop failures leave extensive tracts of land devoid of a plant cover and susceptible to wind and water erosion.

The raising of livestock is a major economic activity in semiarid lands, where grasses are generally the dominant type of natural vegetation. The consequences of an excessive number of livestock grazing in an area are the reduction of the vegetation cover and the trampling and pulverization of the soil. This is usually followed by the drying of the soil and accelerated erosion.

Firewood is the chief fuel used for cooking and heating in many countries. The increased pressures of expanding populations have led to the removal of woody plants so that many cities and towns are surrounded by large areas completely lacking in trees and shrubs. The increasing use of dried animal waste as a substitute fuel has also hurt the soil because this valuable soil conditioner and source of plant nutrients is no longer being returned to the land.

The final major human cause of desertification is soil salinization resulting from overirrigation. Excess water from irrigation sinks down into the water table. If no drainage system exists, the water table rises, bringing dissolved salts to the surface. The water evaporates and the salts are left behind, creating a white crustal layer that prevents air and water from reaching the underlying soil.

The extreme seriousness of desertification results from the vast areas of land and the tremendous numbers of people affected, as well as from the great difficulty of reversing or even slowing the process. Once the soil has been removed by erosion, only the passage of centuries or millennia will enable new soil to form. In areas where considerable soil still remains, though, a rigorously enforced program of land protection and cover-crop planting may make it possible to reverse the present deterioration of the surface.

TPO2: Early Cinema

The cinema did not emerge as a form of mass consumption until its technology evolved from the initial “peepshow” format to the point where images were projected on a screen in a darkened theater. In the peepshow format, a film was viewed through a small opening in a machine that was created for that purpose. Thomas Edison’s peepshow device, the Kinetoscope, was introduced to the public in 1894. It was designed for use in Kinetoscope parlors, or arcades, which contained only a few individual machines and permitted only one customer to view a short, 50-foot film at any one time. The first Kinetoscope parlors contained five machines. For the price of 25 cents (or 5 cents per machine), customers moved from machine to machine to watch five different films (or, in the case of famous prizefights, successive rounds of a single fight).

These Kinetoscope arcades were modeled on phonograph parlors, which had proven successful for Edison several years earlier. In the phonograph parlors, customers listened to recordings through individual ear tubes, moving from one machine to the next to hear different recorded speeches or pieces of music. The Kinetoscope parlors functioned in a similar way. Edison was more interested in the sale of Kinetoscopes (for roughly $1,000 apiece) to these parlors than in the films that would be run in them (which cost approximately $10 to $15 each). He refused to develop projection technology, reasoning that if he made and sold projectors, then exhibitors would purchase only one machine, a projector, from him instead of several.

Exhibitors, however, wanted to maximize their profits, which they could do more readily by projecting a handful of films to hundreds of customers at a time (rather than one at a time) and by charging 25 to 50 cents admission. About a year after the opening of the first Kinetoscope parlor in 1894, showmen such as Louis and Auguste Lumiere, Thomas Armat and Charles Francis Jenkins, and Orville and Woodville Latham (with the assistance of Edison’s former assistant, William Dickson) perfected projection devices. These early projection devices were used in vaudeville theaters, legitimate theaters, local town halls, makeshift storefront theaters, fairgrounds, and amusement parks to show films to a mass audience.
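
The profit logic in this paragraph can be checked against the prices quoted earlier. In the sketch below, the peepshow price and the admission range come from the passage; the audience size of 300 is an assumption standing in for "hundreds of customers at a time."

# Back-of-the-envelope comparison of the two exhibition models.

peepshow_price = 0.05     # dollars per customer per film (5 cents a machine)
admission_price = 0.25    # dollars, low end of the quoted 25-50 cent range
audience_size = 300       # assumed house size for a projected show

peepshow_take = peepshow_price * 1               # one viewer at a time
projection_take = admission_price * audience_size

print(f"peepshow: ${peepshow_take:.2f} per viewing")      # $0.05
print(f"projection: ${projection_take:.2f} per showing")  # $75.00
# One projected showing at the low-end price equals 1,500 peepshow viewings.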

With the advent of projection in 1895-1896, motion pictures became the ultimate form of mass consumption. Previously, large audiences had viewed spectacles at the theater, where vaudeville, popular dramas, musical and minstrel shows, classical plays, lectures, and slide-and-lantern shows had been presented to several hundred spectators at a time. But the movies differed significantly from these other forms of entertainment, which depended on either live performance or (in the case of the slide-and-lantern shows) the active involvement of a master of ceremonies who assembled the final program.

Although early exhibitors regularly accompanied movies with live acts, the substance of the movies themselves is mass-produced, prerecorded material that can easily be reproduced by theaters with little or no active participation by the exhibitor. Even though early exhibitors shaped their film programs by mixing films and other entertainments together in whichever way they thought would be most attractive to audiences or by accompanying them with lectures, their creative control remained limited. What audiences came to see was the technological marvel of the movies; the lifelike reproduction of the commonplace motion of trains, of waves striking the shore, and of people walking in the street; and the magic made possible by trick photography and the manipulation of the camera.

With the advent of projection, the viewer’s relationship with the image was no longer private, as it had been with earlier peepshow devices such as the Kinetoscope and the Mutoscope, which was a similar machine that reproduced motion by means of successive images on individual photographic cards instead of on strips of celluloid. It suddenly became public, an experience that the viewer shared with dozens, scores, and even hundreds of others. At the same time, the image that the spectator looked at expanded from the minuscule peepshow dimensions of 1 or 2 inches (in height) to the life-size proportions of 6 or 9 feet.

TPO3

TPO3: ARCHITECTURE

Architecture is the art and science of designing structures that organize and enclose space for practical and symbolic purposes. Because architecture grows out of human needs and aspirations, it clearly communicates cultural values. Of all the visual arts, architecture affects our lives most directly for it determines the character of the human environment in major ways.

Architecture is a three-dimensional form. It utilizes space, mass, texture, line, light, and color. To be architecture, a building must achieve a working harmony with a variety of elements. Humans instinctively seek structures that will shelter and enhance their way of life. It is the work of architects to create buildings that are not simply constructions but also offer inspiration and delight. Buildings contribute to human life when they provide shelter, enrich space, complement their site, suit the climate, and are economically feasible. The client who pays for the building and defines its function is an important member of the architectural team. The mediocre design of many contemporary buildings can be traced to both clients and architects.

In order for the structure to achieve the size and strength necessary to meet its purpose, architecture employs methods of support that, because they are based on physical laws, have changed little since people first discovered them, even while building materials have changed dramatically. The world’s architectural structures have also been devised in relation to the objective limitations of materials. Structures can be analyzed in terms of how they deal with downward forces created by gravity. They are designed to withstand the forces of compression (pushing together), tension (pulling apart), bending, or a combination of these in different parts of the structure.

Every development in architecture has been the result of major technological changes. Materials and methods of construction are integral parts of the design of architectural structures. In earlier times it was necessary to design structural systems suitable for the materials that were available, such as wood, stone, and brick. Today technology has progressed to the point where it is possible to invent new building materials to suit the type of structure desired. Enormous changes in materials and techniques of construction within the last few generations have made it possible to enclose space with much greater ease and speed and with a minimum of material. Progress in this area can be measured by the difference in weight between buildings built now and those of comparable size built one hundred years ago.

Modern architectural forms generally have three separate components comparable to elements of the human body: a supporting skeleton or frame; an outer skin enclosing the interior spaces; and equipment, similar to the body’s vital organs and systems. The equipment includes plumbing, electrical wiring, hot water, and air-conditioning. Of course in early architecture—such as igloos and adobe structures—there was no such equipment, and the skeleton and skin were often one.

Much of the world’s great architecture has been constructed of stone because of its beauty, permanence, and availability. In the past, whole cities grew from the arduous task of cutting and piling stone upon stone. Some of the world’s finest stone architecture can be seen in the ruins of the ancient Inca city of Machu Picchu high in the eastern Andes Mountains of Peru. The doorways and windows are made possible by placing over the open spaces thick stone beams that support the weight from above. A structural invention had to be made before the physical limitations of stone could be overcome and new architectural forms could be created. That invention was the arch, a curved structure originally made of separate stone or brick segments. The arch was used by the early cultures of the Mediterranean area chiefly for underground drains, but it was the Romans who first developed and used the arch extensively in aboveground structures. Roman builders perfected the semicircular arch made of separate blocks of stone. As a method of spanning space, the arch can support greater weight than a horizontal beam. It works in compression to divert the weight above it out to the sides, where the weight is borne by the vertical elements on either side of the arch. The arch is among the many important structural breakthroughs that have characterized architecture throughout the centuries.

 

TPO3: The Long-Term Stability of Ecosystems

Plant communities assemble themselves flexibly, and their particular structure depends on the specific history of the area. Ecologists use the term “succession” to refer to the changes that happen in plant communities and ecosystems over time. The first community in a succession is called a pioneer community, while the long-lived community at the end of succession is called a climax community. Pioneer and successional plant communities are said to change over periods from 1 to 500 years. These changes—in plant numbers and the mix of species—are cumulative. Climax communities themselves change but over periods of time greater than about 500 years.

An ecologist who studies a pond today may well find it relatively unchanged in a year’s time. Individual fish may be replaced, but the number of fish will tend to be the same from one year to the next. We can say that the properties of an ecosystem are more stable than the individual organisms that compose the ecosystem.

At one time, ecologists believed that species diversity made ecosystems stable. They believed that the greater the diversity the more stable the ecosystem. Support for this idea came from the observation that long-lasting climax communities usually have more complex food webs and more species diversity than pioneer communities. Ecologists concluded that the apparent stability of climax ecosystems depended on their complexity. To take an extreme example, farmlands dominated by a single crop are so unstable that one year of bad weather or the invasion of a single pest can destroy the entire crop. In contrast, a complex climax community, such as a temperate forest, will tolerate considerable damage from weather or pests.

The question of ecosystem stability is complicated, however. The first problem is that ecologists do not all agree what “stability” means. Stability can be defined as simply lack of change. In that case, the climax community would be considered the most stable, since, by definition, it changes the least over time. Alternatively, stability can be defined as the speed with which an ecosystem returns to a particular form following a major disturbance, such as a fire. This kind of stability is also called resilience. In that case, climax communities would be the most fragile and the least stable, since they can require hundreds of years to return to the climax state.

Even the kind of stability defined as simple lack of change is not always associated with maximum diversity. At least in temperate zones, maximum diversity is often found in mid-successional stages, not in the climax community. Once a redwood forest matures, for example, the kinds of species and the number of individuals growing on the forest floor are reduced. In general, diversity, by itself, does not ensure stability. Mathematical models of ecosystems likewise suggest that diversity does not guarantee ecosystem stability—just the opposite, in fact. A more complicated system is, in general, more likely than a simple system to break down. A fifteen-speed racing bicycle is more likely to break down than a child’s tricycle.

Ecologists are especially interested to know what factors contribute to the resilience of communities because climax communities all over the world are being severely damaged or destroyed by human activities. The destruction caused by the volcanic explosion of Mount St. Helens, in the northwestern United States, for example, pales in comparison to the destruction caused by humans. We need to know what aspects of a community are most important to the community’s resistance to destruction, as well as its recovery.

Many ecologists now think that the relative long-term stability of climax communities comes not from diversity but from the “patchiness” of the environment; an environment that varies from place to place supports more kinds of organisms than an environment that is uniform. A local population that goes extinct is quickly replaced by immigrants from an adjacent community. Even if the new population is of a different species, it can approximately fill the niche vacated by the extinct population and keep the food web intact.


TPO3: Depletion of the Ogallala Aquifer

The vast grasslands of the High Plains in the central United States were settled by farmers and ranchers in the 1880’s. This region has a semiarid climate, and for 50 years after its settlement, it supported a low-intensity agricultural economy of cattle ranching and wheat farming. In the early twentieth century, however, it was discovered that much of the High Plains was underlain by a huge aquifer (a rock layer containing large quantities of groundwater). This aquifer was named the Ogallala aquifer after the Ogallala Sioux Indians, who once inhabited the region.

The Ogallala aquifer is a sandstone formation that underlies some 583,000 square kilometers of land extending from northwestern Texas to southern South Dakota. Water from rains and melting snows has been accumulating in the Ogallala for the past 30,000 years. Estimates indicate that the aquifer contains enough water to fill Lake Huron, but unfortunately, under the semiarid climatic conditions that presently exist in the region, rates of addition to the aquifer are minimal, amounting to about half a centimeter a year.
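
It is worth seeing what the quoted recharge rate amounts to in volume. The sketch below simply multiplies the passage's two figures, treating the half-centimeter rate as uniform over the whole 583,000-square-kilometer extent (a simplification the passage itself does not make).

# Rough arithmetic from the figures above: total annual recharge if
# half a centimeter of water accumulates over the aquifer's full extent.

area_km2 = 583_000            # land underlain by the Ogallala
recharge_m_per_yr = 0.005     # about half a centimeter a year

area_m2 = area_km2 * 1_000_000           # 1 km^2 = 1,000,000 m^2
recharge_m3_per_yr = area_m2 * recharge_m_per_yr

print(f"annual recharge: about {recharge_m3_per_yr / 1e9:.1f} cubic km")  # ~2.9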

The first wells were drilled into the Ogallala during the drought years of the early 1930’s. The ensuing rapid expansion of irrigation agriculture, especially from the 1950’s onward, transformed the economy of the region. More than 100,000 wells now tap the Ogallala. Modern irrigation devices, each capable of spraying 4.5 million liters of water a day, have produced a landscape dominated by geometric patterns of circular green islands of crops. Ogallala water has enabled the High Plains region to supply significant amounts of the cotton, sorghum, wheat, and corn grown in the United States. In addition, 40 percent of American grain-fed beef cattle are fattened here.

This unprecedented development of a finite groundwater resource with an almost negligible natural recharge rate—that is, virtually no natural water source to replenish the water supply—has caused water tables in the region to fall drastically. In the 1930’s, wells encountered plentiful water at a depth of about 15 meters; currently, they must be dug to depths of 45 to 60 meters or more. In places, the water table is declining at a rate of a meter a year, necessitating the periodic deepening of wells and the use of ever-more-powerful pumps. It is estimated that at current withdrawal rates, much of the aquifer will run dry within 40 years. The situation is most critical in Texas, where the climate is driest, the greatest amount of water is being pumped, and the aquifer contains the least water. It is projected that the remaining Ogallala water will, by the year 2030, support only 35 to 40 percent of the irrigated acreage in Texas that was supported in 1980.
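
A short sanity check of these numbers, using only the passage's own figures; no hydrology beyond the quoted rates is assumed.

# Comparing the passage's decline and recharge rates.

decline_m_per_yr = 1.0      # water-table drop "in places"
recharge_m_per_yr = 0.005   # about half a centimeter a year

print(f"decline outpaces recharge roughly "
      f"{decline_m_per_yr / recharge_m_per_yr:.0f}-fold")  # ~200-fold

# At a meter a year, a further 40-meter drop takes about 40 years,
# consistent with the estimate that much of the aquifer will run dry
# within 40 years.
print(f"years for another 40 m of decline: {40 / decline_m_per_yr:.0f}")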

The reaction of farmers to the inevitable depletion of the Ogallala varies. Many have been attempting to conserve water by irrigating less frequently or by switching to crops that require less water. Others, however, have adopted the philosophy that it is best to use the water while it is still economically profitable to do so and to concentrate on high-value crops such as cotton. The incentive of the farmers who wish to conserve water is reduced by their knowledge that many of their neighbors are profiting by using great amounts of water, and in the process are drawing down the entire region’s water supplies.

In the face of the upcoming water supply crisis, a number of grandiose schemes have been developed to transport vast quantities of water by canal or pipeline from the Mississippi, the Missouri, or the Arkansas rivers. Unfortunately, the cost of water obtained through any of these schemes would increase pumping costs at least tenfold, making the cost of irrigated agricultural products from the region uncompetitive on the national and international markets. Somewhat more promising have been recent experiments for releasing capillary water (water in the soil) above the water table by injecting compressed air into the ground. Even if this process proves successful, however, it would almost triple water costs. Genetic engineering also may provide a partial solution, as new strains of drought-resistant crops continue to be developed. Whatever the final answer to the water crisis may be, it is evident that within the High Plains, irrigation water will never again be the abundant, inexpensive resource it was during the agricultural boom years of the mid-twentieth century.

 

 

TPO4

TPO4: Cave Art in Europe

The earliest discovered traces of art are beads and carvings, and then paintings, from sites dating back to the Upper Paleolithic period. We might expect that early artistic efforts would be crude, but the cave paintings of Spain and southern France show a marked degree of skill. So do the naturalistic paintings on slabs of stone excavated in southern Africa. Some of those slabs appear to have been painted as much as 28,000 years ago, which suggests that painting in Africa is as old as painting in Europe. But painting may be even older than that. The early Australians may have painted on the walls of rock shelters and cliff faces at least 30,000 years ago, and maybe as much as 60,000 years ago.

    The researchers Peter Ucko and Andree Rosenfeld identified three principal locations of paintings in the caves of western Europe: (1) in obviously inhabited rock shelters and cave entrances; (2) in galleries immediately off the inhabited areas of caves; and (3) in the inner reaches of caves, whose difficulty of access has been interpreted by some as a sign that magical-religious activities were performed there.

    The subjects of the paintings are mostly animals. The paintings rest on bare walls, with no backdrops or environmental trappings. Perhaps, like many contemporary peoples, Upper Paleolithic men and women believed that the drawing of a human image could cause death or injury, and if that were indeed their belief, it might explain why human figures are rarely depicted in cave art. Another explanation for the focus on animals might be that these people sought to improve their luck at hunting. This theory is suggested by evidence of chips in the painted figures, perhaps made by spears thrown at the drawings. But if improving their hunting luck was the chief motivation for the paintings, it is difficult to explain why only a few show signs of having been speared. Perhaps the paintings were inspired by the need to increase the supply of animals. Cave art seems to have reached a peak toward the end of the Upper Paleolithic period, when the herds of game were decreasing.

    The particular symbolic significance of the cave paintings in southwestern France is more explicitly revealed, perhaps, by the results of a study conducted by researchers Patricia Rice and Ann Paterson. The data they present suggest that the animals portrayed in the cave paintings were mostly the ones that the painters preferred for meat and for materials such as hides. For example, wild cattle (bovines) and horses are portrayed more often than we would expect by chance, probably because they were larger and heavier (meatier) than other animals in the environment. In addition, the paintings mostly portray animals that the painters may have feared the most because of their size, speed, natural weapons such as tusks and horns, and the unpredictability of their behavior. That is, mammoths, bovines, and horses are portrayed more often than deer and reindeer. Thus, the paintings are consistent with the idea that the art is related to the importance of hunting in the economy of Upper Paleolithic people. Consistent with this idea, according to the investigators, is the fact that the art of the cultural period that followed the Upper Paleolithic also seems to reflect how people got their food. But in that period, when getting food no longer depended on hunting large game animals (because they were becoming extinct), the art ceased to focus on portrayals of animals.

    Upper Paleolithic art was not confined to cave paintings. Many shafts of spears and similar objects were decorated with figures of animals. The anthropologist Alexander Marshack has an interesting interpretation of some of the engravings made during the Upper Paleolithic. He believes that as far back as 30,000 B.C., hunters may have used a system of notation, engraved on bone and stone, to mark phases of the Moon. If this is true, it would mean that Upper Paleolithic people were capable of complex thought and were consciously aware of their environment. In addition to other artworks, figurines representing the human female in exaggerated form have also been found at Upper Paleolithic sites. It has been suggested that these figurines were an ideal type or an expression of a desire for fertility.

TPO4: Deer Populations of the Puget Sound

Two species of deer have been prevalent in the Puget Sound area of Washington State in the Pacific Northwest of the United States. The black-tailed deer, lowland, west-side cousin of the mule deer of eastern Washington, is now the most common. The other species, the Columbian white-tailed deer, was in earlier times common in the open prairie country; it is now restricted to the low, marshy islands and flood plains along the lower Columbia River.

    Nearly any kind of plant of the forest understory can be part of a deer’s diet. Where the forest inhibits the growth of grass and other meadow plants, the black-tailed deer browses on huckleberry, salal, dogwood, and almost any other shrub or herb. But this is fair-weather feeding. What keeps the black-tailed deer alive in the harsher seasons of plant decay and dormancy? One compensation for not hibernating is the built-in urge to migrate. Deer may move from high-elevation browse areas in summer down to the lowland areas in late fall. Even with snow on the ground, the high bushy understory is exposed; also snow and wind bring down leafy branches of cedar, hemlock, red alder, and other arboreal fodder.

    The numbers of deer have fluctuated markedly since the entry of Europeans into Puget Sound country. The early explorers and settlers told of abundant deer in the early 1800s and yet almost in the same breath bemoaned the lack of this succulent game animal. Lewis and Clark, the famous explorers of the North American frontier, had experienced great difficulty finding game west of the Rockies, and not until the second of December did they kill their first elk. To keep 40 people alive that winter, they consumed approximately 150 elk and 20 deer. And when game moved out of the lowlands in early spring, the expedition decided to return east rather than face possible starvation. Later on in the early years of the nineteenth century, when Fort Vancouver became the headquarters of the Hudson’s Bay Company, deer populations continued to fluctuate. David Douglas, Scottish botanical explorer of the 1830s, found a disturbing change in the animal life around the fort during the period between his first visit in 1825 and his final contact with the fort in 1832. A recent Douglas biographer states: “The deer which once picturesquely dotted the meadows around the fort were gone [in 1832], hunted to extermination in order to protect the crops.”

    Reduction in numbers of game should have boded ill for their survival in later times. A worsening of the plight of deer was to be expected as settlers encroached on the land, logging, burning, and clearing, eventually replacing a wilderness landscape with roads, cities, towns, and factories. No doubt the numbers of deer declined still further. Recall the fate of the Columbian white-tailed deer, now in a protected status. But for the black-tailed deer, human pressure has had just the opposite effect. Wildlife zoologist Helmut Buechner (1953), in reviewing the nature of biotic changes in Washington through recorded time, says that “since the early 1940s, the state has had more deer than at any other time in its history, the winter population fluctuating around approximately 320,000 deer (mule and black-tailed deer), which will yield about 65,000 of either sex and any age annually for an indefinite period.”
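
Buechner's figures imply a steady harvest fraction, which the short calculation below makes explicit; it uses only the numbers quoted in the paragraph.

# The annual yield implied by Buechner's population figures.

winter_population = 320_000   # mule and black-tailed deer
annual_yield = 65_000         # deer of either sex and any age, per year

print(f"implied sustainable harvest: {annual_yield / winter_population:.0%}")  # ~20%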

    The causes of this population rebound are consequences of other human actions. First, the major predators of deer—wolves, cougar, and lynx—have been greatly reduced in numbers. Second, conservation has been insured by limiting times for and types of hunting. But the most profound reason for the restoration of high population numbers has been the fate of the forests. Great tracts of lowland country deforested by logging, fire, or both have become ideal feeding grounds for deer. In addition to finding an increase of suitable browse, like huckleberry and vine maple, Arthur Einarsen, longtime game biologist in the Pacific Northwest, found the quality of browse in the open areas to be substantially more nutritive. The protein content of shade-grown vegetation, for example, was much lower than that for plants grown in clearings.

TPO4: Petroleum Resources

Petroleum, consisting of crude oil and natural gas, seems to originate from organic matter in marine sediment. Microscopic organisms settle to the seafloor and accumulate in marine mud. The organic matter may partially decompose, using up the dissolved oxygen in the sediment. As soon as the oxygen is gone, decay stops and the remaining organic matter is preserved.    

    Continued sedimentation—the process of deposits’ settling on the sea bottom—buries the organic matter and subjects it to higher temperatures and pressures, which convert the organic matter to oil and gas. As muddy sediments are pressed together, the gas and small droplets of oil may be squeezed out of the mud and may move into sandy layers nearby. Over long periods of time (millions of years), accumulations of gas and oil can collect in the sandy layers. Both oil and gas are less dense than water, so they generally tend to rise upward through water-saturated rock and sediment.    

Oil pools are valuable underground accumulations of oil, and oil fields are regions underlain by one or more oil pools. When an oil pool or field has been discovered, wells are drilled into the ground. Permanent towers, called derricks, used to be built to handle the long sections of drilling pipe. Now, portable drilling machines are set up and are then dismantled and removed. When the well reaches a pool, oil usually rises up the well because of its density difference with water beneath it or because of the pressure of expanding gas trapped above it. Although this rise of oil is almost always carefully controlled today, spouts of oil, or gushers, were common in the past. Gas pressure gradually dies out, and oil is pumped from the well. Water or steam may be pumped down adjacent wells to help push the oil out. At a refinery, the crude oil from underground is separated into natural gas, gasoline, kerosene, and various oils. Petrochemicals such as dyes, fertilizer, and plastic are also manufactured from the petroleum.

 

As oil becomes increasingly difficult to find, the search for it is extended into more-hostile environments. The development of the oil field on the North Slope of Alaska and the construction of the Alaska pipeline are examples of the great expense and difficulty involved in new oil discoveries. Offshore drilling platforms extend the search for oil to the ocean’s continental shelves—those gently sloping submarine regions at the edges of the continents. More than one-quarter of the world’s oil and almost one-fifth of the world’s natural gas come from offshore, even though offshore drilling is six to seven times more expensive than drilling on land. A significant part of this oil and gas comes from under the North Sea between Great Britain and Norway.

Of course, there is far more oil underground than can be recovered. It may be in a pool too small or too far from a potential market to justify the expense of drilling. Some oil lies under regions where drilling is forbidden, such as national parks or other public lands. Even given the best extraction techniques, only about 30 to 40 percent of the oil in a given pool can be brought to the surface. The rest is far too difficult to extract and has to remain underground. 
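
The 30 to 40 percent recovery figure is easy to put in concrete terms. In the sketch below, the size of the pool is entirely hypothetical; only the recovery range comes from the passage.

# What a 30-40 percent recovery factor means for a hypothetical oil pool.

oil_in_place_barrels = 100_000_000   # invented pool size, for illustration

for recovery_factor in (0.30, 0.40):
    recoverable = oil_in_place_barrels * recovery_factor
    left_behind = oil_in_place_barrels - recoverable
    print(f"at {recovery_factor:.0%} recovery: {recoverable:,.0f} barrels out, "
          f"{left_behind:,.0f} remain underground")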

    Moreover, getting petroleum out of the ground and from under the sea and to the consumer can create environmental problems anywhere along the line. Pipelines carrying oil can be broken by faults or landslides, causing serious oil spills. Spillage from huge oil-carrying cargo ships, called tankers, involved in collisions or accidental groundings (such as the one off Alaska in 1989) can create oil slicks at sea. Offshore platforms may also lose oil, creating oil slicks that drift ashore and foul the beaches, harming the environment. Sometimes, the ground at an oil field may subside as oil is removed. The Wilmington field near Long Beach, California, has subsided nine meters in 50 years; protective barriers have had to be built to prevent seawater from flooding the area. Finally, the refining and burning of petroleum and its products can cause air pollution. Advancing technology and strict laws, however, are helping control some of these adverse environmental effects.  
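
The Wilmington subsidence figures reduce to a simple average rate, computed below from the passage's numbers.

# Average subsidence rate at the Wilmington field near Long Beach.

subsidence_m = 9    # total subsidence reported
years = 50          # over this period

print(f"average subsidence: {subsidence_m / years * 100:.0f} cm per year")  # 18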

TPO5

TPO5: Minerals and Plants

Research has shown that certain minerals are required by plants for normal growth and development. The soil is the source of these minerals, which are absorbed by the plant with the water from the soil. Even nitrogen, which is a gas in its elemental state, is normally absorbed from the soil as nitrate ions. Some soils are notoriously deficient in micronutrients and are therefore unable to support most plant life. So-called serpentine soils, for example, are deficient in calcium, and only plants able to tolerate low levels of this mineral can survive. In modern agriculture, mineral depletion of soils is a major concern, since harvesting crops interrupts the recycling of nutrients back to the soil.

Mineral deficiencies can often be detected by specific symptoms such as chlorosis (loss of chlorophyll resulting in yellow or white leaf tissues), necrosis (isolated dead patches), anthocyanin formation (development of deep red pigmentation of leaves or stem), stunted growth, and development of woody tissues in an herbaceous plant. Soils are most commonly deficient in nitrogen and phosphorus. Nitrogen-deficient plants exhibit many of the symptoms just described. Leaves develop chlorosis, stems are short and slender, and anthocyanin discoloration occurs on stems, petioles, and lower leaf surfaces. Phosphorus-deficient plants are often stunted, with leaves turning a characteristic dark green often with the accumulation of anthocyanin. Typically, older leaves are affected first as the phosphorus is mobilized to young growing tissue. Iron deficiency is characterized by chlorosis between veins in young leaves.

Much of the research on nutrient deficiencies is based on growing plants hydroponically, that is, in soilless liquid nutrient solutions. This technique allows researchers to create solutions that selectively omit certain nutrients and then observe the resulting effects on the plants. Hydroponics has applications beyond basic research, since it facilitates the growing of greenhouse vegetables during winter. Aeroponics, a technique in which plants are suspended and the roots misted with a nutrient solution, is another method for growing plants without soil.

While mineral deficiencies can limit the growth of plants, an overabundance of certain minerals can be toxic and can also limit growth. Saline soils, which have high concentrations of sodium chloride and other salts, limit plant growth, and research continues to focus on developing salt-tolerant varieties of agricultural crops. Research has focused on the toxic effects of heavy metals such as lead, cadmium, mercury, and aluminum; however, even copper and zinc, which are essential elements, can become toxic in high concentrations. Although most plants cannot survive in these soils, certain plants have the ability to tolerate high levels of these minerals.

Scientists have known for some time that certain plants, called hyperaccumulators, can concentrate minerals at levels a hundredfold or greater than normal. A survey of known hyperaccumulators identified that 75 percent of them amassed nickel; cobalt, copper, zinc, manganese, lead, and cadmium are other minerals of choice. Hyperaccumulators run the entire range of the plant world. They may be herbs, shrubs, or trees. Many members of the mustard family, spurge family, legume family, and grass family are top hyperaccumulators. Many are found in tropical and subtropical areas of the world, where accumulation of high concentrations of metals may afford some protection against plant-eating insects and microbial pathogens.

Only recently have investigators considered using these plants to clean up soil and waste sites that have been contaminated by toxic levels of heavy metals – an environmentally friendly approach known as phytoremediation. This scenario begins with the planting of hyperaccumulating species in the target area, such as an abandoned mine or an irrigation pond contaminated by runoff. Toxic minerals would first be absorbed by roots but later relocated to the stem and leaves. A harvest of the shoots would remove the toxic compounds off site to be burned or composted to recover the metal for industrial uses. After several years of cultivation and harvest, the site would be restored at a cost much lower than the price of excavation and reburial, the standard practice for remediation of contaminated soils. For example, in field trials, the plant alpine pennycress removed zinc and cadmium from soils near a zinc smelter, and Indian mustard, native to Pakistan and India, has been effective in reducing levels of selenium salts by 50 percent in contaminated soils.

TPO5: The Origin of the Pacific Island People

     The greater Pacific region, traditionally called Oceania, consists of three cultural areas: Melanesia, Micronesia, and Polynesia. Melanesia, in the southwest Pacific, contains the large islands of New Guinea, the Solomons, Vanuatu, and New Caledonia. Micronesia, the area north of Melanesia, consists primarily of small scattered islands. Polynesia is the central Pacific area in the great triangle defined by Hawaii, Easter Island, and New Zealand. Before the arrival of Europeans, the islands in the two largest cultural areas, Polynesia and Micronesia, together contained a population estimated at 700,000.

     Speculation on the origin of these Pacific islanders began as soon as outsiders encountered them; in the absence of solid linguistic, archaeological, and biological data, many fanciful and mutually exclusive theories were devised. Pacific islanders were variously thought to have come from North America, South America, Egypt, Israel, and India, as well as Southeast Asia. Many older theories implicitly deprecated the navigational abilities and overall cultural creativity of the Pacific islanders. For example, British anthropologists G. Elliot Smith and W. J. Perry assumed that only Egyptians would have been skilled enough to navigate and colonize the Pacific. They inferred that the Egyptians even crossed the Pacific to found great civilizations of the New World (North and South America). In 1947, Norwegian adventurer Thor Heyerdahl drifted on a balsa-log raft westward with the winds and currents across the Pacific from South America to prove his theory that Pacific islanders were Native Americans (also called American Indians). Later Heyerdahl suggested that the Pacific was peopled by three migrations: by Native Americans from the Pacific Northwest of North America drifting to Hawaii, by Peruvians drifting to Easter Island, and by Melanesians. In 1969 he crossed the Atlantic in an Egyptian-style reed boat to prove Egyptian influences in the Americas. Contrary to these theorists, the overwhelming evidence of physical anthropology, linguistics, and archaeology shows that the Pacific islanders came from Southeast Asia and were skilled enough as navigators to sail against the prevailing winds and currents.

     The basic cultural requirements for the successful colonization of the Pacific islands include the appropriate boat-building, sailing, and navigation skills to get to the islands in the first place, domesticated plants and gardening skills suited to often marginal conditions, and a varied inventory of fishing implements and techniques. It is now generally believed that these prerequisites originated with peoples speaking Austronesian languages (a group of several hundred related languages) and began to emerge in Southeast Asia by about 5000 B.C.E. The culture of that time, based on archaeology and linguistic reconstruction, is assumed to have had a broad inventory of cultivated plants including taro, yams, banana, sugarcane, breadfruit, coconut, sago, and rice. Just as important, the culture also possessed the basic foundation for an effective maritime adaptation, including outrigger canoes and a variety of fishing techniques that could be effective for overseas voyaging.

     Contrary to the arguments of some that much of the Pacific was settled by Polynesians accidentally marooned after being lost and adrift, it seems reasonable that this feat was accomplished by deliberate colonization expeditions that set out fully stocked with food and domesticated plants and animals. Detailed studies of the winds and currents using computer simulations suggest that drifting canoes would have been a most unlikely means of colonizing the Pacific. These expeditions were likely driven by population growth and political dynamics on the home islands, as well as the challenge and excitement of exploring unknown waters. Because all Polynesians, Micronesians, and many Melanesians speak Austronesian languages and grow crops derived from Southeast Asia, all these peoples most certainly derived from that region and not the New World or elsewhere. The undisputed pre-Columbian presence in Oceania of the sweet potato, which is a New World domesticate, has sometimes been used to support Heyerdahl’s “American Indians in the Pacific” theories. However, this is one plant out of a long list of Southeast Asian domesticates. As Patrick Kirch, an American anthropologist, points out, rather than being brought by rafting South Americans, sweet potatoes might just as easily have been brought back by returning Polynesian navigators who could have reached the west coast of South America.

TPO5: The Cambrian Explosion

      The geologic timescale is marked by significant geologic and biological events, including the origin of Earth about 4.6 billion years ago, the origin of life about 3.5 billion years ago, the origin of eukaryotic life-forms (living things that have cells with true nuclei) about 1.5 billion years ago, and the origin of animals about 0.6 billion years ago. The last event marks the beginning of the Cambrian period. Animals originated relatively late in the history of Earth—in only the last 10 percent of Earth’s history. During a geologically brief 100-million-year period, all modern animal groups (along with other animals that are now extinct) evolved. This rapid origin and diversification of animals is often referred to as “the Cambrian explosion”.

      Scientists have asked important questions about this explosion for more than a century. Why did it occur so late in the history of Earth? The origin of multicellular forms of life seems a relatively simple step compared to the origin of life itself. Why does the fossil record not document the series of evolutionary changes during the evolution of animals? Why did animal life evolve so quickly? Paleontologists continue to search the fossil record for answers to these questions.

       One interpretation regarding the absence of fossils during this important 100-million-year period is that early animals were soft bodied and simply did not fossilize. Fossilization of soft-bodied animals is less likely than fossilization of hard-bodied animals, but it does occur. Conditions that promote fossilization of soft-bodied animals include very rapid covering by sediments that create an environment that discourages decomposition. In fact, fossil beds containing soft-bodied animals have been known for many years. 

      The Ediacara fossil formation, which contains the oldest known animal fossils, consists exclusively of soft-bodied forms. Although named after a site in Australia, the Ediacara formation is worldwide in distribution and dates to Precambrian times. This 700-million-year-old formation gives few clues to the origins of modern animals, however, because paleontologists believe it represents an evolutionary experiment that failed. It contains no ancestors of modern animal groups.

      A slightly younger fossil formation containing animal remains is the Tommotian formation, named after a locale in Russia. It dates to the very early Cambrian period, and it also contains only soft-bodied forms. At one time, the animals present in these fossil beds were assigned to various modern animal groups, but most paleontologists now agree that all Tommotian fossils represent unique body forms that arose in the early Cambrian period and disappeared before the end of the period, leaving no descendants in modern animal groups.

      A third fossil formation containing both soft-bodied and hard-bodied animals provides evidence of the result of the Cambrian explosion. This fossil formation, called the Burgess Shale, is in Yoho National Park in the Canadian Rocky Mountains of British Columbia. Shortly after the Cambrian explosion, mud slides rapidly buried thousands of marine animals under conditions that favored fossilization. These fossil beds provide evidence of about 32 modern animal groups, plus about 20 other animal body forms that are so different from any modern animals that they cannot be assigned to any one of the modern groups. These unassignable animals include a large swimming predator called Anomalocaris and a soft-bodied animal called Wiwaxia, which ate detritus or algae. The Burgess Shale formation also has fossils of many extinct representatives of modern animal groups. For example, a well-known Burgess Shale animal called Sidneyia is a representative of a previously unknown group of arthropods (a category of animals that includes insects, spiders, mites, and crabs).      

Fossil formations like the Burgess Shale show that evolution cannot always be thought of as a slow progression. The Cambrian explosion involved rapid evolutionary diversification, followed by the extinction of many unique animals. Why was this evolution so rapid? No one really knows. Many zoologists believe that it was because so many ecological niches were available with virtually no competition from existing species. Will zoologists ever know the evolutionary sequences in the Cambrian explosion? Perhaps another ancient fossil bed of soft-bodied animals from 600-million-year-old seas is awaiting discovery.

TPO6

TPO6: Powering the industrial revolution

In Britain, one of the most dramatic changes of the Industrial Revolution was the harnessing of power. Until the reign of George III (1760-1820), available sources of power for work and travel had not increased since the Middle Ages. There were three sources of power: animal or human muscles; the wind, operating on sail or windmill; and running water. Only the last of these was suited at all to the continuous operating of machines, and although waterpower abounded in Lancashire and Scotland and ran grain mills as well as textile mills, it had one great disadvantage: streams flowed where nature intended them to, and water-driven factories had to be located on their banks, whether or not the location was desirable for other reasons. Furthermore, even the most reliable waterpower varied with the seasons and disappeared in a drought. The new age of machinery, in short, could not have been born without a new source of both movable and constant power.

The source had long been known but not exploited. Early in the century, a pump had come into use in which expanding steam raised a piston in a cylinder, and atmospheric pressure brought it down again when the steam condensed inside the cylinder to form a vacuum. This “atmospheric engine,” invented by Thomas Savery and vastly improved by his partner Thomas Newcomen, embodied revolutionary principles, but it was so slow and wasteful of fuel that it could not be employed outside the coal mines for which it had been designed. In the 1760s, James Watt perfected a separate condenser for the steam, so that the cylinder did not have to be cooled at every stroke; then he devised a way to make the piston turn a wheel and thus convert reciprocating (back and forth) motion into rotary motion. He thereby transformed an inefficient pump of limited use into a steam engine of a thousand uses. The final step came when steam was introduced into the cylinder to drive the piston backward as well as forward, thereby increasing the speed of the engine and cutting its fuel consumption.

Watt’s steam engine soon showed what it could do. It liberated industry from dependence on running water. The engine eliminated water in the mines by driving efficient pumps, which made possible deeper and deeper mining. The ready availability of coal inspired William Murdoch during the 1790s to develop the first new form of nighttime illumination to be discovered in a millennium and a half. Coal gas rivaled smoky oil lamps and flickering candles, and early in the new century, well-to-do Londoners grew accustomed to gaslit houses and even streets. Iron manufacturers, which had starved for fuel while depending on charcoal, also benefited from ever-increasing supplies of coal: blast furnaces with steam-powered bellows turned out more iron and steel for the new machinery. Steam became the motive force of the Industrial Revolution, as coal and iron ore were the raw materials.

By 1800 more than a thousand steam engines were in use in the British Isles, and Britain retained a virtual monopoly on steam engine production until the 1830s. Steam power did not merely spin cotton and roll iron; early in the new century, it also multiplied ten times over the amount of paper that a single worker could produce in a day. At the same time, operators of the first printing presses run by steam rather than by hand found it possible to produce a thousand pages in an hour rather than thirty. Steam also promised to eliminate a transportation problem not fully solved by either canal boats or turnpikes. Boats could carry heavy weights, but canals could not cross hilly terrain; turnpikes could cross the hills, but the roadbeds could not stand up under great weights. These problems needed still another solution, and the ingredients for it lay close at hand. In some industrial regions, heavily laden wagons, with flanged wheels, were being hauled by horses along metal rails; and the stationary steam engine was puffing in the factory and mine. Another generation passed before inventors succeeded in combining these ingredients, by putting the engine on wheels and the wheels on the rails, so as to provide a machine to take the place of the horse. Thus the railroad age sprang from what had already happened in the eighteenth century.

TPO6: William Smith

In 1769 in a little town in Oxfordshire, England, a child with the very ordinary name of William Smith was born into the poor family of a village blacksmith. He received rudimentary village schooling, but mostly he roamed his uncle’s farm collecting the fossils that were so abundant in the rocks of the Cotswold hills. When he grew older, William Smith taught himself surveying from books he bought with his small savings, and at the age of eighteen he was apprenticed to a surveyor of the local parish. He then proceeded to teach himself geology, and when he was twenty-four, he went to work for the company that was excavating the Somerset Coal Canal in the south of England.

This was before the steam locomotive, and canal building was at its height. The companies building the canals to transport coal needed surveyors to help them find the coal deposits worth mining as well as to determine the best courses for the canals. This job gave Smith an opportunity to study the fresh rock outcrops created by the newly dug canal. He later worked on similar jobs across the length and breadth of England, all the while studying the newly revealed strata and collecting all the fossils he could find. Smith used mail coaches to travel as much as 10,000 miles per year. In 1815 he published the first modern geological map, “A Map of the Strata of England and Wales with a Part of Scotland,” a map so meticulously researched that it can still be used today.

In 1831 when Smith was finally recognized by the Geological Society of London as the “father of English geology,” it was not only for his maps but also for something even more important. Ever since people had begun to catalog the strata in particular outcrops, there had been the hope that these could somehow be used to calculate geological time. But as more and more accumulations of strata were cataloged in more and more places, it became clear that the sequences of rocks sometimes differed from region to region and that no rock type was ever going to become a reliable time marker throughout the world. Even without the problem of regional differences, rocks present a difficulty as unique time markers. Quartz is quartz—a silicon ion surrounded by four oxygen ions—there’s no difference at all between two-million-year-old Pleistocene quartz and Cambrian quartz created over 500 million years ago.

As he collected fossils from strata throughout England, Smith began to see that the fossils told a different story from the rocks. Particularly in the younger strata, the rocks were often so similar that he had trouble distinguishing the strata, but he never had trouble telling the fossils apart. While rock between two consistent strata might in one place be shale and in another sandstone, the fossils in that shale or sandstone were always the same. Some fossils endured through so many millions of years that they appear in many strata, but others occur only in a few strata, and a few species had their births and extinctions within one particular stratum. Fossils are thus identifying markers for particular periods in Earth’s history.

Not only could Smith identify rock strata by the fossils they contained, he could see a pattern emerging: certain fossils always appear in more ancient sediments, while others begin to be seen as the strata become more recent. By following the fossils, Smith was able to put all the strata of England’s earth into relative temporal sequence. About the same time, Georges Cuvier made the same discovery while studying the rocks around Paris. Soon it was realized that this principle of faunal (animal) succession was valid not only in England or France but virtually everywhere. It was actually a principle of floral succession as well, because plants showed the same transformation through time as did fauna. Limestone may be found in the Cambrian or—300 million years later—in the Jurassic strata, but a trilobite—the ubiquitous marine arthropod that had its birth in the Cambrian—will never be found in Jurassic strata, nor a dinosaur in the Cambrian.

TPO6: Infantile Amnesia

What do you remember about your life before you were three? Few people can remember anything that happened to them in their early years. Adults’ memories of the next few years also tend to be scanty. Most people remember only a few events—usually ones that were meaningful and distinctive, such as being hospitalized or a sibling’s birth.

How might this inability to recall early experiences be explained? The sheer passage of time does not account for it; adults have excellent recognition of pictures of people who attended high school with them 35 years earlier. Another seemingly plausible explanation—that infants do not form enduring memories at this point in development—also is incorrect. Children two and a half to three years old remember experiences that occurred in their first year, and eleven-month-olds remember some events a year later. Nor does the hypothesis that infantile amnesia reflects repression—or holding back—of sexually charged episodes explain the phenomenon. While such repression may occur, people cannot remember ordinary events from the infant and toddler periods, either.

Three other explanations seem more promising. One involves physiological changes relevant to memory. Maturation of the frontal lobes of the brain continues throughout early childhood, and this part of the brain may be critical for remembering particular episodes in ways that can be retrieved later. Demonstrations of infants’ and toddlers’ long-term memory have involved their repeating motor activities that they had seen or done earlier, such as reaching in the dark for objects, putting a bottle in a doll’s mouth, or pulling apart two pieces of a toy. The brain’s level of physiological maturation may support these types of memories, but not ones requiring explicit verbal descriptions. 

A second explanation involves the influence of the social world on children’s language use. Hearing and telling stories about events may help children store information in ways that will endure into later childhood and adulthood. Through hearing stories with a clear beginning, middle, and ending, children may learn to extract the gist of events in ways that they will be able to describe many years later. Consistent with this view, parents and children increasingly engage in discussions of past events when children are about three years old. However, hearing such stories is not sufficient for younger children to form enduring memories. Telling such stories to two-year-olds does not seem to produce long-lasting verbalizable memories.

A third likely explanation for infantile amnesia involves incompatibilities between the ways in which infants encode information and the ways in which older children and adults retrieve it. Whether people can remember an event depends critically on the fit between the way in which they earlier encoded the information and the way in which they later attempt to retrieve it. The better able the person is to reconstruct the perspective from which the material was encoded, the more likely that recall will be successful.

This view is supported by a variety of factors that can create mismatches between very young children’s encoding and older children’s and adults’ retrieval efforts. The world looks very different to a person whose head is only two or three feet above the ground than to one whose head is five or six feet above it. Older children and adults often try to retrieve the names of things they saw, but infants would not have encoded the information verbally. General knowledge of categories of events such as a birthday party or a visit to the doctor’s office helps older individuals encode their experiences, but again, infants and toddlers are unlikely to encode many experiences within such knowledge structures.

These three explanations of infantile amnesia are not mutually exclusive; indeed, they support each other. Physiological immaturity may be part of why infants and toddlers do not form extremely enduring memories, even when they hear stories that promote such remembering in preschoolers. Hearing the stories may lead preschoolers to encode aspects of events that allow them to form memories they can access as adults. Conversely, improved encoding of what they hear may help them better understand and remember stories and thus make the stories more useful for remembering future events. Thus, all three explanations—physiological maturation, hearing and producing stories about past events, and improved encoding of key aspects of events—seem likely to be involved in overcoming infantile amnesia.

  

TPO7

TPO7: The Geologic History of the Mediterranean

In 1970 geologists Kenneth J. Hsu and William B. F. Ryan were collecting research data while aboard the oceanographic research vessel Glomar Challenger. An objective of this particular cruise was to investigate the floor of the Mediterranean and to resolve questions about its geologic history. One question was related to evidence that the invertebrate fauna (animals without spines) of the Mediterranean had changed abruptly about 6 million years ago. Most of the older organisms were nearly wiped out, although a few hardy species survived. A few managed to migrate into the Atlantic. Somewhat later, the migrants returned, bringing new species with them. Why did the near extinction and migrations occur?

Another task for the Glomar Challenger’s scientists was to try to determine the origin of the domelike masses buried deep beneath the Mediterranean seafloor. These structures had been detected years earlier by echo-sounding instruments, but they had never been penetrated in the course of drilling. Were they salt domes such as are common along the United States Gulf Coast, and if so, why should there have been so much solid crystalline salt beneath the floor of the Mediterranean?

With questions such as these clearly before them, the scientists aboard the Glomar Challenger proceeded to the Mediterranean to search for the answers. On August 23, 1970, they recovered a sample. The sample consisted of pebbles of hardened sediment that had once been soft, deep-sea mud, as well as granules of gypsum and fragments of volcanic rock. Not a single pebble was found that might have indicated that the pebbles came from the nearby continent. In the days following, samples of solid gypsum were repeatedly brought on deck as drilling operations penetrated the seafloor. Furthermore, the gypsum was found to possess peculiarities of composition and structure that suggested it had formed on desert flats. Sediment above and below the gypsum layer contained tiny marine fossils, indicating open-ocean conditions. As they drilled into the central and deepest part of the Mediterranean basin, the scientists took solid, shiny, crystalline salt from the core barrel. Interbedded with the salt were thin layers of what appeared to be windblown silt.

The time had come to formulate a hypothesis. The investigators theorized that about 20 million years ago, the Mediterranean was a broad seaway linked to the Atlantic by two narrow straits. Crustal movements closed the straits, and the landlocked Mediterranean began to evaporate. Increasing salinity caused by the evaporation resulted in the extermination of scores of invertebrate species. Only a few organisms especially tolerant of very salty conditions remained. As evaporation continued, the remaining brine (salt water) became so dense that the calcium sulfate of the hard layer was precipitated. In the central deeper part of the basin, the last of the brine evaporated to precipitate more soluble sodium chloride (salt). Later, under the weight of overlying sediments, this salt flowed plastically upward to form salt domes. Before this happened, however, the Mediterranean was a vast desert 3,000 meters deep. Then, about 5.5 million years ago came the deluge. As a result of crustal adjustments and faulting, the Strait of Gibraltar, where the Mediterranean now connects to the Atlantic, opened, and water cascaded spectacularly back into the Mediterranean. Turbulent waters tore into the hardened salt flats, broke them up, and ground them into the pebbles observed in the first sample taken by the Challenger. As the basin was refilled, normal marine organisms returned. Soon layers of oceanic ooze began to accumulate above the old hard layer.

The salt and gypsum, the faunal changes, and the unusual gravel provided abundant evidence that the Mediterranean was once a desert.

 

TPO7: Ancient Rome and Greece

There is a quality of cohesiveness about the Roman world that applied neither to Greece nor perhaps to any other civilization, ancient or modern. Like the stones of a Roman wall, which were held together both by the regularity of the design and by that peculiarly powerful Roman cement, so the various parts of the Roman realm were bonded into a massive, monolithic entity by physical, organizational, and psychological controls. The physical bonds included the network of military garrisons, which were stationed in every province, and the network of stone-built roads that linked the provinces with Rome. The organizational bonds were based on the common principles of law and administration and on the universal army of officials who enforced common standards of conduct. The psychological controls were built on fear and punishment—on the absolute certainty that anyone or anything that threatened the authority of Rome would be utterly destroyed.

The source of the Roman obsession with unity and cohesion may well have lain in the pattern of Rome’s early development. Whereas Greece had grown from scores of scattered cities, Rome grew from one single organism. While the Greek world had expanded along the Mediterranean sea lanes, the Roman world was assembled by territorial conquest. Of course, the contrast is not quite so stark: in Alexander the Great the Greeks had found the greatest territorial conqueror of all time; and the Romans, once they moved outside Italy, did not fail to learn the lessons of sea power. Yet the essential difference is undeniable. The key to the Greek world lay in its high-powered ships; the key to Roman power lay in its marching legions. The Greeks were wedded to the sea; the Romans, to the land. The Greek was a sailor at heart; the Roman, a landsman.

Certainly, in trying to explain the Roman phenomenon, one would have to place great emphasis on this almost animal instinct for the territorial imperative. Roman priorities lay in the organization, exploitation, and defense of their territory. In all probability it was the fertile plain of Latium, where the Latins who founded Rome originated, that created the habits and skills of landed settlement, landed property, landed economy, landed administration, and a land-based society. From this arose the Roman genius for military organization and orderly government. In turn, a deep attachment to the land, and to the stability which rural life engenders, fostered the Roman virtues: gravitas, a sense of responsibility; pietas, a sense of devotion to family and country; and iustitia, a sense of the natural order.

Modern attitudes to Roman civilization range from the infinitely impressed to the thoroughly disgusted. As always, there are the power worshippers, especially among historians, who are predisposed to admire whatever is strong, who feel more attracted to the might of Rome than to the subtlety of Greece. At the same time, there is a solid body of opinion that dislikes Rome. For many, Rome is at best the imitator and the continuator of Greece on a larger scale. Greek civilization had quality; Rome, mere quantity. Greece was original; Rome, derivative. Greece had style; Rome had money. Greece was the inventor; Rome, the research and development division. Such indeed was the opinion of some of the more intellectual Romans. “Had the Greeks held novelty in such disdain as we,” asked Horace in his Epistles, “what work of ancient date would now exist?”

Rome’s debt to Greece was enormous. The Romans adopted Greek religion and moral philosophy. In literature, Greek writers were consciously used as models by their Latin successors. It was absolutely accepted that an educated Roman should be fluent in Greek. In speculative philosophy and the sciences, the Romans made virtually no advance on early achievements.

Yet it would be wrong to suggest that Rome was somehow a junior partner in Greco-Roman civilization. The Roman genius was projected into new spheres—especially into those of law, military organization, administration, and engineering. Moreover, the tensions that arose within the Roman state produced literary and artistic sensibilities of the highest order. It was no accident that many leading Roman soldiers and statesmen were writers of high caliber.

 

TPO7: Agriculture, Iron, and the Bantu Peoples

There is evidence of agriculture in Africa prior to 3000 B.C. It may have developed independently, but many scholars believe that the spread of agriculture and iron throughout Africa linked it to the major centers of the Near East and Mediterranean world. The drying up of what is now the Sahara desert had pushed many peoples to the south into sub-Saharan Africa. These peoples settled at first in scattered hunting-and-gathering bands, although in some places near lakes and rivers, people who fished, with a more secure food supply, lived in larger population concentrations. Agriculture seems to have reached these people from the Near East, since the first domesticated crops were millets and sorghums whose origins are not African but West Asian. Once the idea of planting diffused, Africans began to develop their own crops, such as certain varieties of rice, and they demonstrated a continued receptiveness to new imports. The proposed areas of the domestication of African crops lie in a band that extends from Ethiopia across southern Sudan to West Africa. Subsequently, other crops, such as bananas, were introduced from Southeast Asia.

Livestock also came from outside Africa. Cattle were introduced from Asia, as probably were domestic sheep and goats. Horses were apparently introduced by the Hyksos invaders of Egypt (1780-1560 B.C.) and then spread across the Sudan to West Africa. Rock paintings in the Sahara indicate that horses and chariots were used to traverse the desert and that by 300-200 B.C., there were trade routes across the Sahara. Horses were adopted by peoples of the West African savannah, and later their powerful cavalry forces allowed them to carve out large empires. Finally, the camel was introduced around the first century A.D. This was an important innovation, because the camel’s ability to thrive in harsh desert conditions and to carry large loads cheaply made it an effective and efficient means of transportation. The camel transformed the desert from a barrier into a still difficult, but more accessible, route of trade and communication.

Iron came from West Asia, although its routes of diffusion were somewhat different than those of agriculture. Most of Africa presents a curious case in which societies moved directly from a technology of stone to iron without passing through the intermediate stage of copper or bronze metallurgy, although some early copper-working sites have been found in West Africa. Knowledge of iron making penetrated into the forests and savannahs of West Africa at roughly the same time that iron making was reaching Europe. Evidence of iron making has been found in Nigeria, Ghana, and Mali.

This technological shift caused profound changes in the complexity of African societies. Iron represented power. In West Africa the blacksmith who made tools and weapons had an important place in society, often with special religious powers and functions. Iron hoes, which made the land more productive, and iron weapons, which made the warrior more powerful, had symbolic meaning in a number of West African societies. Those who knew the secrets of making iron gained ritual and sometimes political power.

Unlike in the Americas, where metallurgy was a very late and limited development, Africans had iron from a relatively early date, developing ingenious furnaces to produce the high heat needed for production and to control the amount of air that reached the carbon and iron ore necessary for making iron. Much of Africa moved right into the Iron Age, taking the basic technology and adapting it to local conditions and resources.

The diffusion of agriculture and later of iron was accompanied by a great movement of people who may have carried these innovations. These people probably originated in eastern Nigeria. Their migration may have been set in motion by an increase in population caused by a movement of peoples fleeing the desiccation, or drying up, of the Sahara. They spoke a language, proto-Bantu (“bantu” means “the people”), which is the parent tongue of a large number of Bantu languages still spoken throughout sub-Saharan Africa. Why and how these people spread out into central and southern Africa remains a mystery, but archaeologists believe that their iron weapons allowed them to conquer their hunting-gathering opponents, who still used stone implements. Still, the process is uncertain, and peaceful migration—or simply rapid demographic growth—may have also caused the Bantu explosion.

TPO 8

TPO8: The Rise of Teotihuacán

The city of Teotihuacan, which lay about 50 kilometers northeast of modern-day Mexico City, began its growth by 200-100 B.C. At its height, between about A.D. 150 and 700, it probably had a population of more than 125,000 people and covered at least 20 square kilometers. It had over 2,000 apartment complexes, a great market, a large number of industrial workshops, an administrative center, a number of massive religious edifices, and a regular grid pattern of streets and buildings. Clearly, much planning and central control were involved in the expansion and ordering of this great metropolis. Moreover, the city had economic and perhaps religious contacts with most parts of Mesoamerica (modern Central America and Mexico).

How did this tremendous development take place, and why did it happen in the Teotihuacan Valley? Among the main factors are Teotihuacan’s geographic location on a natural trade route to the south and east of the Valley of Mexico, the obsidian resources in the Teotihuacan Valley itself, and the valley’s potential for extensive irrigation. The exact role of other factors is much more difficult to pinpoint—for instance, Teotihuacan’s religious significance as a shrine, the historical situation in and around the Valley of Mexico toward the end of the first millennium B.C., the ingenuity and foresightedness of Teotihuacan’s elite, and, finally, the impact of natural disasters, such as the volcanic eruptions of the late first millennium B.C.

This last factor is at least circumstantially implicated in Teotihuacan’s rise. Prior to 200 B.C., a number of relatively small centers coexisted in and near the Valley of Mexico. Around this time, the largest of these centers, Cuicuilco, was seriously affected by a volcanic eruption, with much of its agricultural land covered by lava. With Cuicuilco eliminated as a potential rival, any one of a number of relatively modest towns might have emerged as a leading economic and political power in Central Mexico. The archaeological evidence clearly indicates, though, that Teotihuacan was the center that did arise as the predominant force in the area by the first century A.D.

It seems likely that Teotihuacan’s natural resources—along with the city elite’s ability to recognize their potential—gave the city a competitive edge over its neighbors. The valley, like many other places in Mexican and Guatemalan highlands, was rich in obsidian. The hard volcanic stone was a resource that had been in great demand for many years, at least since the rise of the Olmecs (a people who flourished between 1200 and 400 B.C.), and it apparently had a secure market. Moreover, recent research on obsidian tools found at Olmec sites has shown that some of the obsidian obtained by the Olmecs originated near Teotihuacan. Teotihuacan obsidian must have been recognized as a valuable commodity for many centuries before the great city arose.

Long-distance trade in obsidian probably gave the elite residents of Teotihuacan access to a wide variety of exotic goods, as well as a relatively prosperous life. Such success may have attracted immigrants to Teotihuacan. In addition, Teotihuacan’s elite may have consciously attempted to attract new inhabitants. It is also probable that as early as 200 B.C. Teotihuacan may have achieved some religious significance, and its shrine (or shrines) may have served as an additional population magnet. Finally, the growing population was probably fed by increasing the number and size of irrigated fields.

The picture of Teotihuacan that emerges is a classic picture of positive feedback among obsidian mining and working, trade, population growth, irrigation, and religious tourism. The thriving obsidian operation, for example, would necessitate more miners, additional manufacturers of obsidian tools, and additional traders to carry the goods to new markets. All this led to increased wealth, which in turn would attract more immigrants to Teotihuacan. The growing power of the elite, who controlled the economy, would give them the means to physically coerce people to move to Teotihuacan and serve as additions to the labor force. More irrigation works would have to be built to feed the growing population, and this resulted in more power and wealth for the elite.

TPO8: Extinction of the Dinosaurs

Paleozoic Era    544 to 248 million years ago

Mesozoic Era    245 to 65 million years ago

-Triassic Period

-Jurassic Period

-Cretaceous Period

Cenozoic Era    65 million years ago to the present

Paleontologists have argued for a long time that the demise of the dinosaurs was caused by climatic alterations associated with slow changes in the positions of continents and seas resulting from plate tectonics. Off and on throughout the Cretaceous (the last period of the Mesozoic era, during which dinosaurs flourished), large shallow seas covered extensive areas of the continents. Data from diverse sources, including geochemical evidence preserved in seafloor sediments, indicate that the Late Cretaceous climate was milder than today’s. The days were not too hot, nor the nights too cold. The summers were not too warm, nor the winters too frigid. The shallow seas on the continents probably buffered the temperature of the nearby air, keeping it relatively constant.

At the end of the Cretaceous, the geological record shows that these seaways retreated from the continents back into the major ocean basins. No one knows why. Over a period of about 100,000 years, while the seas pulled back, climates around the world became dramatically more extreme: warmer days, cooler nights; hotter summers, colder winters. Perhaps dinosaurs could not tolerate these extreme temperature changes and became extinct.

If true, though, why did cold-blooded animals such as snakes, lizards, turtles, and crocodiles survive the freezing winters and torrid summers? These animals are at the mercy of the climate to maintain a livable body temperature. It’s hard to understand why they would not be affected, whereas dinosaurs were left too crippled to cope, especially if, as some scientists believe, dinosaurs were warm blooded. Critics also point out that the shallow seaways had retreated from and advanced on the continents numerous times during the Mesozoic, so why did the dinosaurs survive the climatic changes associated with the earlier fluctuations but not with this one? Although initially appealing, the hypothesis of a simple climatic change related to sea levels is insufficient to explain all the data.

Dissatisfaction with conventional explanations for dinosaur extinctions led to a surprising observation that, in turn, has suggested a new hypothesis. Many plants and animals disappear abruptly from the fossil record as one moves from layers of rock documenting the end of the Cretaceous up into rocks representing the beginning of the Cenozoic (the era after the Mesozoic). Between the last layer of Cretaceous rock and the first layer of Cenozoic rock, there is often a thin layer of clay. Scientists felt that they could get an idea of how long the extinctions took by determining how long it took to deposit this one centimeter of clay, and they thought they could determine the time it took to deposit the clay by determining the amount of the element iridium (Ir) it contained.

Ir has not been common at Earth’s surface since the very beginning of the planet’s history. Because it usually exists in a metallic state, it was preferentially incorporated in Earth’s core as the planet cooled and consolidated. Ir is found in high concentrations in some meteorites, in which the solar system’s original chemical composition is preserved. Even today, microscopic meteorites continually bombard Earth, falling on both land and sea. By measuring how many of these meteorites fall to Earth over a given period of time, scientists can estimate how long it might have taken to deposit the observed amount of Ir in the boundary clay. These calculations suggest that a period of about one million years would have been required. However, other reliable evidence suggests that the deposition of the boundary clay could not have taken one million years. So the unusually high concentration of Ir seems to require a special explanation.
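
The reasoning in this paragraph is a simple rate calculation: divide the amount of iridium observed in the boundary clay by the rate at which micrometeorite infall delivers iridium to the surface. Below is a minimal sketch of that arithmetic in Python; both input values are hypothetical placeholders chosen only to reproduce the order of magnitude the passage mentions, since the passage itself gives no measurements.

    # Iridium "clock": deposition time = observed Ir / Ir infall rate.
    # Both values are assumed placeholders, not measured data.
    ir_in_clay = 6e-9        # grams of Ir per square centimeter in the boundary clay (assumed)
    ir_infall_rate = 6e-15   # grams of Ir per square centimeter per year from micrometeorites (assumed)

    deposition_time = ir_in_clay / ir_infall_rate
    print(f"{deposition_time:,.0f} years")  # -> 1,000,000 years with these inputs

With inputs in this range, the estimate comes out to about one million years; the conflict between that figure and independent evidence that the clay formed much faster is what made the iridium concentration seem to require a special explanation.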

In view of these facts, scientists hypothesized that a single large asteroid, about 10 to 15 kilometers across, collided with Earth, and the resulting fallout created the boundary clay. Their calculations show that the impact kicked up a dust cloud that cut off sunlight for several months, inhibiting photosynthesis in plants; decreased surface temperatures on continents to below freezing; caused extreme episodes of acid rain; and significantly raised long-term global temperatures through the greenhouse effect. This disruption of the food chain and climate would have eradicated the dinosaurs and other organisms in less than fifty years.

TPO8: Running Water on Mars?

Photographic evidence suggests that liquid water once existed in great quantity on the surface of Mars. Two types of flow features are seen: runoff channels and outflow channels. Runoff channels are found in the southern highlands. These flow features are extensive systems—sometimes hundreds of kilometers in total length—of interconnecting, twisting channels that seem to merge into larger, wider channels. They bear a strong resemblance to river systems on Earth, and geologists think that they are dried-up beds of long-gone rivers that once carried rainfall on Mars from the mountains down into the valleys. Runoff channels on Mars speak of a time 4 billion years ago (the age of the Martian highlands), when the atmosphere was thicker, the surface warmer, and liquid water widespread.

Outflow channels are probably relics of catastrophic flooding on Mars long ago. They appear only in equatorial regions and generally do not form extensive interconnected networks. Instead, they are probably the paths taken by huge volumes of water draining from the southern highlands into the northern plains. The onrushing water arising from these flash floods likely also formed the odd teardrop-shaped “islands” (resembling the miniature versions seen in the wet sand of our beaches at low tide) that have been found on the plains close to the ends of the outflow channels. Judging from the width and depth of the channels, the flow rates must have been truly enormous—perhaps as much as a hundred times greater than the 10⁵ tons per second carried by the great Amazon River. Flooding shaped the outflow channels approximately 3 billion years ago, about the same time as the northern volcanic plains formed.
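
The flow-rate comparison at the end of this paragraph is plain arithmetic: scale the Amazon’s discharge by the factor of one hundred that the channel dimensions imply. A one-line illustration in Python, using only the figures already given in the passage:

    # Amazon discharge (10^5 tons of water per second, as stated above) scaled by 100.
    amazon_tons_per_second = 1e5
    martian_peak_flow = 100 * amazon_tons_per_second
    print(f"{martian_peak_flow:.0e} tons per second")  # -> 1e+07, about ten million tons per second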

Some scientists speculate that Mars may have enjoyed an extended early period during which rivers, lakes, and perhaps even oceans adorned its surface. A 2003 Mars Global Surveyor image shows what mission specialists think may be a delta—a fan-shaped network of channels and sediments where a river once flowed into a larger body of water, in this case a lake filling a crater in the southern highlands. Other researchers go even further, suggesting that the data provide evidence for large open expanses of water on the early Martian surface. A computer-generated view of the Martian north polar region shows the extent of what may have been an ancient ocean covering much of the northern lowlands. The Hellas Basin, which measures some 3,000 kilometers across and has a floor that lies nearly 9 kilometers below the basin’s rim, is another candidate for an ancient Martian sea.

These ideas remain controversial. Proponents point to features such as the terraced “beaches” shown in one image, which could conceivably have been left behind as a lake or ocean evaporated and the shoreline receded. But detractors maintain that the terraces could also have been created by geological activity, perhaps related to the geologic forces that depressed the Northern Hemisphere far below the level of the south, in which case they have nothing whatever to do with Martian water. Furthermore, Mars Global Surveyor data released in 2003 seem to indicate that the Martian surface contains too few carbonate rock layers—layers containing compounds of carbon and oxygen—that should have been formed in abundance in an ancient ocean. Their absence supports the picture of a cold, dry Mars that never experienced the extended mild period required to form lakes and oceans. However, more recent data imply that at least some parts of the planet did in fact experience long periods in the past during which liquid water existed on the surface.

Aside from some small-scale gullies (channels) found since 2000, which are inconclusive, astronomers have no direct evidence for liquid water anywhere on the surface of Mars today, and the amount of water vapor in the Martian atmosphere is tiny. Yet even setting aside the unproven hints of ancient oceans, the extent of the outflow channels suggests that a huge total volume of water existed on Mars in the past. Where did all the water go? The answer may be that virtually all the water on Mars is now locked in the permafrost layer under the surface, with more contained in the planet’s polar caps.

TPO9

TPO9: Colonizing the Americas via the Northwest Coast

It has long been accepted that the Americas were colonized by a migration of peoples from Asia slowly traveling across a land bridge called Beringia (now the Bering Strait between northeastern Asia and Alaska) during the last Ice Age. ■The first water craft theory about this migration was that around 11,000-12,000 years ago there was an ice-free corridor stretching from eastern Beringia to the areas of North America south of the great northern glaciers. It was this midcontinental corridor between two massive ice sheets—the Laurentide to the east and the Cordilleran to the west—that enabled the southward migration. ■But belief in this ice-free corridor began to crumble when paleoecologist Glen MacDonald demonstrated that some of the most important radiocarbon dates used to support the existence of an ice-free corridor were incorrect. ■He persuasively argued that such an ice-free corridor did not exist until much later, when the continental ice began its final retreat. ■

Support is growing for the alternative theory that people using watercraft, possibly skin boats, moved southward from Beringia along the Gulf of Alaska and then southward along the Northwest Coast of North America possibly as early as 16,000 years ago. This route would have enabled humans to enter southern areas of the Americas prior to the melting of the continental glaciers. Until the early 1970s, most archaeologists did not consider the coast a possible migration route into the Americas because geologists originally believed that during the last Ice Age the entire Northwest Coast was covered by glacial ice. It had been assumed that the ice extended westward from the Alaskan/Canadian mountains to the very edge of the continental shelf, the flat, submerged part of the continent that extends into the ocean. This would have created a barrier of ice extending from the Alaska Peninsula, through the Gulf of Alaska and southward along the Northwest Coast of North America to what is today the state of Washington.

The most influential proponent of the coastal migration route has been Canadian archaeologist Knut Fladmark. He theorized that with the use of watercraft, people gradually colonized unglaciated refuges and areas along the continental shelf exposed by the lower sea level. Fladmark’s hypothesis received additional support from the fact that the greatest diversity in Native American languages occurs along the west coast of the Americas, suggesting that this region has been settled the longest.

More recent geologic studies documented deglaciation and the existence of ice-free areas throughout major coastal areas of British Columbia, Canada, by 13,000 years ago. Research now indicates that sizable areas of southeastern Alaska along the inner continental shelf were not covered by ice toward the end of the last Ice Age. One study suggests that except for a 250-mile coastal area between southwestern British Columbia and Washington State, the Northwest Coast of North America was largely free of ice by approximately 16,000 years ago. Vast areas along the coast may have been deglaciated beginning around 16,000 years ago, possibly providing a coastal corridor for the movement of plants, animals, and humans sometime between 13,000 and 14,000 years ago.

The coastal hypothesis has gained increasing support in recent years because the remains of large land animals, such as caribou and brown bears, have been found in southeastern Alaska dating between 10,000 and 12,500 years ago. This is the time period in which most scientists formerly believed the area to be inhospitable for humans. It has been suggested that if the environment were capable of supporting breeding populations of bears, there would have been enough food resources to support humans. Fladmark and others believe that the first human colonization of America occurred by boat along the Northwest Coast during the very late Ice Age, possibly as early as 14,000 years ago. The most recent geologic evidence indicates that it may have been possible for people to colonize ice-free regions along the continental shelf that were still exposed by the lower sea level between 13,000 and 14,000 years ago.

The coastal hypothesis suggests an economy based on marine mammal hunting, saltwater fishing, shellfish gathering, and the use of watercraft. Because of the barrier of ice to the east, the Pacific Ocean to the west, and populated areas to the north, there may have been a greater impetus for people to move in a southerly direction.

TPO9: Reflection in Teaching

Teachers, it is thought, benefit from the practice of reflection, the conscious act of thinking deeply about and carefully examining the interactions and events within their own classrooms. Educators T. Wildman and J. Niles (1987) describe a scheme for developing reflective practice in experienced teachers. This was justified by the view that reflective practice could help teachers to feel more intellectually involved in their role and work in teaching and enable them to cope with the paucity of scientific fact and the uncertainty of knowledge in the discipline of teaching.

Wildman and Niles were particularly interested in investigating the conditions under which reflection might flourish—a subject on which there is little guidance in the literature. They designed an experimental strategy for a group of teachers in Virginia and worked with 40 practicing teachers over several years. They were concerned that many would be “drawn to these new, refreshing conceptions of teaching only to find that the void between the abstractions and the realities of teacher reflection is too great to bridge. Reflection on a complex task such as teaching is not easy.” The teachers were taken through a program of talking about teaching events, moving on to reflecting about specific issues in a supported, and later an independent manner.

Wildman and Niles observed that systematic reflection on teaching required a sound ability to understand classroom events in an objective manner. They describe the initial understanding in the teachers with whom they were working as being “utilitarian… and not rich or detailed enough to drive systematic reflection.” Teachers rarely have the time or opportunities to view their own teaching, or that of others, in an objective manner. Further observation revealed the tendency of teachers to evaluate events rather than review the contributory factors in a considered manner by, in effect, standing outside the situation.

Helping this group of teachers to revise their thinking about classroom events became central. ■This process took time and patience and effective trainers. ■The researchers estimate that the initial training of the teachers to view events objectively took between 20 and 30 hours, with the same number of hours again being required to practice the skills of reflection.

■Wildman and Niles identify three principles that facilitate reflective practice in a teaching situation. ■The first is support from administrators in an education system, enabling teachers to understand the requirements of reflective practice and how it relates to teaching students. The second is the availability of sufficient time and space. The teachers in the program described how they found it difficult to put aside the immediate demands of others in order to give themselves the time they needed to develop their reflective skills. The third is the development of a collaborative environment with support from other teachers. Support and encouragement were also required to help teachers in the program cope with aspects of their professional life with which they were not comfortable. Wildman and Niles make a summary comment: “Perhaps the most important thing we learned is the idea of the teacher-as-reflective-practitioner will not happen simply because it is a good or even compelling idea.”

The work of Wildman and Niles suggests the importance of recognizing some of the difficulties of instituting reflective practice. Others have noted this, making a similar point about the teaching profession’s cultural inhibitions about reflective practice. Zeichner and Liston (1987) point out the inconsistency between the role of the teacher as a (reflective) professional decision maker and the more usual role of the teacher as a technician, putting into practice the ideas of others. More basic than the cultural issues is the matter of motivation. Becoming a reflective practitioner requires extra work (Jaworski, 1993) and has only vaguely defined goals with, perhaps, little initially perceivable reward and the threat of vulnerability. Few have directly questioned what might lead a teacher to want to become reflective. Apparently, the most obvious reason for teachers to work toward reflective practice is that teacher educators think it is a good thing. There appear to be many unexplored matters about the motivation to reflect—for example, the value of externally motivated reflection as opposed to that of teachers who might reflect by habit.


TPO9: The Arrival of Plant Life in Hawaii

When the Hawaiian Islands emerged from the sea as volcanoes, starting about five million years ago, they were far removed from other landmasses. Then, as blazing sunshine alternated with drenching rains, the harsh, barren surfaces of the black rocks slowly began to soften. Winds brought a variety of life-forms.

Spores light enough to float on the breezes were carried thousands of miles from more ancient lands and deposited at random across the bare mountain flanks. A few of these spores found a toehold on the dark, forbidding rocks and grew and began to work their transformation upon the land. Lichens were probably the first successful flora. These are not single individual plants; each one is a symbiotic combination of an alga and a fungus. The algae capture the Sun’s energy by photosynthesis and store it in organic molecules. The fungi absorb moisture and mineral salts from the rocks, passing these on in waste products that nourish algae. It is significant that the earliest living things that built communities on these islands are examples of symbiosis, a phenomenon that depends upon the close cooperation of two or more forms of life and a principle that is very important in island communities.

Lichens helped to speed the decomposition of the hard rock surfaces, preparing a soft bed of soil that was abundantly supplied with minerals that had been carried in the molten rock from the bowels of Earth. Now, other forms of life could take hold: ferns and mosses (two of the most ancient types of land plants) that flourish even in rock crevices. ■These plants propagate by producing spores—tiny fertilized cells that contain all the instructions for making a new plant—but the spores are unprotected by any outer coating and carry no supply of nutrient. ■Vast numbers of them fall on the ground beneath the mother plants. ■Sometimes they are carried farther afield by water or by wind. ■But only those few spores that settle down in very favorable locations can start new life; the vast majority fall on barren ground. By force of sheer numbers, however, the mosses and ferns reached Hawaii, survived, and multiplied. Some species developed great size, becoming tree ferns that even now grow in the Hawaiian forests.

Many millions of years after ferns evolved (but long before the Hawaiian Islands were born from the sea), another kind of flora evolved on Earth: the seed-bearing plants. This was a wonderful biological invention. The seed has an outer coating that surrounds the genetic material of the new plant, and inside this covering is a concentrated supply of nutrients. Thus, the seed’s chances of survival are greatly enhanced over those of the naked spore. One type of seed-bearing plant, the angiosperm, includes all forms of blooming vegetation. In the angiosperm the seeds are wrapped in an additional layer of covering. Some of these coats are hard—like the shell of a nut—for extra protection. Some are soft and tempting, like a peach or a cherry. In some angiosperms the seeds are equipped with gossamer wings, like the dandelion and milkweed seeds. These new characteristics offered better ways for the seeds to move to new habitats. They could travel through the air, float in water, and lie dormant for many months.

Plants with large, buoyant seeds—like coconuts—drift on ocean currents and are washed up on the shores. Remarkably resistant to the vicissitudes of ocean travel, they can survive prolonged immersion in saltwater. When they come to rest on warm beaches and the conditions are favorable, the seed coats soften. Nourished by their imported supply of nutrients, the young plants push out their roots and establish their place in the sun.

By means of these seeds, plants spread more widely to new locations, even to isolated islands like the Hawaiian archipelago, which lies more than 2,000 miles west of California and 3,500 miles east of Japan. The seeds of grasses, flowers, and blooming trees made the long trips to these islands. (Grasses are simple forms of angiosperms that bear their encapsulated seeds on long stalks.) In a surprisingly short time, angiosperms filled many of the land areas on Hawaii that had been bare.

TPO10

TPO10: Chinese Pottery

China has one of the world’s oldest continuous civilizations, despite invasions and occasional foreign rule. A country as vast as China with so long-lasting a civilization has a complex social and visual history, within which pottery and porcelain play a major role.

The function and status of ceramics in China varied from dynasty to dynasty, so they may be utilitarian, burial, trade, collectors’, or even ritual objects, according to their quality and the era in which they were made. The ceramics fall into three broad types (earthenware, stoneware, and porcelain) for vessels, architectural items such as roof tiles, and modeled objects and figures. In addition, there was an important group of sculptures made for religious use, the majority of which were produced in earthenware.

The earliest ceramics were fired to earthenware temperatures, but as early as the fifteenth century B.C., high-temperature stonewares were being made with glazed surfaces. During the Six Dynasties period (A.D. 265-589), kilns in north China were producing high-fired ceramics of good quality. Whitewares produced in Hebei and Henan provinces from the seventh to the tenth centuries evolved into the highly prized porcelains of the Song dynasty (A.D. 960-1279), long regarded as one of the high points in the history of China’s ceramic industry. The tradition of religious sculpture extends over most historical periods but is less clearly delineated than that of stonewares or porcelains, for it embraces the old custom of earthenware burial ceramics with later religious images and architectural ornament. Ceramic products also include lead-glazed tomb models of the Han dynasty, three-color lead-glazed vessels and figures of the Tang dynasty, and Ming three-color temple ornaments, in which the motifs were outlined in a raised trail of slip, as well as the many burial ceramics produced in imitation of vessels made in materials of higher intrinsic value.

Trade between the West and the settled and prosperous Chinese dynasties introduced new forms and different technologies. One of the most far-reaching examples is the impact of the fine ninth-century A.D. Chinese porcelain wares imported into the Arab world. So admired were these pieces that they encouraged the development of earthenware made in imitation of porcelain and instigated research into the method of their manufacture. From the Middle East the Chinese acquired a blue pigment—a purified form of cobalt oxide unobtainable at that time in China—that contained only a low level of manganese. Cobalt ores found in China have a high manganese content, which produces a more muted blue-gray color. In the seventeenth century, the trading activities of the Dutch East India Company resulted in vast quantities of decorated Chinese porcelain being brought to Europe, which stimulated and influenced the work of a wide variety of wares, notably Delft. The Chinese themselves adapted many specific vessel forms from the West, such as bottles with long spouts, and designed a range of decorative patterns especially for the European market.

Just as painted designs on Greek pots may seem today to be purely decorative, whereas in fact they were carefully and precisely worked out so that at the time, their meaning was clear, so it is with Chinese pots. To twentieth-century eyes, Chinese pottery may appear merely decorative, yet to the Chinese the form of each object and its adornment had meaning and significance. The dragon represented the emperor, and the phoenix, the empress; the pomegranate indicated fertility, and a pair of fish, happiness; mandarin ducks stood for wedded bliss; the pine tree, peach, and crane are emblems of long life; and fish leaping from waves indicated success in the civil service examinations. Only when European decorative themes were introduced did these meanings become obscured or even lost.

From early times pots were used in both religious and secular contexts. The imperial court commissioned work, and in the Yuan dynasty (A.D. 1279-1368) an imperial ceramic factory was established at Jingdezhen. Pots played an important part in some religious ceremonies. Long and often lyrical descriptions of the different types of ware exist that assist in classifying pots, although these sometimes confuse an already large and complicated picture.


TPO10: Variations in the Climate

One of the most difficult aspects of deciding whether current climatic events reveal evidence of the impact of human activities is that it is hard to get a measure of what constitutes the natural variability of the climate. We know that over the past millennia the climate has undergone major changes without any significant human intervention. We also know that the global climate system is immensely complicated and that everything is in some way connected, and so the system is capable of fluctuating in unexpected ways. We need therefore to know how much the climate can vary of its own accord in order to interpret with confidence the extent to which recent changes are natural as opposed to being the result of human activities.

Instrumental records do not go back far enough to provide us with reliable measurements of global climatic variability on timescales longer than a century. What we do know is that as we include longer time intervals the record shows increasing evidence of slow swings in climate between different regimes. To build up a better picture of fluctuations appreciably further back in time requires us to use proxy records.

Over long periods of time, substances whose physical and chemical properties change with the ambient climate at the time can be deposited in a systematic way to provide a continuous record of changes in those properties over time, sometimes for hundreds or thousands of years. Generally, the layering occurs on an annual basis; hence the observed changes in the records can be dated. Information on temperature, rainfall, and other aspects of the climate that can be inferred from the systematic changes in properties is usually referred to as proxy data. Proxy temperature records have been reconstructed from ice cores drilled out of the central Greenland ice cap, calcite shells embedded in layered lake sediments in Western Europe, ocean floor sediment cores from the tropical Atlantic Ocean, ice cores from Peruvian glaciers, and ice cores from eastern Antarctica. While these records provide broadly consistent indications that temperature variations can occur on a global scale, there are some intriguing differences, which suggest that the pattern of temperature variations in regional climates can also differ significantly from each other.

What the proxy records make abundantly clear is that there have been significant natural changes in the climate over timescales longer than a few thousand years. Equally striking, however, is the relative stability of the climate in the past 10,000 years (the Holocene period).

To the extent that the coverage of the global climate from these records can provide a measure of its true variability, it should at least indicate how all the natural causes of climate change have combined. These include the chaotic fluctuations of the atmosphere, the slower but equally erratic behavior of the oceans, changes in the land surfaces, and the extent of ice and snow. Also included will be any variations that have arisen from volcanic activity, solar activity, and, possibly, human activities.


One way to estimate how all the various processes leading to climate variability will combine is by using computer models of the global climate. They can do only so much to represent the full complexity of the global climate and hence may give only limited information about natural variability. Studies suggest that to date the variability in computer simulations is considerably smaller than in data obtained from the proxy records.

In addition to the internal variability of the global climate system itself, there is the added factor of external influences, such as volcanoes and solar activity. There is a growing body of opinion that both these physical variations have a measurable impact on the climate. Thus we need to be able to include these in our deliberations. Some current analyses conclude that volcanoes and solar activity explain quite a considerable amount of the observed variability in the period from the seventeenth to the early twentieth centuries, but that they cannot be invoked to explain the rapid warming in recent decades.

TPO10: Seventeenth-Century European Economic Growth

In the late sixteenth century and into the seventeenth, Europe continued the growth that had lifted it out of the relatively less prosperous medieval period (from the mid 400s to the late 1400s). Among the key factors behind this growth were increased agricultural productivity and an expansion of trade.

Populations cannot grow unless the rural economy can produce enough additional food to feed more people. During the sixteenth century, farmers brought more land into cultivation at the expense of forests and fens (low-lying wetlands). Dutch land reclamation in the Netherlands in the sixteenth and seventeenth centuries provides the most spectacular example of the expansion of farmland: the Dutch reclaimed more than 36,000 acres from 1590 to 1615 alone.

Much of the potential for European economic development lay in what at first glance would seem to have been only sleepy villages. Such villages, however, generally lay in regions of relatively advanced agricultural production, permitting not only the survival of peasants but also the accumulation of an agricultural surplus for investment. They had access to urban merchants, markets, and trade routes.

Increased agricultural production in turn facilitated rural industry, an intrinsic part of the expansion of industry. Woolens and textile manufacturers, in particular, utilized rural cottage (in-home) production, which took advantage of cheap and plentiful rural labor. In the German states, the ravages of the Thirty Years’ War (1618-1648) further moved textile production into the countryside. Members of poor peasant families spun or wove cloth and linens at home for scant remuneration in an attempt to supplement meager family income.

More extended trading networks also helped develop Europe’s economy in this period. English and Dutch ships carrying rye from the Baltic states reached Spain and Portugal. Population growth generated an expansion of small-scale manufacturing, particularly of handicrafts, textiles, and metal production in England, Flanders, parts of northern Italy, the southwestern German states, and parts of Spain. Only iron smelting and mining required marshaling a significant amount of capital (wealth invested to create more wealth).

The development of banking and other financial services contributed to the expansion of trade. By the middle of the sixteenth century, financiers and traders commonly accepted bills of exchange in place of gold or silver for other goods. Bills of exchange, which had their origins in medieval Italy, were promissory notes (written promises to pay a specified amount of money by a certain date) that could be sold to third parties.

■In this way, they provided credit. At mid-century, an Antwerp financier only slightly exaggerated when he claimed, “One can no more trade without bills of exchange than sail without water.” ■Merchants no longer had to carry gold and silver over long, dangerous journeys. ■ An Amsterdam merchant purchasing soap from a merchant in Marseille could go to an exchanger and pay the exchanger the equivalent sum in guilders, the Dutch currency. ■The exchanger would then send a bill of exchange to a colleague in Marseille, authorizing the colleague to pay the Marseille merchant in the merchant’s own currency after the actual exchange of goods had taken place.

Bills of exchange contributed to the development of banks, as exchangers began to provide loans. Not until the eighteenth century, however, did such banks as the Bank of Amsterdam and the Bank of England begin to provide capital for business investment. Their principal function was to provide funds for the state.

The rapid expansion in international trade also benefitted from an infusion of capital, stemming largely from gold and silver brought by Spanish vessels from the Americas. This capital financed the production of goods, storage, trade, and even credit across Europe and overseas. Moreover, an increased credit supply was generated by investments and loans by bankers and wealthy merchants to states and by joint-stock partnerships, an English innovation (the first major company began in 1600). Unlike short-term financial cooperation between investors for a single commercial undertaking, joint-stock companies provided permanent funding of capital by drawing on the investments of merchants and other investors who purchased shares in the company.

TPO11

TPO11: Ancient Egyptian Sculpture

     In order to understand ancient Egyptian art, it is vital to know as much as possible of the elite Egyptians’ view of the world and the functions and contexts of the art produced for them. Without this knowledge we can appreciate only the formal content of Egyptian art, and we will fail to understand why it was produced or the concepts that shaped it and caused it to adopt its distinctive forms. In fact, a lack of understanding concerning the purposes of Egyptian art has often led it to be compared unfavorably with the art of other cultures. Why did the Egyptians not develop sculpture in which the body turned and twisted through space like classical Greek statuary? Why do the artists seem to get left and right confused? And why did they not discover the geometric perspective as European artists did in the Renaissance? The answer to such questions has nothing to do with a lack of skill or imagination on the part of Egyptian artists and everything to do with the purposes for which they were producing their art.

     The majority of three-dimensional representations, whether standing, seated, or kneeling, exhibit what is called frontality: they face straight ahead, neither twisting nor turning. When such statues are viewed in isolation, out of their original context and without knowledge of their function, it is easy to criticize them for their rigid attitudes that remained unchanged for three thousand years. Frontality is, however, directly related to the functions of Egyptian statuary and the contexts in which the statues were set up. Statues were created not for their decorative effect but to play a primary role in the cults of the gods, the king, and the dead. They were designed to be put in places where these beings could manifest themselves in order to be the recipients of ritual actions. Thus it made sense to show the statue looking ahead at what was happening in front of it, so that the living performer of the ritual could interact with the divine or deceased recipient. Very often such statues were enclosed in rectangular shrines or wall niches whose only opening was at the front, making it natural for the statue to display frontality. Other statues were designed to be placed within an architectural setting, for instance, in front of the monumental entrance gateways to temples known as pylons, or in pillared courts, where they would be placed against or between pillars: their frontality worked perfectly within the architectural context.

     Statues were normally made of stone, wood, or metal. Stone statues were worked from single rectangular blocks of material and retained the compactness of the original shape. The stone between the arms and the body and between the legs in standing figures or the legs and the seat in seated ones was not normally cut away. From a practical aspect this protected the figures against breakage and psychologically gives the images a sense of strength and power, usually enhanced by a supporting back pillar. By contrast, wooden statues were carved from several pieces of wood that were pegged together to form the finished work, and metal statues were either made by wrapping sheet metal around a wooden core or cast by the lost wax process. The arms could be held away from the body and carry separate items in their hands; there is no back pillar. The effect is altogether lighter and freer than that achieved in stone, but because both perform the same function, formal wooden and metal statues still display frontality.

     Apart from statues representing deities, kings, and named members of the elite that can be called formal, there is another group of three-dimensional representations that depicts generic figures, frequently servants, from the nonelite population. ■ The function of these is quite different. ■ Many are made to be put in the tombs of the elite in order to serve the tomb owners in the afterlife. ■ Unlike formal statues that are limited to static poses of standing, sitting, and kneeling, these figures depict a wide range of actions, such as grinding grain, baking bread, producing pots, and making music, and they are shown in appropriate poses, bending and squatting as they carry out their tasks. ■

TPO11: Orientation and Navigation

     To South Americans, robins are birds that fly north every spring. To North Americans, the robins simply vacation in the south each winter. Furthermore, they fly to very specific places in South America and will often come back to the same trees in North American yards the following spring. The question is not why they would leave the cold of winter so much as how they find their way around. The question perplexed people for years, until, in the 1950’s, a German scientist named Gustave Kramer provided some answers and, in the process, raised new questions.

     Kramer initiated important new kinds of research regarding how animals orient and navigate. Orientation is simply facing in the right direction; navigation involves finding one’s way from point A to point B.

     Early in his research, Kramer found that caged migratory birds become very restless at about the time they would normally have begun migration in the wild. Furthermore, he noticed that as they fluttered around in the cage, they often launched themselves in the direction of their normal migratory route. He then set up experiments with caged starlings and found that their orientation was, in fact, in the proper migratory direction except when the sky was overcast, at which times there was no clear direction to their restless movements. Kramer surmised, therefore, that they were orienting according to the position of the Sun. To test this idea, he blocked their view of the Sun and used mirrors to change its apparent position. He found that under these circumstances, the birds oriented with respect to the new “Sun.” They seemed to be using the Sun as a compass to determine direction. At the time, this idea seemed preposterous. How could a bird navigate by the Sun when some of us lose our way with road maps? Obviously, more testing was in order.

     So, in another set of experiments, Kramer put identical food boxes around the cage, with food in only one of the boxes. ■  The boxes were stationary, and the one containing food was always at the same point of the compass. ■  However, its position with respect to the surroundings could be changed by revolving either the inner cage containing the birds or the outer walls, which served as the background. ■  As long as the birds could see the Sun, no matter how their surroundings were altered, they went directly to the correct food box. ■  Whether the box appeared in front of the right wall or the left wall, they showed no signs of confusion. On overcast days, however, the birds were disoriented and had trouble locating their food box.

     In experimenting with artificial suns, Kramer made another interesting discovery. If the artificial Sun remained stationary, the birds would shift their direction with respect to it at a rate of about 15 degrees per hour, the Sun’s rate of movement across the sky. Apparently, the birds were assuming that the “Sun” they saw was moving at that rate. When the real Sun was visible, however, the birds maintained a constant direction as it moved across the sky. In other words, they were able to compensate for the Sun’s movement. This meant that some sort of biological clock was operating – and a very precise clock at that.

     What about birds that migrate at night? Perhaps they navigate by the night sky. To test the idea, caged night-migrating birds were placed on the floor of a planetarium during their migratory period. A planetarium is essentially a theater with a domelike ceiling onto which a night sky can be projected for any night of the year. When the planetarium sky matched the sky outside, the birds fluttered in the direction of their normal migration. But when the dome was rotated, the birds changed their direction to match the artificial sky. The results clearly indicated that the birds were orienting according to the stars.

     There is accumulating evidence indicating that birds navigate by using a wide variety of environmental cues. Other areas under investigation include magnetism, landmarks, coastlines, sonar, and even smells. The studies are complicated by the fact that the data are sometimes contradictory and the mechanisms apparently change from time to time. Furthermore, one sensory ability may back up another.


TPO11: Begging by Nestlings

     Many signals that animals make seem to impose on the signalers costs that are overly damaging. ■ A classic example is noisy begging by nesting songbirds when a parent returns to the nest with food. ■ These loud cheeps and peeps might give the location of the nest away to a listening hawk or raccoon, resulting in the death of the defenseless nestlings. ■ In fact, when tapes of begging tree swallows were played at an artificial swallow nest containing an egg, the egg in that “noisy” nest was taken or destroyed by predators before the egg in a nearby quiet nest in 29 of 37 trials. ■

     Further evidence for the costs of begging comes from a study of differences in the begging calls of warbler species that nest on the ground versus those that nest in the relative safety of trees. The young of ground-nesting warblers produce begging cheeps of higher frequencies than do their tree-nesting relatives. These higher-frequency sounds do not travel as far, and so may better conceal the individuals producing them, who are especially vulnerable to predators in their ground nests. David Haskell created artificial nests with clay eggs and placed them on the ground beside a tape recorder that played the begging calls of either tree-nesting or of ground-nesting warblers. The eggs “advertised” by the tree-nesters’ begging calls were found bitten significantly more often than the eggs associated with the ground-nesters’ calls.

     The hypothesis that begging calls have evolved properties that reduce their potential for attracting predators yields a prediction: baby birds of species that experience high rates of nest predation should produce softer begging signals of higher frequency than nestlings of other species less often victimized by nest predators. This prediction was supported by data collected in one survey of 24 species from an Arizona forest, more evidence that predator pressure favors the evolution of begging calls that are hard to detect and pinpoint.

     Given that predators can make it costly to beg for food, what benefit do begging nestlings derive from their communications? One possibility is that a noisy baby bird provides accurate signals of its real hunger and good health, making it worthwhile for the listening parent to give it food in a nest where several other offspring are usually available to be fed. If this hypothesis is true, then it follows that nestlings should adjust the intensity of their signals in relation to the signals produced by their nestmates, who are competing for parental attention. When experimentally deprived baby robins are placed in a nest with normally fed siblings, the hungry nestlings beg more loudly than usual – but so do their better-fed siblings, though not as loudly as the hungrier birds.

     If parent birds use begging intensity to direct food to healthy offspring capable of vigorous begging, then parents should make food delivery decisions on the basis of their offspring’s calls. Indeed, if you take baby tree swallows out of a nest for an hour, feeding half the set and starving the other half, when the birds are replaced in the nest, the starved youngsters beg more loudly than the fed birds, and the parent birds feed the active beggars more than those who beg less vigorously.

     As these experiments show, begging apparently provides a signal of need that parents use to make judgments about which offspring can benefit most from a feeding. But the question arises, why don’t nestlings beg loudly when they aren’t all that hungry? By doing so, they could possibly secure more food, which should result in more rapid growth or larger size, either of which is advantageous. The answer lies apparently not in the increased energy costs of exaggerated begging – such energy costs are small relative to the potential gain in calories – but rather in the damage that any successful cheater would do to its siblings, which share genes with one another. An individual’s success in propagating his or her genes can be affected by more than just his or her own personal reproductive success. Because close relatives have many of the same genes, animals that harm their close relatives may in effect be destroying some of their own genes. Therefore, a begging nestling that secures food at the expense of its siblings might actually leave behind fewer copies of its genes overall than it might otherwise.


TPO12

TPO12: Which Hand Did They Use?

      We all know that many more people today are right-handed than left-handed. Can one trace this same pattern far back in prehistory? ■ Much of the evidence about right-hand versus left-hand dominance comes from stencils and prints found in rock shelters in Australia and elsewhere, and in many Ice Age caves in France, Spain, and Tasmania. ■ When a left hand has been stenciled, this implies that the artist was right-handed, and vice versa. ■ Even though the paint was often sprayed on by mouth, one can assume that the dominant hand assisted in the operation. One also has to make the assumption that hands were stenciled palm downward – a left hand stenciled palm upward might of course look as if it were a right hand. ■ Of 158 stencils in the French cave of Gargas, 136 have been identified as left, and only 22 as right; right-handedness was therefore heavily predominant.

     Cave art furnishes other types of evidence of this phenomenon. Most engravings, for example, are best lit from the left, as befits the work of right-handed artists, who generally prefer to have the light source on the left so that the shadow of their hand does not fall on the tip of the engraving tool or brush. In the few cases where an Ice Age figure is depicted holding something, it is mostly, though not always, in the right hand.

     Clues to right-handedness can also be found by other methods. Right-handers tend to have longer, stronger, and more muscular bones on the right side, and Marcellin Boule as long ago as 1911 noted that the La Chapelle-aux-Saints Neanderthal skeleton had a right upper arm bone that was noticeably stronger than the left. Similar observations have been made on other Neanderthal skeletons such as La Ferrassie I and Neanderthal itself.

     Fractures and other cut marks are another source of evidence. Right-handed soldiers tend to be wounded on the left. The skeleton of a 40- or 50-year-old Nabatean warrior, buried 2,000 years ago in the Negev Desert, Israel, had multiple healed fractures to the skull, the left arm, and the ribs.

     Tools themselves can be revealing. Long-handled Neolithic spoons of yew wood preserved in Alpine villages dating to 3000 B.C. have survived; the signs of rubbing on their left side indicate that their users were right-handed. The late Ice Age rope found in the French cave of Lascaux consists of fibers spiraling to the right, and was therefore tressed by a right-hander.

     Occasionally one can determine whether stone tools were used in the right hand or the left, and it is even possible to assess how far back this feature can be traced. In stone toolmaking experiments, Nick Toth, a right-hander, held the core (the stone that would become the tool) in his left hand and the hammer stone in his right. As the tool was made, the core was rotated clockwise, and the flakes, removed in sequence, had a little crescent of cortex (the core’s outer surface) on the side. Toth’s knapping produced 56 percent flakes with the cortex on the right, and 44 percent left-oriented flakes. A left-handed toolmaker would produce the opposite pattern. Toth has applied these criteria to the similarly made pebble tools from a number of early sites (before 1.5 million years) at Koobi Fora, Kenya, probably made by Homo habilis. At seven sites he found that 57 percent of the flakes were right-handed, and 43 percent left, a pattern almost identical to that produced today.

     About 90 percent of modern humans are right-handed: we are the only mammal with a preferential use of one hand. The part of the brain responsible for fine control and movement is located in the left cerebral hemisphere, and the findings above suggest that the human brain was already asymmetrical in its structure and function not long after 2 million years ago. Among Neanderthalers of 70,000 – 35,000 years ago, Marcellin Boule noted that the La Chapelle-aux-Saints individual had a left hemisphere slightly bigger than the right, and the same was found for brains of specimens from Neanderthal, Gibraltar, and La Quina.


TPO12: Transition to Sound in Film

     The shift from silent to sound film at the end of the 1920’s marks, so far, the most important transformation in motion picture history. Despite all the highly visible technological developments in theatrical and home delivery of the moving image that have occurred over the decades since then, no single innovation has come close to being regarded as a similar kind of watershed. In nearly every language, however the words are phrased, the most basic division in cinema history lies between films that are mute and films that speak.

     Yet this most fundamental standard of historical periodization conceals a host of paradoxes. Nearly every movie theater, however modest, had a piano or organ to provide musical accompaniment to silent pictures. In many instances, spectators in the era before recorded sound experienced elaborate aural presentations alongside movies’ visual images, from the Japanese benshi (narrators) crafting multivoiced dialogue narratives to original musical compositions performed by symphony-size orchestras in Europe and the United States. In Berlin, for the premiere performance outside the Soviet Union of The Battleship Potemkin, film director Sergei Eisenstein worked with Austrian composer Edmund Meisel (1894-1930) on a musical score matching sound to image; the Berlin screenings with live music helped to bring the film its wide international fame.

     Beyond that, the triumph of recorded sound has overshadowed the rich diversity of technological and aesthetic experiments with the visual image that were going forward simultaneously in the 1920’s. New color processes, larger or differently shaped screen sizes, multiple-screen projections, even television, were among the developments invented or tried out during the period, sometimes with startling success. The high costs of converting to sound and the early limitations of sound technology were among the factors that suppressed innovations or retarded advancement in these other areas. The introduction of new screen formats was put off for a quarter century, and color, though utilized over the next two decades for special productions, also did not become a norm until the 1950’s.

     Though it may be difficult to imagine from a later perspective, a strain of critical opinion in the 1920’s predicted that sound film would be a technical novelty that would soon fade from sight, just as had many previous attempts, dating well back before the First World War, to link images with recorded sound. These critics were making a common assumption – that the technological inadequacies of earlier efforts (poor synchronization, weak sound amplification, fragile sound recordings) would invariably occur again. To be sure, their evaluation of the technical flaws in 1920’s sound experiments was not so far off the mark, yet they neglected to take into account important new forces in the motion picture field that, in a sense, would not take no for an answer.

     These forces were the rapidly expanded electronics and telecommunications companies that were developing and linking telephone and wireless technologies in the 1920’s. In the United States, they included such firms as American Telephone and Telegraph, General Electric, and Westinghouse. They were interested in all forms of sound technology and all potential revenues for commercial exploitation. Their competition and collaboration were creating the broadcasting industry in the United States, beginning with the introduction of commercial radio programming in the early 1920’s. ■ With financial assets considerably greater than those in the motion picture industry, and perhaps a wider vision of the relationships among entertainment and communications media, they revitalized research into recording sound for motion pictures.

     ■ In 1929 the United States motion picture industry released more than 300 sound films – a rough figure, since a number were silent films with music tracks, or films prepared in dual versions, to take account of the many cinemas not yet wired for sound. ■ At the production level, in the United States the conversion was virtually complete by 1930. ■ In Europe it took a little longer, mainly because there were more small producers for whom the costs of sound were prohibitive, and in other parts of the world problems with rights or access to equipment delayed the shift to sound production for a few more years (though cinemas in major cities may have been wired in order to play foreign sound films). The triumph of sound cinema was swift, complete, and enormously popular.


TPO12: Water in the Desert

     Rainfall is not completely absent in desert areas, but it is highly variable. An annual rainfall of four inches is often used to define the limits of a desert. The impact of rainfall upon the surface water and groundwater resources of the desert is greatly influenced by landforms. Flats and depressions where water can collect are common features, but they make up only a small part of the landscape.

     Arid lands, surprisingly, contain some of the world’s largest river systems, such as the Murray-Darling in Australia, the Rio Grande in North America, the Indus in Asia, and the Nile in Africa. These rivers and river systems are known as “exogenous” because their sources lie outside the arid zone. They are vital for sustaining life in some of the driest parts of the world. For centuries, the annual floods of the Nile, Tigris, and Euphrates, for example, have brought fertile silts and water to the inhabitants of their lower valleys. Today, river discharges are increasingly controlled by human intervention, creating a need for international river-basin agreements. The filling of the Ataturk and other dams in Turkey has drastically reduced flows in the Euphrates, with potentially serious consequences for Syria and Iraq.

     The flow of exogenous rivers varies with the season. The desert sections of long rivers respond several months after rain has fallen outside the desert, so that peak flows may be in the dry season. This is useful for irrigation, but the high temperatures, low humidities, and different day lengths of the dry season, compared to the normal growing season, can present difficulties with some crops.

     Regularly flowing rivers and streams that originate within arid lands are known as “endogenous.” These are generally fed by groundwater springs, and many issue from limestone massifs, such as the Atlas Mountains in Morocco. Basaltic rocks also support springs, notably at the Jabal Al-Arab on the Jordan-Syria border. ■ Endogenous rivers often do not reach the sea but drain into inland basins, where the water evaporates or is lost in the ground. ■ Most desert streambeds are normally dry, but they occasionally receive large flows of water and sediment. ■

     Deserts contain large amounts of groundwater when compared to the amounts they hold in surface stores such as lakes and rivers. ■ But only a small fraction of groundwater enters the hydrological cycle – feeding the flows of streams, maintaining lake levels, and being recharged (or refilled) through surface flows and rainwater. In recent years, groundwater has become an increasingly important source of freshwater for desert dwellers. The United Nations Environment Programme and the World Bank have funded attempts to survey the groundwater resources of arid lands and to develop appropriate extraction techniques. Such programs are much needed because in many arid lands there is only a vague idea of the extent of groundwater resources. It is known, however, that the distribution of groundwater is uneven, and that much of it lies at great depths.

     Groundwater is stored in the pore spaces and joints of rocks and unconsolidated (unsolidified) sediments or in the openings widened through fractures and weathering. The water-saturated rock or sediment is known as an “aquifer.” Because they are porous, sedimentary rocks, such as sandstones and conglomerates, are important potential sources of groundwater. Large quantities of water may also be stored in limestones when joints and cracks have been enlarged to form cavities. Most limestone and sandstone aquifers are deep and extensive but may contain groundwaters that are not being recharged. Most shallow aquifers in sand and gravel deposits produce lower yields, but they can be rapidly recharged. Some deep aquifers are known as “fossil” waters. The term “fossil” describes water that has been present for several thousand years. These aquifers became saturated more than 10,000 years ago and are no longer being recharged.

     Water does not remain immobile in an aquifer but can seep out at springs or leak into other aquifers. The rate of movement may be very slow: in the Indus plain, the movement of saline (salty) groundwaters has still not reached equilibrium after 70 years of being tapped. The mineral content of groundwater normally increases with the depth, but even quite shallow aquifers can be highly saline.

TPO13

TPO13: Types of Social Groups

Life places us in a complex web of relationships with other people. Our humanness arises out of these relationships in the course of social interaction. Moreover, our humanness must be sustained through social interaction – and fairly constantly so. When an association continues long enough for two people to become linked together by a relatively stable set of expectations, it is called a relationship.

     People are bound within relationships by two types of bonds: expressive ties and instrumental ties. Expressive ties are social links formed when we emotionally invest ourselves in and commit ourselves to other people. Through association with people who are meaningful to us, we achieve a sense of security, love, acceptance, companionship, and personal worth. Instrumental ties are social links formed when we cooperate with other people to achieve some goal. Occasionally, this may mean working with instead of against competitors. More often, we simply cooperate with others to reach some end without endowing the relationship with any larger significance.

     Sociologists have built on the distinction between expressive and instrumental ties to distinguish between two types of groups: primary and secondary. A primary group involves two or more people who enjoy a direct, intimate, cohesive relationship with one another. Expressive ties predominate in primary groups; we view the people as ends in themselves and valuable in their own right. A secondary group entails two or more people who are involved in an impersonal relationship and have come together for a specific, practical purpose. Instrumental ties predominate in secondary groups; we perceive people as means to ends rather than as ends in their own right.

     Sometimes primary group relationships evolve out of secondary group relationships. This happens in many work settings. People on the job often develop close relationships with coworkers as they come to share gripes, jokes, gossip, and satisfactions.

     A number of conditions enhance the likelihood that primary groups will arise. First, group size is important. We find it difficult to get to know people personally when they are milling about and dispersed in large groups. In small groups we have a better chance to initiate contact and establish rapport with them. Second, face-to-face contact allows us to size up others. Seeing and talking with one another in close physical proximity makes possible a subtle exchange of ideas and feelings. And third, the probability that we will develop primary group bonds increases as we have frequent and continuous contact. Our ties with people often deepen as we interact with them across time and gradually evolve interlocking habits and interests.

     Primary groups are fundamental to us and to society. First, primary groups are critical to the socialization process. Within them, infants and children are introduced to the ways of their society. Such groups are the breeding grounds in which we acquire the norms and values that equip us for social life. Sociologists view primary groups as bridges between individuals and the larger society because they transmit, mediate, and interpret a society’s cultural patterns and provide the sense of oneness so critical for social solidarity.

     Second, primary groups are fundamental because they provide the settings in which we meet most of our personal needs. Within them, we experience companionship, love, security, and an overall sense of well-being. Not surprisingly, sociologists find that the strength of a group’s primary ties has implications for the group’s functioning. For example, the stronger the primary group ties of a sports team playing together, the better their record is.

     Third, primary groups are fundamental because they serve as powerful instruments for social control. Their members command and dispense many of the rewards that are so vital to us and that make our lives seem worthwhile. Should the use of rewards fail, members can frequently win by rejecting or threatening to ostracize those who deviate from the primary group’s norms. For instance, some social groups employ shunning (a person can remain in the community, but others are forbidden to interact with the person) as a device to bring into line individuals whose behavior goes beyond that allowed by the particular group. Even more important, primary groups define social reality for us by structuring our experiences. By providing us with definitions of situations, they elicit from us behavior that conforms to group-devised meanings. Primary groups, then, serve both as carriers of social norms and as enforcers of them.

TPO13: Biological Clocks

     Survival and successful reproduction usually require the activities of animals to be coordinated with predictable events around them. Consequently, the timing and rhythms of biological functions must closely match periodic events like the solar day, the tides, the lunar cycle, and the seasons. The relations between animal activity and these periods, particularly for the daily rhythms, have been of such interest and importance that a huge amount of work has been done on them and the special research field of chronobiology has emerged. Normally, the constantly changing levels of an animal’s activity – sleeping, feeding, moving, reproducing, metabolizing, and producing enzymes and hormones, for example – are well coordinated with environmental rhythms, but the key question is whether the animal’s schedule is driven by external cues, such as sunrise or sunset, or is instead dependent somehow on internal timers that themselves generate the observed biological rhythms. Almost universally, biologists accept the idea that all eukaryotes (a category that includes most organisms except bacteria and certain algae) have internal clocks. By isolating organisms completely from external periodic cues, biologists learned that organisms have internal clocks. For instance, apparently normal daily periods of biological activity were maintained for about a week by the fungus Neurospora when it was intentionally isolated from all geophysical timing cues while orbiting in a space shuttle. The continuation of biological rhythms in an organism without external cues attests to its having an internal clock.

     When crayfish are kept continuously in the dark, even for four to five months, their compound eyes continue to adjust on a daily schedule for daytime and nighttime vision. Horseshoe crabs kept in the dark continuously for a year were found to maintain a persistent rhythm of brain activity that similarly adapts their eyes on a daily schedule for bright or for weak light. Like almost all daily cycles of animals deprived of environmental cues, those measured for the horseshoe crabs in these conditions were not exactly 24 hours. Such a rhythm whose period is approximately – but not exactly – a day is called circadian. For different individual horseshoe crabs, the circadian period ranged from 22.2 to 25.5 hours. A particular animal typically maintains its own characteristic cycle duration with great precision for many days. Indeed, stability of the biological clock’s period is one of its major features, even when the organism’s environment is subjected to considerable changes in factors, such as temperature, that would be expected to affect biological activity strongly. Further evidence for persistent internal rhythms appears when the usual external cycles are shifted – either experimentally or by rapid east-west travel over great distances. Typically, the animal’s daily internally generated cycle of activity continues without change. As a result, its activities are shifted relative to the external cycle of the new environment. The disorienting effects of this mismatch between external time cues and internal schedules may persist, like our jet lag, for several days or weeks until certain cues such as the daylight/darkness cycle reset the organism’s clock to synchronize with the daily rhythm of the new environment.

     Animals need natural periodic signals like sunrise to maintain a cycle whose period is precisely 24 hours.   Such an external cue not only coordinates an animal’s daily rhythms with particular features of the local solar day but also – because it normally does so day after day – seems to keep the internal clock’s period close to that of Earth’s rotation.  Yet despite this synchronization of the period of the internal cycle, the animal’s timer itself continues to have its own genetically built-in period close to, but different from, 24 hours.  Without the external cue, the difference accumulates and so the internally regulated activities of the biological day drift continuously, like the tides, in relation to the solar day.   This drift has been studied extensively in many animals and in biological activities ranging from the hatching of fruit fly eggs to wheel running by squirrels. Light has a predominating influence in setting the clock. Even a fifteen-minute burst of light in otherwise sustained darkness can reset an animal’s circadian rhythm. Normally, internal rhythms are kept in step by regular environmental cycles. For instance, if a homing pigeon is to navigate with its Sun compass, its clock must be properly set by cues provided by the daylight/darkness cycle.

Methods of Studying Infant Perception

     In the study of perceptual abilities of infants, a number of techniques are used to determine infants’ responses to various stimuli. Because they cannot verbalize or fill out questionnaires, indirect techniques of naturalistic observation are used as the primary means of determining what infants can see, hear, feel, and so forth. Each of these methods compares an infant’s state prior to the introduction of a stimulus with its state during or immediately following the stimulus. The difference between the two measures provides the researcher with an indication of the level and duration of the response to the stimulus. For example, if a uniformly moving pattern of some sort is passed across the visual field of a neonate (newborn), repetitive following movements of the eye occur. The occurrence of these eye movements provides evidence that the moving pattern is perceived at some level by the newborn. Similarly, changes in the infant’s general level of motor activity – turning the head, blinking the eyes, crying, and so forth – have been used by researchers as visual indicators of the infant’s perceptual abilities.

     Such techniques, however, have limitations. First, the observation may be unreliable in that two or more observers may not agree that the particular response occurred, or to what degree it occurred. Second, responses are difficult to quantify. Often the rapid and diffuse movements of the infant make it difficult to get an accurate record of the number of responses. The third, and most potent, limitation is that it is not possible to be certain that the infant’s response was due to the stimulus presented or to a change from no stimulus to a stimulus. The infant may be responding to aspects of the stimulus different from those identified by the investigator. Therefore, when observational assessment is used as a technique for studying infant perceptual abilities, care must be taken not to overgeneralize from the data or to rely on one or two studies as conclusive evidence of a particular perceptual ability of the infant.

     Observational assessment techniques have become much more sophisticated, reducing the limitations just presented. Film analysis of the infant’s responses, heart and respiration rate monitors, and nonnutritive sucking devices are used as effective tools in understanding infant perception. Film analysis permits researchers to carefully study the infant’s responses over and over and in slow motion. Precise measurements can be made of the length and frequency of the infant’s attention between two stimuli. Heart and respiration monitors provide the investigator with the number of heartbeats or breaths taken when a new stimulus is presented. Numerical increases are used as quantifiable indicators of heightened interest in the new stimulus. Increases in nonnutritive sucking were first used as an assessment measure by researchers in 1969. They devised an apparatus that connected a baby’s pacifier to a counting device. As stimuli were presented, changes in the infant’s sucking behavior were recorded. Increases in the number of sucks were used as an indicator of the infant’s attention to or preference for a given visual display.

     Two additional techniques of studying infant perception have come into vogue. The first is the habituation-dishabituation technique, in which a single stimulus is presented repeatedly to the infant until there is a measurable decline (habituation) in whatever attending behavior is being observed. At that point a new stimulus is presented, and any recovery (dishabituation) in responsiveness is recorded. If the infant fails to dishabituate and continues to show habituation with the new stimulus, it is assumed that the baby is unable to perceive the new stimulus as different. The habituation-dishabituation paradigm has been used most extensively with studies of auditory and olfactory perception in infants. The second technique relies on evoked potentials, which are electrical brain responses that may be related to a particular stimulus because of where they originate. Changes in the electrical pattern of the brain indicate that the stimulus is getting through to the infant’s central nervous system and eliciting some form of response.
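
     The habituation-dishabituation procedure is essentially a decision rule, and a short sketch can make its logic concrete. The following is a hypothetical illustration only: the 50 percent decline criterion, the function name, and the looking-time numbers are assumptions chosen for the example, not details given in the passage.

def interpret_trial(old_responses, new_responses):
    """Interpret an infant's attending measures (e.g., looking times in seconds).

    old_responses: measures across repeated presentations of the familiar stimulus
    new_responses: measures once the novel stimulus is introduced
    """
    baseline = old_responses[0]
    # Habituation: a measurable decline in attending behavior (here, a drop
    # below half the initial level -- an assumed criterion).
    if old_responses[-1] >= 0.5 * baseline:
        return "no habituation observed; trial inconclusive"
    # Dishabituation: any recovery in responsiveness to the new stimulus.
    if new_responses[0] > old_responses[-1]:
        return "dishabituation: infant perceives the new stimulus as different"
    return "continued habituation: new stimulus may not be perceived as different"

# Example: looking time falls from 10 s to 4 s, then recovers to 9 s.
print(interpret_trial([10.0, 7.0, 4.0], [9.0]))

     In this sketch the recovery step is what licenses the inference of discrimination; a failure to recover, as the passage notes, is read as the infant being unable to perceive the new stimulus as different.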

     Each of the preceding techniques provides the researcher with evidence that the infant can detect or discriminate between stimuli. With these sophisticated observational assessment and electrophysiological measures, we know that the neonate of only a few days is far more perceptive than previously suspected. However, these measures are only “indirect” indicators of the infant’s perceptual abilities.

TPO14

Children and Advertising

Young children are trusting of commercial advertisements in the media, and advertisers have sometimes been accused of taking advantage of this trusting outlook. The Independent Television Commission, regulator of television advertising in the United Kingdom, has criticized advertisers for “misleadingness”—creating a wrong impression either intentionally or unintentionally—in an effort to control advertisers’ use of techniques that make it difficult for children to judge the true size, action, performance, or construction of a toy.

General concern about misleading tactics that advertisers employ is centered on the use of exaggeration. Consumer protection groups and parents believe that children are largely ill-equipped to recognize such techniques and that often exaggeration is used at the expense of product information. Claims such as “the best” or “better than” can be subjective and misleading; even adults may be unsure as to their meaning. They represent the advertiser’s opinions about the qualities of their products or brand and, as a consequence, are difficult to verify. Advertisers sometimes offset or counterbalance an exaggerated claim with a disclaimer—a qualification or condition on the claim. For example, the claim that breakfast cereal has a health benefit may be accompanied by the disclaimer “when part of a nutritionally balanced breakfast.” However, research has shown that children often have difficulty understanding disclaimers: children may interpret the phrase “when part of a nutritionally balanced breakfast” to mean that the cereal is required as a necessary part of a balanced breakfast. The author George Comstock suggested that less than a quarter of children between the ages of six and eight years old understood standard disclaimers used in many toy advertisements and that disclaimers are more readily comprehended when presented in both audio and visual formats. Nevertheless, disclaimers are mainly presented in audio format only.

Fantasy is one of the more common techniques in advertising that could possibly mislead a young audience. Child-oriented advertisements are more likely to include magic and fantasy than advertisements aimed at adults. In a content analysis of Canadian television, the author Stephen Kline observed that nearly all commercials for character toys featured fantasy play. Children have strong imaginations and the use of fantasy brings their ideas to life, but children may not be adept enough to realize that what they are viewing is unreal. Fantasy situations and settings are frequently used to attract children’s attention, particularly in food advertising. Advertisements for breakfast cereals have, for many years, been found to be especially fond of fantasy techniques, with almost nine out of ten including such content. Generally, there is uncertainty as to whether very young children can distinguish between fantasy and reality in advertising. Certainly, rational appeals in advertising aimed at children are limited, as most advertisements use emotional and indirect appeals to psychological states or associations.

The use of celebrities such as singers and movie stars is common in advertising. The intention is for the positively perceived attributes of the celebrity to be transferred to the advertised product and for the two to become automatically linked in the audience’s mind. In children’s advertising, the “celebrities” are often animated figures from popular cartoons. In the recent past, the role of celebrities in advertising to children has often been conflated with the concept of host selling. Host selling involves blending advertisements with regular programming in a way that makes it difficult to distinguish one from the other. Host selling occurs, for example, when a children’s show about a cartoon lion contains an ad in which the same lion promotes a breakfast cereal. The psychologist Dale Kunkel showed that the practice of host selling reduced children’s ability to distinguish between advertising and program material. It was also found that older children responded more positively to products in host selling advertisements.

Regarding the appearance of celebrities in advertisements that do not involve host selling, the evidence is mixed. Researcher Charles Atkin found that children believe that the characters used to advertise breakfast cereals are knowledgeable about cereals, and children accept such characters as credible sources of nutritional information. This finding was even more marked for heavy viewers of television. In addition, children feel validated in their choice of a product when a celebrity endorses that product. A study of children in Hong Kong, however, found that the presence of celebrities in advertisements could negatively affect the children’s perceptions of a product if the children did not like the celebrity in question.

Maya Water Problems

To understand the ancient Mayan people who lived in the area that is today southern Mexico and Central America and the ecological difficulties they faced, one must first consider their environment, which we think of as “jungle” or “tropical rainforest.” This view is inaccurate, and the reason proves to be important. Properly speaking, tropical rainforests grow in high-rainfall equatorial areas that remain wet or humid all year round. But the Maya homeland lies more than sixteen hundred kilometers from the equator, at latitudes 17 to 22 degrees north, in a habitat termed a “seasonal tropical forest.” That is, while there does tend to be a rainy season from May to October, there is also a dry season from January through April. If one focuses on the wet months, one calls the Maya homeland a “seasonal tropical forest”; if one focuses on the dry months, one could instead describe it as a “seasonal desert.” 

From north to south in the Yucatan Peninsula, where the Maya lived, rainfall ranges from 18 to 100 inches (457 to 2,540 millimeters) per year, and the soils become thicker, so that the southern peninsula was agriculturally more productive and supported denser populations. But rainfall in the Maya homeland is unpredictably variable between years; some recent years have had three or four times more rain than other years. As a result, modern farmers attempting to grow corn in the ancient Maya homelands have faced frequent crop failures, especially in the north. The ancient Maya were presumably more experienced and did better, but nevertheless they too must have faced risks of crop failures from droughts and hurricanes.

Although southern Maya areas received more rainfall than northern areas, problems of water were paradoxically more severe in the wet south. While that made things hard for ancient Maya living in the south, it has also made things hard for modern archaeologists who have difficulty understanding why ancient droughts caused bigger problems in the wet south than in the dry north. The likely explanation is that an area of underground freshwater underlies the Yucatan Peninsula, but surface elevation increases from north to south, so that as one moves south the land surface lies increasingly higher above the water table. In the northern peninsula the elevation is sufficiently low that the ancient Maya were able to reach the water table at deep sinkholes called cenotes, or at deep caves. In low-elevation north coastal areas without sinkholes, the Maya would have been able to get down to the water table by digging wells up to 75 feet (22 meters) deep. But much of the south lies too high above the water table for cenotes or wells to reach down to it. Making matters worse, most of the Yucatan Peninsula consists of karst, a porous sponge-like limestone terrain where rain runs straight into the ground and where little or no surface water remains available.

How did those dense southern Maya populations deal with the resulting water problem? It initially surprises us that many of their cities were not built next to the rivers but instead on high terrain in rolling uplands. The explanation is that the Maya excavated depressions, or modified natural depressions, and then plugged up leaks in the karst by plastering the bottoms of the depressions in order to create reservoirs, which collected rain from large plastered catchment basins and stored it for use in the dry season. For example, reservoirs at the Maya city of Tikal held enough water to meet the drinking water needs of about 10,000 people for a period of 18 months. At the city of Coba the Maya built dikes around a lake in order to raise its level and make their water supply more reliable. But the inhabitants of Tikal and other cities dependent on reservoirs for drinking water would still have been in deep trouble if 18 months passed without rain in a prolonged drought. A shorter drought in which they exhausted their stored food supplies might already have gotten them in deep trouble, because growing crops required rain rather than reservoirs. 

Pastoralism in Ancient Inner Eurasia

Pastoralism is a lifestyle in which economic activity is based primarily on livestock. Archaeological evidence suggests that by 3000 B.C., and perhaps even earlier, there had emerged on the steppes of Inner Eurasia the distinctive types of pastoralism that were to dominate the region’s history for several millennia. Here, the horse was already becoming the animal of prestige in many regions, though sheep, goats, and cattle could also play a vital role. It is the use of horses for transportation and warfare that explains why Inner Eurasian pastoralism proved the most mobile and the most militaristic of all major forms of pastoralism. The emergence and spread of pastoralism had a profound impact on the history of Inner Eurasia, and also, indirectly, on the parts of Asia and Europe just outside this area. In particular, pastoralism favors a mobile lifestyle, and this mobility helps to explain the impact of pastoralist societies on this part of the world.

The mobility of pastoralist societies reflects their dependence on animal-based foods. While agriculturalists rely on domesticated plants, pastoralists rely on domesticated animals. As a result, pastoralists, like carnivores in general, occupy a higher position on the food chain. All else being equal, this means they must exploit larger areas of land than do agriculturalists to secure the same amount of food, clothing, and other necessities. So pastoralism is a more extensive lifeway than farming is. However, the larger the terrain used to support a group, the harder it is to exploit that terrain while remaining in one place. So, basic ecological principles imply a strong tendency within pastoralist lifeways toward nomadism (a mobile lifestyle). As the archaeologist Roger Cribb puts it, “The greater the degree of pastoralism, the stronger the tendency toward nomadism.” A modern Turkic nomad interviewed by Cribb commented: “The more animals you have, the farther you have to move.”

Nomadism has further consequences. It means that pastoralist societies occupy and can influence very large territories. This is particularly true of the horse pastoralism that emerged in the Inner Eurasian steppes, for this was the most mobile of all major forms of pastoralism. So, it is no accident that with the appearance of pastoralist societies there appear large areas that share similar cultural, ecological, and even linguistic features. By the late fourth millennium B.C., there is already evidence of large culture zones reaching from Eastern Europe to the western borders of Mongolia. Perhaps the most striking sign of mobility is the fact that by the third millennium B.C., most pastoralists in this huge region spoke related languages ancestral to the modern Indo-European languages. The remarkable mobility and range of pastoral societies explain, in part, why so many linguists have argued that the Indo-European languages began their astonishing expansionist career not among farmers in Anatolia (present-day Turkey), but among early pastoralists from Inner Eurasia. Such theories imply that the Indo-European languages evolved not in Neolithic (10,000 to 3,000 B.C.) Anatolia, but among the foraging communities of the cultures in the region of the Don and Dnieper rivers, which took up stock breeding and began to exploit the neighboring steppes.

Nomadism also subjects pastoralist communities to strict rules of portability. If you are constantly on the move, you cannot afford to accumulate large material surpluses. Such rules limit variations in accumulated material goods between pastoralist households (though they may also encourage a taste for portable goods of high value such as silks or jewelry). So, by and large, nomadism implies a high degree of self-sufficiency and inhibits the appearance of an extensive division of labor. Inequalities of wealth and rank certainly exist, and have probably existed in most pastoralist societies, but except in periods of military conquest, they are normally too slight to generate the stable, hereditary hierarchies that are usually implied by the use of the term class.  Inequalities of gender have also existed in pastoralist societies, but they seem to have been softened by the absence of steep hierarchies of wealth in most communities, and also by the requirement that women acquire most of the skills of men, including, often, their military skills.

TPO15

Glacier Formation

     Glaciers are slowly moving masses of ice that have accumulated on land in areas where more snow falls during a year than melts. Snow falls as hexagonal crystals, but once on the ground, snow is soon transformed into a compacted mass of smaller, rounded grains. As the air space around them is lessened by compaction and melting, the grains become denser. With further melting, refreezing, and increased weight from newer snowfall above, the snow reaches a granular recrystallized stage intermediate between flakes and ice known as firn. With additional time, pressure, and refrozen meltwater from above, the small firn granules become larger, interlocked crystals of blue glacial ice. When the ice is thick enough, usually over 30 meters, the weight of the snow and firn will cause the ice crystals toward the bottom to become plastic and to flow outward or downward from the area of snow accumulation.

     Glaciers are open systems, with snow as the system’s input and meltwater as the system’s main output. The glacial system is governed by two basic climatic variables: precipitation and temperature. For a glacier to grow or maintain its mass, there must be sufficient snowfall to match or exceed the annual loss through melting, evaporation, and calving, which occurs when the glacier loses solid chunks as icebergs to the sea or to large lakes. If summer temperatures are high for too long, then all the snowfall from the previous winter will melt. Surplus snowfall is essential for a glacier to develop. A surplus allows snow to accumulate and for the pressure of snow accumulated over the years to transform buried snow into glacial ice with a depth great enough for the ice to flow. Glaciers are sometimes classified by temperature as faster-flowing temperate glaciers or as slower-flowing polar glaciers.
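
     The growth condition described here can be written as a simple annual mass balance (a schematic restatement of the paragraph, not a formula from the text):

$$\frac{dM}{dt} = \text{snowfall input} - (\text{melting} + \text{evaporation} + \text{calving}),$$

     and a glacier develops or maintains its mass only while this balance is zero or positive, that is, while surplus snowfall at least offsets the annual losses.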

     Glaciers are part of Earth’s hydrologic cycle and are second only to the oceans in the total amount of water contained. About 2 percent of Earth’s water is currently frozen as ice. Two percent may be a deceiving figure, however, since over 80 percent of the world’s freshwater is locked up as ice in glaciers, with the majority of it in Antarctica. The total amount of ice is even more awesome if we estimate the water released upon the hypothetical melting of the world’s glaciers. Sea level would rise about 60 meters. This would change the geography of the planet considerably. In contrast, should another ice age occur, sea level would drop drastically. During the last ice age, sea level dropped about 120 meters.

     When snow falls on high mountains or in polar regions, it may become part of the glacial system. Unlike rain, which returns rapidly to the sea or atmosphere, the snow that becomes part of a glacier is involved in a much more slowly cycling system. Here water may be stored in ice form for hundreds or even hundreds of thousands of years before being released again into the liquid water system as meltwater. In the meantime, however, this ice is not static. Glaciers move slowly across the land with tremendous energy, carving into even the hardest rock formations and thereby reshaping the landscape as they engulf, push, drag, and finally deposit rock debris in places far from its original location. As a result, glaciers create a great variety of landforms that remain long after the surface is released from its icy covering.

     Throughout most of Earth’s history, glaciers did not exist, but at the present time about 10 percent of Earth’s land surface is covered by glaciers. Present-day glaciers are found in Antarctica, in Greenland, and at high elevations on all the continents except Australia. In the recent past, from about 2.4 million to about 10,000 years ago, nearly a third of Earth’s land area was periodically covered by ice thousands of meters thick. In the much more distant past, other ice ages have occurred.

A Warm-Blooded Turtle

     When it comes to physiology, the leatherback turtle is, in some ways, more like a reptilian whale than a turtle. It swims farther into the cold of the northern and southern oceans than any other sea turtle, and it deals with the chilly waters in a way unique among reptiles.

     A warm-blooded turtle may seem to be a contradiction in terms. Nonetheless, an adult leatherback can maintain a body temperature of between 25 and 26° C (77-79° F) in seawater that is only 8° C (46.4° F). Accomplishing this feat requires adaptations both to generate heat in the turtle’s body and to keep it from escaping into the surrounding waters. Leatherbacks apparently do not generate internal heat the way we do, or the way birds do, as a by-product of cellular metabolism. A leatherback may be able to pick up some body heat by basking at the surface; its dark, almost black body color may help it to absorb solar radiation. However, most of its internal heat comes from the action of its muscles.

     Leatherbacks keep their body heat in three different ways. The first, and simplest, is size. The bigger the animal is, the lower its surface-to-volume ratio; for every ounce of body mass, there is proportionately less surface through which heat can escape. An adult leatherback is twice the size of the biggest cheloniid sea turtles and will therefore take longer to cool off. Maintaining a high body temperature through sheer bulk is called gigantothermy.  It works for elephants, for whales, and, perhaps, it worked for many of the larger dinosaurs.  It apparently works, in a smaller way, for some other sea turtles.  Large loggerhead and green turtles can maintain their body temperature at a degree or two above that of the surrounding water, and gigantothermy is probably the way they do it. Muscular activity helps, too, and an actively swimming green turtle may be 7° C (12.6° F) warmer than the waters it swims through.
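
     The surface-to-volume claim follows from elementary geometry (a sphere is used here purely as an idealization of body shape). For a sphere of radius r,

$$\frac{S}{V} = \frac{4\pi r^{2}}{\tfrac{4}{3}\pi r^{3}} = \frac{3}{r},$$

     so doubling an animal’s linear size halves the surface area available per unit of heat-producing volume. This is the arithmetic behind gigantothermy: the larger the body, the more slowly it loses the heat its muscles generate.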

     Gigantothermy, though, would not be enough to keep a leatherback warm in cold northern waters. It is not enough for whales, which supplement it with a thick layer of insulating blubber (fat). Leatherbacks do not have blubber, but they do have a reptilian equivalent: thick, oil-saturated skin, with a layer of fibrous, fatty tissue just beneath it. Insulation protects the leatherback everywhere but on its head and flippers. Because the flippers are comparatively thin and bladelike, they are the one part of the leatherback that is likely to become chilled. There is not much that the turtle can do about this without compromising the aerodynamic shape of the flipper. The problem is that as blood flows through the turtle’s flippers, it risks losing enough heat to lower the animal’s central body temperature when it returns. The solution is to allow the flippers to cool down without drawing heat away from the rest of the turtle’s body. The leatherback accomplishes this by arranging the blood vessels in the base of its flipper into a countercurrent exchange system.

     In a countercurrent exchange system, the blood vessels carrying cooled blood from the flippers run close enough to the blood vessels carrying warm blood from the body to pick up some heat from the warmer blood vessels; thus, the heat is transferred from the outgoing to the ingoing vessels before it reaches the flipper itself. This is the same arrangement found in an old-fashioned steam radiator, in which the coiled pipes pass heat back and forth as water courses through them. The leatherback is certainly not the only animal with such an arrangement; gulls have a countercurrent exchange in their legs. That is why a gull can stand on an ice floe without freezing.

     All this applies, of course, only to an adult leatherback. Hatchlings are simply too small to conserve body heat, even with insulation and countercurrent exchange systems. We do not know how old, or how large, a leatherback has to be before it can switch from a cold-blooded to a warm-blooded mode of life. Leatherbacks reach their immense size in a much shorter time than it takes other sea turtles to grow. Perhaps their rush to adulthood is driven by a simple need to keep warm.

Mass Extinction

     Cases in which many species become extinct within a geologically short interval of time are called mass extinctions.  There was one such event at the end of the Cretaceous period (around 70 million years ago).  There was another, even larger, mass extinction at the end of the Permian period (around 250 million years ago).  The Permian event has attracted much less attention than other mass extinctions because mostly unfamiliar species perished at that time.

     The fossil record shows at least five mass extinctions in which many families of marine organisms died out. The rates of extinction happening today are as great as the rates during these mass extinctions. Many scientists have therefore concluded that a sixth great mass extinction is currently in progress.

     What could cause such high rates of extinction? There are several hypotheses, including warming or cooling of Earth, changes in seasonal fluctuations or ocean currents, and changing positions of the continents. Biological hypotheses include ecological changes brought about by the evolution of cooperation between insects and flowering plants or of bottom-feeding predators in the oceans. Some of the proposed mechanisms required a very brief period during which all extinctions suddenly took place; other mechanisms would be more likely to have taken place more gradually, over an extended period, or at different times on different continents. Some hypotheses fail to account for simultaneous extinctions on land and in the seas. Each mass extinction may have had a different cause. Evidence points to hunting by humans and habitat destruction as the likely causes for the current mass extinction.

     American paleontologists David Raup and John Sepkoski, who have studied extinction rates in a number of fossil groups, suggest that episodes of increased extinction have recurred periodically, approximately every 26 million years since the mid-Cretaceous period. The late Cretaceous extinction of the dinosaurs and ammonoids was just one of the more drastic in a whole series of such recurrent extinction episodes. The possibility that mass extinctions may recur periodically has given rise to such hypotheses as that of a companion star with a long-period orbit deflecting other bodies from their normal orbits, making some of them fall to Earth as meteors and causing widespread devastation upon impact.

     Of the various hypotheses attempting to account for the late Cretaceous extinctions, the one that has attracted the most attention in recent years is the asteroid-impact hypothesis first suggested by Luis and Walter Alvarez. According to this hypothesis, Earth collided with an asteroid with an estimated diameter of 10 kilometers, or with several asteroids, the combined mass of which was comparable. The force of collision spewed large amounts of debris into the atmosphere, darkening the skies for several years before the finer particles settled. The reduced level of photosynthesis led to a massive decline in plant life of all kinds, and this caused massive starvation first to herbivores and subsequently to carnivores. The mass extinction would have occurred very suddenly under this hypothesis.

     One interesting test of the Alvarez hypothesis is based on the presence of the rare element iridium (Ir). Earth’s crust contains very little of this element, but most asteroids contain a lot more. Debris thrown into the atmosphere by an asteroid collision would presumably contain large amounts of iridium, and atmospheric currents would carry this material all over the globe. A search of sedimentary deposits that span the boundary between the Cretaceous and Tertiary periods shows that there is a dramatic increase in the abundance of iridium briefly and precisely at this boundary. This iridium anomaly offers strong support for the Alvarez hypothesis even though no asteroid itself has ever been recovered.

     An asteroid of this size would be expected to leave an immense crater, even if the asteroid itself was disintegrated by the impact. The intense heat of the impact would produce heat-shocked quartz in many types of rock. Also, large blocks thrown aside by the impact would form secondary craters surrounding the main crater. To date, several such secondary craters have been found along Mexico’s Yucatan Peninsula, and heat-shocked quartz has been found both in Mexico and in Haiti. A location called Chicxulub, along the Yucatan coast, has been suggested as the primary impact site.

TPO16

Trade and the Ancient Middle East

Trade was the mainstay of the urban economy in the Middle East, as caravans negotiated the surrounding desert, restricted only by access to water and by mountain ranges. This has been so since ancient times, partly due to the geology of the area, which is mostly limestone and sandstone, with few deposits of metallic ore and other useful materials. Ancient demands for obsidian (a black volcanic rock useful for making mirrors and tools) led to trade with Armenia to the north, while jade for cutting tools was brought from Turkistan, and the precious stone lapis lazuli was imported from Afghanistan. One can trace such expeditions back to ancient Sumeria, the earliest known Middle Eastern civilization. Records show merchant caravans and trading posts set up by the Sumerians in the surrounding mountains and deserts of Persia and Arabia, where they traded grain for raw materials, such as timber and stones, as well as for metals and gems.

Reliance on trade had several important consequences.  Production was generally in the hands of skilled individual artisans doing piecework under the tutelage of a master who was also the shop owner.  In these shops differences of rank were blurred as artisans and masters labored side by side in the same modest establishment, were usually members of the same guild and religious sect, lived in the same neighborhoods, and often had assumed (or real) kinship relationships.  The worker was bound to the master by a mutual contract that either one could repudiate, and the relationship was conceptualized as one of partnership. 

This mode of craft production favored the growth of self-governing and ideologically egalitarian craft guilds everywhere in the Middle Eastern city. These were essentially professional associations that provided for the mutual aid and protection of their members, and allowed for the maintenance of professional standards. The growth of independent guilds was furthered by the fact that surplus was not a result of domestic craft production but resulted primarily from international trading; the government left working people to govern themselves, much as shepherds of tribal confederacies were left alone by their leaders. In the multiplicity of small-scale local egalitarian or quasi-egalitarian organizations for fellowship, worship, and production that flourished in this laissez-faire environment, individuals could interact with one another within a community of harmony and ideological equality, following their own popularly elected leaders and governing themselves by shared consensus while minimizing distinctions of wealth and power.

The mercantile economy was also characterized by a peculiar moral stance that is typical of people who live by trade – an attitude that is individualistic, calculating, risk taking, and adaptive to circumstances. As among tribespeople, personal relationships and a careful weighing of character have always been crucial in a mercantile economy with little regulation, where one’s word is one’s bond and where informal ties of trust cement together an international trade network. Nor have merchants and artisans ever had much tolerance for aristocratic professions of moral superiority, favoring instead an egalitarian ethic of the open market, where steady hard work, the loyalty of one’s fellows, and entrepreneurial skill make all the difference. And, like the pastoralists, Middle Eastern merchants and artisans unhappy with their environment could simply pack up and leave for greener pastures – an act of self-assertion wholly impossible in most other civilizations throughout history.

Dependence on long-distance trade also meant that the great empires of the Middle East were built both literally and figuratively on shifting sand. The central state, though often very rich and very populous, was intrinsically fragile, since the development of new international trade routes could undermine the monetary base and erode state power, as occurred when European seafarers circumvented Middle Eastern merchants after Vasco da Gama’s voyage around Africa in the late fifteenth century opened up a southern route. The ecology of the region also permitted armed predators to prowl the surrounding barrens, which were almost impossible for a state to control. Peripheral peoples therefore had a great advantage in their dealings with the center, making government authority insecure and anxious.

Development of the Periodic Table

The periodic table is a chart that reflects the periodic recurrence of chemical and physical properties of the elements when the elements are arranged in order of increasing atomic number (the number of protons in the nucleus). It is a monumental scientific achievement, and its development illustrates the essential interplay between observation, prediction, and testing required for scientific progress. In the 1800s scientists were searching for new elements. By the late 1860s more than 60 chemical elements had been identified, and much was known about their descriptive chemistry. Various proposals were put forth to arrange the elements into groups based on similarities in chemical and physical properties. The next step was to recognize a connection between group properties (physical or chemical similarities) and atomic mass (the measured mass of an individual atom of an element). When the elements known at the time were ordered by increasing atomic mass, it was found that successive elements belonged to different chemical groups and that the order of the groups in this sequence was fixed and repeated itself at regular intervals. Thus when the series of elements was written so as to begin a new horizontal row with each alkali metal, elements of the same groups were automatically assembled in vertical columns in a periodic table of the elements. This table was the forerunner of the modern table.

When the German chemist Lothar Meyer and (independently) the Russian Dmitry Mendeleyev first introduced the periodic table in 1869-70, one-third of the naturally occurring chemical elements had not yet been discovered. Yet both chemists were sufficiently farsighted to leave gaps where their analyses of periodic physical and chemical properties indicated that new elements should be located. Mendeleyev was bolder than Meyer and even assumed that if a measured atomic mass put an element in the wrong place in the table, the atomic mass was wrong. In some cases this was true. Indium, for example, had previously been assigned an atomic mass between those of arsenic and selenium. Because there is no space in the periodic table between these two elements, Mendeleyev suggested that the atomic mass of indium be changed to a completely different value, where it would fill an empty space between cadmium and tin. In fact, subsequent work has shown that in a periodic table, elements should not be ordered strictly by atomic mass. For example, tellurium comes before iodine in the periodic table, even though its atomic mass is slightly greater. Such anomalies are due to the relative abundance of the “isotopes” or varieties of each element. All the isotopes of a given element have the same number of protons, but differ in their number of neutrons, and hence in their atomic mass. The isotopes of a given element have the same chemical properties but slightly different physical properties. We now know that atomic number (the number of protons in the nucleus), not atomic mass number (the number of protons and neutrons), determines chemical behavior.
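
The tellurium-iodine reversal is easy to check against modern values (standard figures, not data given in the passage): tellurium has atomic number 52 and an average atomic mass of about 127.6, while iodine has atomic number 53 and a mass of about 126.9. Ordering strictly by mass would swap the two, but ordering by atomic number keeps iodine in the column of the chemically similar halogens, exactly as its properties demand.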

Mendeleyev went further than Meyer in another respect: he predicted the properties of six elements yet to be discovered. For example, a gap just below aluminum suggested a new element would be found with properties analogous to those of aluminum. Mendeleyev designated this element “eka-aluminum” (eka is the Sanskrit word for “next”) and predicted its properties. Just five years later, an element with the proper atomic mass was isolated and named gallium by its discoverer. The close correspondence between the observed properties of gallium and Mendeleyev’s predictions for eka-aluminum lent strong support to the periodic law. Additional support came in 1885 when eka-silicon, which had also been described in advance by Mendeleyev, was discovered and named germanium.

The structure of the periodic table appeared to limit the number of possible elements. It was therefore quite surprising when John William Strutt, Lord Rayleigh, discovered a gaseous element in 1894 that did not fit into the previous classification scheme. A century earlier, Henry Cavendish had noted the existence of a residual gas when oxygen and nitrogen are removed from air, but its importance had not been realized. Together with William Ramsay, Rayleigh isolated the gas (separating it from other substances into its pure state) and named it argon. Ramsay then studied a gas that was present in natural gas deposits and discovered that it was helium, an element whose presence in the Sun had been noted earlier in the spectrum of sunlight but that had not previously been known on Earth. Rayleigh and Ramsay postulated the existence of a new group of elements, and in 1898 other members of the series (neon, krypton, and xenon) were isolated.

Planets in Our Solar System

The Sun is the hub of a huge rotating system consisting of nine planets, their satellites, and numerous small bodies, including asteroids, comets, and meteoroids. An estimated 99.85 percent of the mass of our solar system is contained within the Sun, while the planets collectively make up most of the remaining 0.15 percent.

The planets, in order of their distance from the Sun, are Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune and Pluto. Under the control of the Sun’s gravitational force, each planet maintains an elliptical orbit and all of them travel in the same direction.

The planets in our solar system fall into two groups: the terrestrial (Earth-like) planets (Mercury, Venus, Earth, and Mars) and the Jovian (Jupiter-like) planets (Jupiter, Saturn, Uranus, and Neptune). Pluto is not included in either category, because its great distance from Earth and its small size make this planet’s true nature a mystery.

The most obvious difference between the terrestrial and the Jovian planets is their size. The largest terrestrial planet, Earth, has a diameter only one quarter as great as the diameter of the smallest Jovian planet, Neptune, and its mass is only one seventeenth as great. Hence, the Jovian planets are often called giants. Also, because of their relative locations, the four Jovian planets are known as the outer planets, while the terrestrial planets are known as the inner planets. There appears to be a correlation between the positions of these planets and their sizes.

Other dimensions along which the two groups differ markedly are density and composition. The densities of the terrestrial planets average about 5 times the density of water, whereas the Jovian planets have densities that average only 1.5 times the density of water. One of the outer planets, Saturn, has a density of only 0.7 that of water, which means that Saturn would float in water. Variations in the composition of the planets are largely responsible for the density differences. The substances that make up both groups of planets are divided into three groups – gases, rocks, and ices – based on their melting points. The terrestrial planets are mostly rocks: dense rocky and metallic material, with minor amounts of gases. The Jovian planets, on the other hand, contain a large percentage of the gases hydrogen and helium, with varying amounts of ices: mostly water, ammonia, and methane ices.

The Jovian planets have very thick atmospheres consisting of varying amounts of hydrogen, helium, methane, and ammonia. By comparison, the terrestrial planets have meager atmospheres at best. A planet’s ability to retain an atmosphere depends on its temperature and mass. Simply stated, a gas molecule can “evaporate” from a planet if it reaches a speed known as the escape velocity. For Earth, this velocity is 11 kilometers per second. Any material, including a rocket, must reach this speed before it can leave Earth and go into space. The Jovian planets, because of their greater masses and thus higher surface gravities, have higher escape velocities (21-60 kilometers per second) than the terrestrial planets. Consequently, it is more difficult for gases to “evaporate” from them. Also, because the molecular motion of a gas depends on temperature, at the low temperatures of the Jovian planets even the lightest gases are unlikely to acquire the speed needed to escape. On the other hand, a comparatively warm body with a small surface gravity, like Earth’s moon, is unable to hold even the heaviest gases and thus lacks an atmosphere. The slightly larger terrestrial planets Earth, Venus, and Mars retain some heavy gases like carbon dioxide, but even their atmospheres make up only an infinitesimally small portion of their total mass.
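
The 11-kilometer-per-second figure quoted for Earth can be recovered from the standard escape-velocity relation (the constants below are textbook values, not data from the passage):

$$v_{\text{esc}} = \sqrt{\frac{2GM}{R}} = \sqrt{\frac{2\,(6.67\times10^{-11})(5.97\times10^{24})}{6.37\times10^{6}}} \approx 1.1\times10^{4}\ \text{m/s} \approx 11\ \text{km/s}.$$

Because the Jovian planets have much larger masses relative to their radii, the same relation yields their higher escape velocities of 21-60 kilometers per second.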

The orderly nature of our solar system leads most astronomers to conclude that the planets formed at essentially the same time and from the same material as the Sun. It is hypothesized that the primordial cloud of dust and gas from which all the planets are thought to have condensed had a composition somewhat similar to that of Jupiter. However, unlike Jupiter, the terrestrial planets today are nearly void of light gases and ices. The explanation may be that the terrestrial planets were once much larger and richer in these materials but eventually lost them because of these bodies’ relative closeness to the Sun, which meant that their temperatures were relatively high.

TPO17

Europe’s Early Sea Trade with Asia

In the fourteenth century, a number of political developments cut Europe’s overland trade routes to southern and eastern Asia, with which Europe had had important and highly profitable commercial ties since the twelfth century. This development, coming as it did when the bottom had fallen out of the European economy, provided an impetus to a long-held desire to secure direct relations with the East by establishing a sea trade. Widely reported, if somewhat distrusted, accounts by figures like the famous traveler from Venice, Marco Polo, of the willingness of people in China to trade with Europeans and of the immensity of the wealth to be gained by such contact made the idea irresistible. Possibilities for trade seemed promising, but no hope existed for maintaining the traditional routes over land. A new way had to be found.

The chief problem was technological: How were the Europeans to reach the East? Europe’s maritime tradition had developed in the context of easily navigable seas – the Mediterranean, the Baltic, and to a lesser extent, the North Sea between England and the Continent – not of vast oceans. New types of ships were needed, new methods of finding one’s way, new techniques for financing so vast a scheme. The sheer scale of the investment it took to begin commercial expansion at sea reflects the immensity of the profits that such East-West trade could create. Spices were the most sought-after commodities. Spices not only dramatically improved the taste of the European diet but also were used to manufacture perfumes and certain medicines. But even high-priced commodities like spices had to be transported in large bulk in order to justify the expense and trouble of sailing around the African continent all the way to India and China. 

The principal seagoing ship used throughout the Middle Ages was the galley, a long, low ship fitted with sails but driven primarily by oars. The largest galleys had as many as 50 oarsmen. Since they had relatively shallow hulls, they were unstable when driven by sail or when on rough water: hence they were unsuitable for the voyage to the East. Even if they hugged the African coastline, they had little chance of surviving a crossing of the Indian Ocean. Shortly after 1400, shipbuilders began developing a new type of vessel properly designed to operate in rough, open water: the caravel. It had a wider and deeper hull than the galley and hence could carry more cargo: increased stability made it possible to add multiple masts and sails. In the largest caravels, two main masts held large square sails that provided the bulk of the thrust driving the ship forward, while a smaller forward mast held a triangular sail, called a lateen sail, which could be moved into a variety of positions to maneuver the ship.

The astrolabe had long been the primary instrument for navigation, having been introduced in the eleventh century. It operated by measuring the height of the Sun and the fixed stars; by calculating the angles created by these points, it determined the degrees of latitude at which one stood. (The problem of determining longitude, though, was not solved until the eighteenth century.) By the early thirteenth century, western Europeans had also developed and put into use the magnetic compass, which helped when clouds obliterated both the Sun and the stars. Also beginning in the thirteenth century, there were new maps refined by precise calculations and the reports of sailors that made it possible to trace one’s path with reasonable accuracy. Certain institutional and practical norms had become established as well. A maritime code known as the Consulate of the Sea, which originated in the western Mediterranean region in the fourteenth century, won acceptance by a majority of seagoers as the normative code for maritime conduct. It defined such matters as the authority of a ship’s officers, protocols of command, pay structures, the rights of sailors, and the rules of engagement when ships met one another on the sea-lanes. Thus by about 1400 the key elements were in place to enable Europe to begin its seaward adventure.
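
The latitude determination sketched here can be illustrated with the classic noon sight (one standard relation; the passage itself does not spell it out). For a Northern Hemisphere observer with the Sun culminating to the south,

$$\varphi = 90^{\circ} - h_{\text{noon}} + \delta,$$

where h_noon is the altitude of the Sun measured at local noon with the astrolabe and δ is the Sun’s declination for the date, taken from prepared tables. The instrument supplied the angle; the tables supplied the correction.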

Animal Signals in the Rain Forest

The daytime quality of light in forests varies with the density of the vegetation, the angle of the Sun, and the amount of cloud in the sky. Both animals and plants have different appearances in these various lighting conditions. A color or pattern that is relatively indistinct in one kind of light may be quite conspicuous in another.

In the varied and constantly changing light environment of the forest, an animal must be able to send visual signals to members of its own species and at the same time avoid being detected by predators. An animal can hide from predators by choosing the light environment in which its pattern is least visible. This may require moving to different parts of the forest at different times of the day or under different weather conditions, or it may be achieved by changing color according to the changing light conditions. Many species of amphibians (frogs and toads) and reptiles (lizards and snakes) are able to change their color patterns to camouflage themselves. Some also signal by changing color. The chameleon lizard has the most striking ability to do this. Some chameleon species can change from a rather dull appearance to a full riot of carnival colors in seconds. By this means, they signal their level of aggression or readiness to mate.

Other species take into account the changing conditions of light by performing their visual displays only when the light is favorable. A male bird of paradise may put himself in the limelight by displaying his spectacular plumage in the best stage setting to attract a female. Certain butterflies move into spots of sunlight that have penetrated to the forest floor and display by opening and closing their beautifully patterned wings in the bright spotlights. They also compete with each other for the best spot of sunlight.

Very little light filters through the canopy of leaves and branches in a rain forest to reach ground level – or close to the ground – and at those levels the yellow-to-green wavelengths predominate. A signal might be most easily seen if it is maximally bright. In the green-to-yellow lighting conditions of the lowest levels of the forest, yellow and green would be the brightest colors, but when an animal is signaling, these colors would not be very visible if the animal was sitting in an area with a yellowish or greenish background. The best signal depends not only on its brightness but also on how well it contrasts with the background against which it must be seen. In this part of the rain forest, therefore, red and orange are the best colors for signaling, and they are the colors used in signals by the ground-walking Australian brush turkey. This species, which lives in the rain forests and scrublands of the east coast of Australia, has a brown-to-black plumage with bare, bright-red skin on the head and neck and a neck collar of orange-yellow loosely hanging skin. During courtship and aggressive displays, the turkey enlarges its colored neck collar by inflating sacs in the neck region and then flings about a pendulous part of the colored signaling apparatus as it utters calls designed to attract or repel. This impressive display is clearly visible in the light spectrum illuminating the forest floor.

Less colorful birds and animals that inhabit the rain forest tend to rely on forms of signaling other than the visual, particularly over long distances. The piercing cries of the rhinoceros hornbill characterize the Southeast Asian rain forest, as do the unmistakable calls of the gibbons. In densely wooded environments, sound is the best means of communication over distance because in comparison with light, it travels with little impediment from trees and other vegetation. In forests, visual signals can be seen only at short distances, where they are not obstructed by trees. The male riflebird exploits both of these modes of signaling simultaneously in his courtship display. The sounds made as each wing is opened carry extremely well over distance and advertise his presence widely. The ritualized visual display communicates in close quarters when a female has approached.

Symbiotic Relationships

A symbiotic relationship is an interaction between two or more species in which one species lives in or on another species. There are three main types of symbiotic relationships: parasitism, commensalism, and mutualism. The first and the third can be key factors in the structure of a biological community, that is, all the populations of organisms living together and potentially interacting in a particular area.

Parasitism is a kind of predator-prey relationship in which one organism, the parasite, derives its food at the expense of its symbiotic associate, the host. Parasites are usually smaller than their hosts. An example of a parasite is a tapeworm that lives inside the intestines of a larger animal and absorbs nutrients from its host. Natural selection favors the parasites that are best able to find and feed on hosts. At the same time, defensive abilities of hosts are also selected for. As an example, plants make chemicals toxic to fungal and bacterial parasites, along with ones toxic to predatory animals (sometimes they are the same chemicals). In vertebrates, the immune system provides a multiple defense against internal parasites.

At times, it is actually possible to watch the effects of natural selection in host-parasite relationships. For example, Australia during the 1940s was overrun by hundreds of millions of European rabbits.  The rabbits destroyed huge expanses of Australia and threatened the sheep and cattle industries.  In 1950, myxoma virus, a parasite that affects rabbits, was deliberately introduced into Australia to control the rabbit population.  Spread rapidly by mosquitoes, the virus devastated the rabbit population.  The virus was less deadly to the offspring of surviving rabbits, however, and it caused less and less harm over the years. Apparently, genotypes (the genetic make-up of an organism) in the rabbit population were selected that were better able to resist the parasite. Meanwhile, the deadliest strains of the virus perished with their hosts as natural selection favored strains that could infect hosts but not kill them. Thus, natural selection stabilized this host-parasite relationship.

In contrast to parasitism, in commensalism, one partner benefits without significantly affecting the other. Few cases of absolute commensalism probably exist, because it is unlikely that one of the partners will be completely unaffected. Commensal associations sometimes involve one species’ obtaining food that is inadvertently exposed by another. For instance, several kinds of birds feed on insects flushed out of the grass by grazing cattle. It is difficult to imagine how this could affect the cattle, but the relationship may help or hinder them in some way not yet recognized.

The third type of symbiosis, mutualism, benefits both partners in the relationship. Legume plants and their nitrogen-fixing bacteria, and the interactions between flowering plants and their pollinators, are examples of mutualistic association. In the first case, the plants provide the bacteria with carbohydrates and other organic compounds, and the bacteria have enzymes that act as catalysts that eventually add nitrogen to the soil, enriching it. In the second case, pollinators (insects, birds) obtain food from the flowering plant, and the plant has its pollen distributed and seeds dispersed much more efficiently than they would be if they were carried by the wind only. Another example of mutualism would be the bull’s horn acacia tree, which grows in Central and South America. The tree provides a place to live for ants of the genus Pseudomyrmex. The ants live in large, hollow thorns and eat sugar secreted by the tree. The ants also eat yellow structures at the tip of leaflets: these are protein rich and seem to have no function for the tree except to attract ants. The ants benefit the host tree by attacking virtually anything that touches it. They sting other insects and large herbivores (animals that eat only plants) and even clip surrounding vegetation that grows near the tree. When the ants are removed, the trees usually die, probably because herbivores damage them so much that they are unable to compete with surrounding vegetation for light and growing space.

The complex interplay of species in symbiotic relationships highlights an important point about communities. Their structure depends on a web of diverse connections among organisms.

TPO18

Industrialization in the Netherlands and Scandinavia

     While some European countries, such as England and Germany, began to industrialize in the eighteenth century, the Netherlands and the Scandinavian countries of Denmark, Norway, and Sweden developed later.  All four of these countries lagged considerably behind in the early nineteenth century.  However, they industrialized rapidly in the second half of the century, especially in the last two or three decades.  In view of their later start and their lack of coal – undoubtedly the main reason they were not among the early industrializers – it is important to understand the sources of their success. 

     All had small populations. At the beginning of the nineteenth century, Denmark and Norway had fewer than 1 million people, while Sweden and the Netherlands had fewer than 2.5 million inhabitants. All exhibited moderate growth rates in the course of the century (Denmark the highest and Sweden the lowest), but all more than doubled in population by 1900. Density varied greatly. The Netherlands had one of the highest population densities in Europe, whereas Norway and Sweden had the lowest. Denmark was in between but closer to the Netherlands.

     Considering human capital as a characteristic of the population, however, all four countries were advantaged by the large percentages of their populations who could read and write. In both 1850 and 1914, the Scandinavian countries had the highest literacy rates in Europe, or in the world, and the Netherlands was well above the European average. This fact was of enormous value in helping the national economies find their niches in the evolving currents of the international economy.

Location was an important factor for all four countries. All had immediate access to the sea, and this had important implications for a significant international resource, fish, as well as for cheap transport, merchant marines, and the shipbuilding industry. Each took advantage of these opportunities in its own way. The people of the Netherlands, with a long tradition of fisheries and mercantile shipping, had difficulty in developing good harbors suitable for steamships; eventually they did so at Rotterdam and Amsterdam, with exceptional results for transit trade with Germany and central Europe and for the processing of overseas foodstuffs and raw materials (sugar, tobacco, chocolate, grain, and eventually oil). Denmark also had an admirable commercial history, particularly with respect to traffic through the Sound (the strait separating Denmark and Sweden). In 1857, in return for a payment of 63 million kronor from other commercial nations, Denmark abolished the Sound toll dues, the fees it had collected since 1497 for the use of the Sound. This, along with other policy shifts toward free trade, resulted in a significant increase in traffic through the Sound and in the port of Copenhagen.

     The political institutions of the four countries posed no significant barriers to industrialization or economic growth. The nineteenth century passed relatively peacefully for these countries, with progressive democratization taking place in all of them. They were reasonably well governed, without notable corruption or grandiose state projects, although in all of them the government gave some aid to railways, and in Sweden the state built the main lines. As small countries dependent on foreign markets, they followed a liberal trade policy in the main, though a protectionist movement developed in Sweden. In Denmark and Sweden agricultural reforms took place gradually from the late eighteenth century through the first half of the nineteenth, resulting in a new class of peasant landowners with  a definite market orientation.

     The key factor in the success of these countries (along with high literacy, which contributed to it) was their ability to adapt to the international division of labor determined by the early industrializers and to stake out areas of specialization in international markets for which they were especially well suited. This meant a great dependence on international commerce, which had notorious fluctuations, but it also meant high returns to those factors of production that were fortunate enough to be well placed in times of prosperity. In Sweden exports accounted for 18 percent of the national income in 1870, and in 1913, 22 percent of a much larger national income. In the early twentieth century, Denmark exported 63 percent of its agricultural production: butter, pork products, and eggs. It exported 80 percent of its butter, almost all to Great Britain, where it accounted for 40 percent of British butter imports.

The Mystery of Yawning

     According to conventional theory, yawning takes place when people are bored or sleepy and serves the function of increasing alertness by reversing, through deeper breathing, the drop in blood oxygen levels that is caused by the shallow breathing that accompanies lack of sleep or boredom. Unfortunately, the few scientific investigations of yawning have failed to find any connection between how often someone yawns and how much sleep they have had or how tired they are. About the closest any research has come to supporting the tiredness theory is to confirm that adults yawn more often on weekdays than at weekends, and that school children yawn more frequently in their first year at primary school than they do in kindergarten.

     Another flaw of the tiredness theory is that yawning does not raise alertness or physiological activity, as the theory would predict. When researchers measured the heart rate, muscle tension, and skin conductance of people before, during, and after yawning, they did detect some changes in skin conductance following yawning, indicating a slight increase in physiological activity. However, similar changes occurred when the subjects were asked simply to open their mouths or to breathe deeply. Yawning did nothing special to their state of physiological activity. Experiments have also cast serious doubt on the belief that yawning is triggered by a drop in blood oxygen or a rise in blood carbon dioxide. Volunteers were told to think about yawning while they breathed either normal air, pure oxygen, or an air mixture with an above-normal level of carbon dioxide. If the theory was correct, breathing air with extra carbon dioxide should have triggered yawning, while breathing pure oxygen should have suppressed yawning. In fact, neither condition made any difference to the frequency of yawning, which remained constant at about 24 yawns per hour. Another experiment demonstrated that physical exercise, which was sufficiently vigorous to double the rate of breathing, had no effect on the frequency of yawning. Again, the implication is that yawning has little or nothing to do with oxygen.

     A completely different theory holds that yawning assists in the physical development of the lungs early in life, but has no remaining biological function in adults. It has been suggested that yawning and hiccupping might serve to clear out the fetus’s airways. The lungs of a fetus secrete a liquid that mixes with its mother’s amniotic fluid. Babies with congenital blockages that prevent this fluid from escaping from their lungs are sometimes born with deformed lungs. It might be that yawning helps to clear out the lungs by periodically lowering the pressure in them. According to this theory, yawning in adults is just a developmental fossil with no biological function. But, while accepting that not everything in life can be explained by Darwinian evolution, there are sound reasons for being skeptical of theories like this one, which avoid the issue of what yawning does for adults. Yawning is distracting, consumes energy and takes time. It is almost certainly doing something significant in adults as well as in fetuses. What could it be?

     The empirical evidence, such as it is, suggests an altogether different function for yawning – namely, that yawning prepares us for a change in activity level. Support for this theory came from a study of yawning behavior in everyday life. Volunteers wore wrist-mounted devices that automatically recorded their physical activity for up to two weeks; the volunteers also recorded their yawns by pressing a button on the device each time they yawned. The data showed that yawning tended to occur about 15 minutes before a period of increased behavioral activity. Yawning bore no relationship to sleep patterns, however. This accords with anecdotal evidence that people often yawn in situations where they are neither tired nor bored, but are preparing for impending mental and physical activity. Such yawning is often referred to as “incongruous” because it seems out of place, at least on the tiredness view: soldiers yawning before combat, musicians yawning before performing, and athletes yawning before competing. Their yawning seems to have nothing to do with sleepiness or boredom – quite the reverse – but it does precede a change in activity level.

Lightning

     Lightning is a brilliant flash of light produced by an electrical discharge from a storm cloud. The electrical discharge takes place when the attractive tension between a region of negatively charged particles and a region of positively charged particles becomes so great that the charged particles suddenly rush together. The coming together of the oppositely charged particles neutralizes the electrical tension and releases a tremendous amount of energy, which we see as lightning. The separation of positively and negatively charged particles takes place during the development of the storm cloud.

     The separation of charged particles that forms in a storm cloud has a sandwich-like structure. Concentrations of positively charged particles develop at the top and bottom of the cloud, but the middle region becomes negatively charged.

Recent measurements made in the field together with laboratory simulations offer a promising explanation of how this structure of charged particles forms. What happens is that small (millimeter-to-centimeter-size) pellets of ice form in the cold upper regions of the cloud. When these ice pellets fall, some of them strike much smaller ice crystals in the center of the cloud. The temperature at the center of the cloud is about -15℃ or lower. At such temperatures, the collision between the ice pellets and the ice crystals causes electrical charges to shift so that the ice pellets acquire a negative charge and the ice crystals become positively charged. The updraft wind currents carry the light, positively charged ice crystals up to the top of the cloud. The heavier, negatively charged ice pellets are left to concentrate in the center. This process explains why the top of the cloud becomes positively charged, while the center becomes negatively charged. The negatively charged region is large: several hundred meters thick and several kilometers in diameter. Below this large, cold, negatively charged region, the cloud is warmer than -15℃, and at these temperatures, collisions between ice crystals and falling ice pellets produce positively charged ice pellets that then populate a small region at the base of the cloud.

     Most lightning takes place within a cloud when the charge separation within the cloud collapses. However, as the storm cloud develops, the ground beneath the cloud becomes positively charged and lightning can take place in the form of an electrical discharge between the negative charge of the cloud and the positively charged ground. Lightning that strikes the ground is the most likely to be destructive, so even though it represents only 20 percent of all lightning, it has received a lot of scientific attention. Using high-speed photography, scientists have determined that there are two steps to the occurrence of lightning from a cloud to the ground. First, a channel, or path, is formed that connects the cloud and the ground. Then a strong current of electrons follows that path from the cloud to the ground, and it is that current that illuminates the channel as the lightning we see.

     The formation of the channel is initiated when electrons surge from the cloud base toward the ground. When a stream of these negatively charged electrons comes within 100 meters of the ground, it is met by a stream of positively charged particles that comes up from the ground. When the negatively and positively charged streams meet, a complete channel connecting the cloud and the ground is formed. The channel is only a few centimeters in diameter, but that is wide enough for electrons to follow the channel to the ground in the visible form of a flash of lightning. The stream of positive particles that meets the surge of electrons from the cloud often arises from a tall, pointed structure such as a metal flagpole or a tower. That is why the subsequent lightning that follows the completed channel often strikes a tall structure.

     Once a channel has been formed, it is usually used by several lightning discharges, each of them consisting of a stream of electrons from the cloud meeting a stream of positive particles along the established path. Sometimes, however, a stream of electrons following an established channel is met by a positive stream making a new path up from the ground. The result is forked lightning that strikes the ground in two places.

TPO19

The Roman Army’s Impact on Britain

In the wake of the Roman Empire’s conquest of Britain in the first century A.D., a large number of troops stayed in the new province, and these troops had a considerable impact on Britain with their camps, fortifications, and participation in the local economy. Assessing the impact of the army on the civilian population starts from the realization that the soldiers were always unevenly distributed across the country. Areas rapidly incorporated into the empire were not long affected by the military. Where the army remained stationed, its presence was much more influential. The imposition of a military base involved the requisition of native lands for both the fort and the territory needed to feed and exercise the soldiers’ animals. The imposition of military rule also robbed local leaders of opportunities to participate in local government, so social development was stunted and the seeds of disaffection sown. This then meant that the military had to remain to suppress rebellion and organize government.

Economic exchange was clearly very important as the Roman army brought with it very substantial spending power. Locally a fort had two kinds of impact. Its large population needed food and other supplies.  Some of these were certainly brought from long distances, but demands were inevitably placed on the local area. Although goods could be requisitioned, they were usually paid for, and this probably stimulated changes in the local economy.  When not campaigning, soldiers needed to be occupied; otherwise they represented a potentially dangerous source of friction and disloyalty.  Hence a writing tablet dated 25 April tells of 343 men at one fort engaged on tasks like shoemaking, building a bathhouse, operating kilns, digging clay, and working lead. Such activities had a major effect on the local area, in particular with the construction of infrastructure such as roads, which improved access to remote areas.

Each soldier received his pay, but in regions without a developed economy there was initially little on which it could be spent. The pool of excess cash rapidly stimulated a thriving economy outside fort gates. Some of the demand for the services and goods was no doubt fulfilled by people drawn from far afield, but some local people certainly became entwined in this new economy. There was informal marriage with soldiers, who until AD 197 were not legally entitled to wed, and whole new communities grew up near the forts. These settlements acted like small towns, becoming centers for the artisan and trading populations.

The army also provided a means of personal advancement for auxiliary soldiers recruited from the native peoples, as a man obtained hereditary Roman citizenship on retirement after service in an auxiliary regiment. Such units recruited on an ad hoc (as needed) basis from the area in which they were stationed, and there was evidently large-scale recruitment within Britain. The total numbers were at least 12,500 men up to the reign of the emperor Hadrian (A.D. 117 – 138), with a peak around A.D. 80. Although a small proportion of the total population, this perhaps had a massive local impact when a large proportion of the young men were removed from an area. Newly raised regiments were normally transferred to another province from whence it was unlikely that individual recruits would ever return. Most units raised in Britain went elsewhere on the European continent, although one is recorded in Morocco. The reverse process brought young men to Britain, where many continued to live after their 20 to 25 years of service, and this added to the cosmopolitan Roman character of the frontier population. By the later Roman period, frontier garrisons (groups of soldiers) were only rarely transferred, service in units became effectively hereditary, and forts were no longer populated or maintained at full strength.

This process of settling in as a community over several generations, combined with local recruitment, presumably accounts for the apparent stability of the British northern frontier in the later Roman period. It also explains why some of the forts continued in occupation long after Rome ceased to have any formal authority in Britain, at the beginning of the fifth century A.D. The circumstances that had allowed natives to become Romanized also led the self-sustained military community of the frontier area to become effectively British.

Succession, Climax, and Ecosystems

In the late nineteenth century, ecology began to grow into an independent science from its roots in natural history and plant geography. The emphasis of this new “community ecology” was on the composition and structure of communities consisting of different species. In the early twentieth century, the American ecologist Frederic Clements pointed out that a succession of plant communities would develop after a disturbance such as a volcanic eruption, heavy flood, or forest fire. An abandoned field, for instance, will be invaded successively by herbaceous plants (plants with little or no woody tissue), shrubs, and trees, eventually becoming a forest. Light-loving species are always among the first invaders, while shade-tolerant species appear later in the succession.

Clements and other early ecologists saw almost lawlike regularity in the order of succession, but that has not been substantiated. A general trend can be recognized, but the details are usually unpredictable. Succession is influenced by many factors: the nature of the soil, exposure to sun and wind, regularity of precipitation, chance colonizations, and many other random processes.

The final stage of a succession, called the climax by Clements and early ecologists, is likewise not predictable or of uniform composition. There is usually a good deal of turnover in species composition, even in a mature community. The nature of the climax is influenced by the same factors that influenced succession. Nevertheless, mature natural environments are usually in equilibrium. They change relatively little through time unless the environment itself changes.

For Clements, the climax was a “superorganism,” an organic entity. Even some authors who accepted the climax concept rejected Clements’ characterization of it as a superorganism, and it is indeed a misleading metaphor. An ant colony may be legitimately called a superorganism because its communication system is so highly organized that the colony always works as a whole and appropriately according to the circumstances. But there is no evidence for such an interacting communicative network in a climax plant formation. Many authors prefer the term “association” to the term “community” in order to stress the looseness of the interaction.

Even less fortunate was the extension of this type of thinking to include animals as well as plants. This resulted in the “biome,” a combination of coexisting flora and fauna. Though it is true that many animals are strictly associated with certain plants, it is misleading to speak of a “spruce-moose biome,” for example, because there is no internal cohesion to their association as in an organism. The spruce community is not substantially affected by either the presence or absence of moose. Indeed, there are vast areas of spruce forest without moose. The opposition to the Clementsian concept of plant ecology was initiated by Herbert Gleason, soon joined by various other ecologists. Their major point was that the distribution of a given species was controlled by the habitat requirements of that species and that therefore the vegetation types were a simple consequence of the ecologies of individual plant species.

With “climax,” “biome,” “superorganism,” and various other technical terms for the association of animals and plants at a given locality being criticized, the term “ecosystem” was more and more widely adopted for the whole system of associated organisms together with the physical factors of their environment. Eventually, the energy-transforming role of such a system was emphasized. Ecosystems thus involve the circulation, transformation, and accumulation of energy and matter through the medium of living things and their activities. The ecologist is concerned primarily with the quantities of matter and energy that pass through a given ecosystem, and with the rates at which they do so.

Although the ecosystem concept was very popular in the 1950s and 1960s, it is no longer the dominant paradigm.  Gleason’s arguments against climax and biome are largely valid against ecosystems as well.  Furthermore, the number of interactions is so great that they are difficult to analyze, even with the help of large computers. Finally, younger ecologists have found ecological problems involving behavior and life-history adaptations more attractive than measuring physical constants.  Nevertheless, one still speaks of the ecosystem when referring to a local association of animals and plants, usually without paying much attention to the energy aspects. 

Discovering the Ice Ages

In the middle of the nineteenth century, Louis Agassiz, one of the first scientists to study glaciers, immigrated to the United States from Switzerland and became a professor at Harvard University, where he continued his studies in geology and other sciences. For his research, Agassiz visited many places in the northern parts of Europe and North America, from the mountains of Scandinavia and New England to the rolling hills of the American Midwest.  In all these diverse regions, Agassiz saw signs of glacial erosion and sedimentation.  In flat plains country, he saw moraines (accumulations of earth and loose rock that form at the edges of glaciers) that reminded him of the terminal moraines found at the end of valley glaciers in the Alps.  The heterogeneous material of the drift (sand, clay, and rocks deposited there) convinced him of its glacial origin. 

The areas covered by this material were so vast that the ice that deposited it must have been a continental glacier larger than Greenland or Antarctica. Eventually, Agassiz and others convinced geologists and the general public that a great continental glaciation had extended the polar ice caps far into regions that now enjoy temperate climates. For the first time, people began to talk about ice ages. It was also apparent that the glaciation occurred in the relatively recent past because the drift was soft, like freshly deposited sediment. We now know the age of the glaciation accurately from radiometric dating of the carbon-14 in logs buried in the drift. The drift of the last glaciation was deposited during one of the most recent epochs of geologic time, the Pleistocene, which lasted from 1.8 million to 10,000 years ago. Along the east coast of the United States, the southernmost advance of this ice is recorded by the enormous sand and drift deposits of the terminal moraines that form Long Island and Cape Cod.

It soon became clear that there were multiple glacial ages during the Pleistocene, with warmer interglacial intervals between them. As geologists mapped glacial deposits in the late nineteenth century, they became aware that there were several layers of drift, the lower ones corresponding to earlier ice ages. Between the older layers of glacial material were well-developed soils containing fossils of warm-climate plants. These soils were evidence that the glaciers retreated as the climate warmed. By the early part of the twentieth century, scientists believed that four distinct glaciations had affected North America and Europe during the Pleistocene epoch.

This idea was modified in the late twentieth century, when geologists and oceanographers examining oceanic sediment found fossil evidence of warming and cooling of the oceans. Ocean sediments presented a much more complete geologic record of the Pleistocene than continental glacial deposits did. The fossils buried in Pleistocene and earlier ocean sediments were of foraminifera – small, single-celled marine organisms that secrete shells of calcium carbonate, or calcite. These shells differ in their proportion of ordinary oxygen (oxygen-16) and the heavy oxygen isotope (oxygen-18). The ratio of oxygen-16 to oxygen-18 found in the calcite of a foraminifer’s shell depends on the temperature of the water in which the organism lived. Different ratios in the shells preserved in various layers of sediment reveal the temperature changes in the oceans during the Pleistocene epoch.

Isotopic analysis of shells allowed geologists to measure another glacial effect. They could trace the growth and shrinkage of continental glaciers, even in parts of the ocean where there may have been no great change in temperature – around the equator, for example. The oxygen isotope ratio of the ocean changes as a great deal of water is withdrawn from it by evaporation and is precipitated as snow to form glacial ice. During glaciations, the lighter oxygen-16 has a greater tendency to evaporate from the ocean surface than the heavier oxygen-18 does. Thus, more of the heavy isotope is left behind in the ocean and absorbed by marine organisms. From this analysis of marine sediments, geologists have learned that there were many shorter, more regular cycles of glaciation and deglaciation than geologists had recognized from the glacial drift of the continents alone.

TPO20

Westward Migration

    The story of the westward movement of population in the United States is, in the main, the story of the expansion of American agriculture – of the development of new areas for the raising of livestock and the cultivation of wheat, corn, tobacco, and cotton. After 1815 improved transportation enabled more and more western farmers to escape a self-sufficient way of life and enter a national market economy. During periods when commodity prices were high, the rate of westward migration increased spectacularly. “Old America seemed to be breaking up and moving westward,” observed an English visitor in 1817, during the first great wave of migration. Emigration to the West reached a peak in the 1830’s. Whereas in 1810 only a seventh of the American people lived west of the Appalachian Mountains, by 1840 more than a third lived there.

    Why were these hundreds of thousands of settlers – most of them farmers, some of them artisans – drawn away from the cleared fields and established cities and villages of the East? Certain characteristics of American society help to explain this remarkable migration. The European ancestors of some Americans had for centuries lived rooted to the same village or piece of land until some religious, political, or economic crisis uprooted them and drove them across the Atlantic. Many of those who experienced this sharp break thereafter lacked the ties that had bound them and their ancestors to a single place. Moreover, European society was relatively stratified; occupation and social status were inherited. In American society, however, the class structure was less rigid; some people changed occupations easily and believed it was their duty to improve their social and economic position. As a result, many Americans were an inveterately restless, rootless, and ambitious people. Therefore, these social traits helped to produce the nomadic and daring settlers who kept pushing westward beyond the fringes of settlement. In addition, there were other immigrants who migrated west in search of new homes, material success, and better lives.

    The West had plenty of attractions: the alluvial river bottoms, the fecund soils of the rolling forest lands, the black loams of the prairies were tempting to New England farmers working their rocky, sterile land and to southeastern farmers plagued with soil depletion and erosion. In 1820 under a new land law, a farm could be bought for $100. The continued proliferation of banks made it easier for those without cash to negotiate loans in paper money. Western farmers borrowed with the confident expectation that the expanding economy would keep farm prices high, thus making it easy to repay loans when they fell due.

    Transportation was becoming less of a problem for those who wished to move west and for those who had farm surplus to send to market.  Prior to 1815, western farmers who did not live on navigable waterways were connected to them only by dirt roads and mountain trails.  Livestock could be driven across the mountains, but the cost of transporting bulky grains in this fashion was several times greater than their value in eastern markets.  The first step toward an improvement of western transportation was the construction of turnpikes.  These roads made possible a reduction in transportation costs and thus stimulated the commercialization of agriculture along their routes.

    Two other developments presaged the end of the era of turnpikes and started a transportation revolution that resulted in increased regional specialization and the growth of a national market economy. First came the steamboat; although flatboats and keelboats continued to be important until the 1850’s, steamboats eventually superseded all other craft in the carrying of passengers and freight. Steamboats were not only faster but also transported upriver freight for about one tenth of what it had previously cost on hand-propelled keelboats. Next came the Erie Canal, an enormous project in its day, spanning about 350 miles. After the canal went into operation, the cost per mile of transporting a ton of freight from Buffalo to New York City declined from nearly 20 cents to less than 1 cent. Eventually, the western states diverted much of their produce from the rivers to the Erie Canal, a shorter route to eastern markets.

Early Settlement in Southwest Asia

    The universal global warming at the end of the Ice Age had dramatic effects on temperate regions of Asia, Europe, and North America. Ice sheets retreated and sea levels rose. The climatic changes in southwestern Asia were more subtle, in that they involved shifts in mountain snow lines, rainfall patterns, and vegetation cover. However, these same cycles of change had momentous impacts on the sparse human populations of the region. At the end of the Ice Age, no more than a few thousand foragers lived along the eastern Mediterranean coast and in the Jordan and Euphrates valleys. Within 2,000 years, the human population of the region numbered in the tens of thousands, all as a result of village life and farming. Thanks to new environmental and archaeological discoveries, we now know something about this remarkable change in local life.

    Pollen samples from freshwater lakes in Syria and elsewhere tell us that forest cover expanded rapidly at the end of the Ice Age, for the southwestern Asian climate was still cooler and considerably wetter than today. Many areas were richer in animal and plant species than they are now, making them highly favorable for human occupation. About 9000 B.C., most human settlements lay in the area along the Mediterranean coast and in the Zagros Mountains of Iran and their foothills. Some local areas, like the Jordan River valley, the middle Euphrates valley, and some Zagros valleys, were more densely populated than elsewhere. Here more sedentary and more complex societies flourished. These people exploited the landscape intensively, foraging on hill slopes for wild cereal grasses and nuts, while hunting gazelle and other game on grassy lowlands and in river valleys. Their settlements contain exotic objects such as seashells, stone bowls, and artifacts made of obsidian (volcanic glass), all traded from afar. This considerable volume of intercommunity exchange brought a degree of social complexity in its wake.

    Thanks to extremely fine-grained excavation and extensive use of flotation methods (through which seeds are recovered from soil samples), we know a great deal about the foraging practices of the inhabitants of Abu Hureyra in Syria’s Euphrates valley. Abu Hureyra was founded about 9500 B.C. as a small village settlement of cramped pit dwellings (houses dug partially in the soil) with reed roofs supported by wooden uprights. For the next 1,500 years, its inhabitants enjoyed a somewhat warmer and damper climate than today, living in a well-wooded steppe area where wild cereal grasses were abundant. They subsisted off spring migrations of Persian gazelle from the south. With such a favorable location, about 300 to 400 people lived in a sizable, permanent settlement. They were no longer a series of small bands but lived in a large community with more elaborate social organization, probably grouped into clans of people of common descent.

    The flotation samples from the excavations allowed botanists to study shifts in plant-collecting habits as if they were looking through a telescope at a changing landscape. Hundreds of tiny plant remains show how the inhabitants exploited nut harvests in nearby pistachio and oak forests. However, as the climate dried up, the forests retreated from the vicinity of the settlement. The inhabitants turned to wild cereal grasses instead, collecting them by the thousands, while the percentage of nuts in the diet fell. By 8200 B.C., drought conditions were so severe that the people abandoned their long-established settlement, perhaps dispersing into smaller camps.

    Five centuries later, about 7700 B.C., a new village rose on the mound. At first the inhabitants still hunted gazelle intensively. Then, about 7000 B.C., within the space of a few generations, they switched abruptly to herding domesticated goats and sheep and to growing einkorn, pulses, and other cereal grasses. Abu Hureyra grew rapidly until it covered nearly 30 acres. It was a close-knit community of rectangular, one-story mud-brick houses, joined by narrow lanes and courtyards, finally abandoned about 5000 B.C. Many complex factors led to the adoption of the new economies, not only at Abu Hureyra, but at many other locations such as Ain Ghazal, also in Syria, where goat toe bones showing the telltale marks of abrasion caused by foot tethering (binding) testify to early herding of domestic stock.

Fossil Preservation

    When one considers the many ways by which organisms are completely destroyed after death, it is remarkable that fossils are as common as they are. Attack by scavengers and bacteria, chemical decay, and destruction by erosion and other geologic agencies make the odds against preservation very high. However, the chances of escaping complete destruction are vastly improved if the organism happens to have a mineralized skeleton and dies in a place where it can be quickly buried by sediment. Both of these conditions are often found on the ocean floors, where shelled invertebrates (organisms without backbones) flourish and are covered by the continuous rain of sedimentary particles. Although most fossils are found in marine sedimentary rocks, they also are found in terrestrial deposits left by streams and lakes. On occasion, animals and plants have been preserved after becoming immersed in tar or quicksand, trapped in ice or lava flows, or engulfed by rapid falls of volcanic ash.

    The term “fossil” often implies petrifaction, literally a transformation into stone. After the death of an organism, the soft tissue is ordinarily consumed by scavengers and bacteria. The empty shell of a snail or clam may be left behind, and if it is sufficiently durable and resistant to dissolution, it may remain basically unchanged for a long period of time. Indeed, unaltered shells of marine invertebrates are known from deposits over 100 million years old. In many marine creatures, however, the skeleton is composed of a mineral variety of calcium carbonate called aragonite. Although aragonite has the same composition as the more familiar mineral known as calcite, it has a different crystal form, is relatively unstable, and in time changes to the more stable calcite.

    Many other processes may alter the shell of a clam or snail and enhance its chances for preservation. Water containing dissolved silica, calcium carbonate, or iron may circulate through the enclosing sediment and be deposited in cavities such as marrow cavities and canals in bone once occupied by blood vessels and nerves. In such cases, the original composition of the bone or shell remains, but the fossil is made harder and more durable. This addition of a chemically precipitated substance into pore spaces is termed “permineralization.”

    Petrifaction may also involve a simultaneous exchange of the original substance of a dead plant or animal with mineral matter of a different composition. This process is termed “replacement” because solutions have dissolved the original material and replaced it with an equal volume of the new substance. Replacement can be a marvelously precise process, so that details of shell ornamentation, tree rings in wood, and delicate structures in bones are accurately preserved.

    Another type of fossilization, known as carbonization, occurs when soft tissues are preserved as thin films of carbon.  Leaves and tissues of soft-bodied organisms such as jellyfish or worms may accumulate, become buried and compressed, and lose their volatile constituents.  The carbon often remains behind as a blackened silhouette. 

    Although it is certainly true that the possession of hard parts enhances the prospect of preservation, organisms having soft tissues and organs are also occasionally preserved. Insects and even small invertebrates have been found preserved in the hardened resins of conifers and certain other trees. X-ray examination of thin slabs of rock sometimes reveals the ghostly outlines of tentacles, digestive tracts, and visual organs of a variety of marine creatures. Soft parts, including skin, hair, and viscera of ice age mammoths, have been preserved in frozen soil or in the oozing tar of oil seeps.

    The probability that actual remains of soft tissue will be preserved is improved if the organism dies in an environment of rapid deposition and oxygen deprivation. Under such conditions, the destructive effects of bacteria are diminished. The Middle Eocene Messel Shale (from about 48 million years ago) of Germany accumulated in such an environment. The shale was deposited in an oxygen-deficient lake where lethal gases sometimes bubbled up and killed animals. Their remains accumulated on the floor of the lake and were then covered by clay and silt. Among the superbly preserved Messel fossils are insects with iridescent exoskeletons (hard outer coverings), frogs with skin and blood vessels intact, and even entire small mammals with preserved fur and soft tissue.

TPO21

Geothermal Energy

    Earth’s internal heat, fueled by radioactivity, provides the energy for plate tectonics and continental drift, mountain building, and earthquakes. It can also be harnessed to drive electric generators and heat homes. Geothermal energy becomes available in a practical form when underground heat is transferred by water that is heated as it passes through a subsurface region of hot rocks (a heat reservoir) that may be hundreds or thousands of feet deep.  The water is usually naturally occurring groundwater that seeps down along fractures in the rock; less typically, the water is artificially introduced by being pumped down from the surface.  The water is brought to the surface, as a liquid or steam, through holes drilled for the purpose. 

    By far the most abundant form of geothermal energy occurs at the relatively low temperatures of 80℃ to 180℃. Water circulated through heat reservoirs in this temperature range is able to extract enough heat to warm residential, commercial, and industrial spaces. More than 20,000 apartments in France are now heated by warm underground water drawn from a heat reservoir in a geologic structure near Paris called the Paris Basin. Iceland sits on a volcanic structure known as the Mid-Atlantic Ridge. Reykjavik, the capital of Iceland, is entirely heated by geothermal energy derived from volcanic heat.

    Geothermal reservoirs with temperatures above 180℃ are useful for generating electricity. They occur primarily in regions of recent volcanic activity as hot, dry rock; natural hot water; or natural steam. The latter two sources are limited to those few areas where surface water seeps down through underground faults or fractures to reach deep rocks heated by the recent activity of molten rock material. The world’s largest supply of natural steam occurs at The Geysers, 120 kilometers north of San Francisco, California. In the 1990s enough electricity to meet about half the needs of San Francisco was being generated there. This facility was then in its third decade of production and was beginning to show signs of decline, perhaps because of overdevelopment. By the late 1990s some 70 geothermal electric-generating plants were in operation in California, Utah, Nevada, and Hawaii, generating enough power to supply about a million people. Eighteen countries now generate electricity using geothermal heat.

    Extracting heat from very hot, dry rocks presents a more difficult problem: the rocks must be fractured to permit the circulation of water, and the water must be provided artificially. The rocks are fractured by water pumped down at very high pressures. Experiments are under way to develop technologies for exploiting this resource.

    Like most other energy sources, geothermal energy presents some environmental problems. The surface of the ground can sink if hot groundwater is withdrawn without being replaced. In addition, water heated geothermally can contain salts and toxic materials dissolved from the hot rock. These waters present a disposal problem if they are not returned to the ground from which they were removed.

    The contribution of geothermal energy to the world’s energy future is difficult to estimate. Geothermal energy is in a sense not renewable, because in most cases the heat would be drawn out of a reservoir much more rapidly than it would be replaced by the very slow geological processes by which heat flows through solid rock into a heat reservoir. However, in many places (for example, California, Hawaii, the Philippines, Japan, Mexico, the rift valleys of Africa) the resource is potentially so large that its future will depend on the economics of production. At present, we can make efficient use of only naturally occurring hot water or steam deposits. Although the potential is enormous, it is likely that in the near future geothermal energy can make important local contributions only where the resource is close to the user and the economics are favorable, as they are in California, New Zealand, and Iceland. Geothermal energy probably will not make large-scale contributions to the world energy budget until well into the twenty-first century, if ever.

The Origins of Agriculture

    How did it come about that farming developed independently in a number of world centers (the Southeast Asian mainland, Southwest Asia, Central America, lowland and highland South America, and equatorial Africa) at more or less the same time? Agriculture developed slowly among populations that had an extensive knowledge of plants and animals. Changing from hunting and gathering to agriculture had no immediate advantages. To start with, it forced the population to abandon the nomad’s life and become sedentary, to develop methods of storage and, often, systems of irrigation. While hunter-gatherers always had the option of moving elsewhere when the resources were exhausted, this became more difficult with farming. Furthermore, as the archaeological record shows, the state of health of agriculturalists was worse than that of their contemporary hunter-gatherers.

    Traditionally, it was believed that the transition to agriculture was the result of a worldwide population crisis. It was argued that once hunter-gatherers had occupied the whole world, the population started to grow everywhere and food became scarce; agriculture would have been a solution to this problem. We know, however, that contemporary hunter-gatherer societies control their population in a variety of ways. The idea of a world population crisis is therefore unlikely, although population pressure might have arisen in some areas.

    Climatic changes at the end of the glacial period 13,000 years ago have been proposed to account for the emergence of farming. The temperature increased dramatically in a short period of time (years rather than centuries), allowing for a growth of the hunting-gathering population due to the abundance of resources. There were, however, fluctuations in the climatic conditions, with the consequence that wet conditions were followed by dry ones, so that the availability of plants and animals oscillated brusquely.

    It would appear that the instability of the climatic conditions led populations that had originally been nomadic to settle down and develop a sedentary style of life, which led in turn to population growth and to the need to increase the amount of food available. Farming originated in these conditions. Later on, it became very difficult to change because of the significant expansion of these populations. It could be argued, however, that these conditions are not sufficient to explain the origins of agriculture. Earth had experienced previous periods of climatic change, and yet agriculture had not been developed.

    It is archaeologist Steven Mithen’s thesis, brilliantly developed in his book The Prehistory of the Mind (1996), that approximately 40,000 years ago the human mind developed cognitive fluidity, that is, the integration of the specializations of the mind: technical, natural history (geared to understanding the behavior and distribution of natural resources), social intelligence, and the linguistic capacity. Cognitive fluidity explains the appearance of art, religion, and sophisticated speech. Once humans possessed such a mind, they were able to find an imaginative solution to a situation of severe economic crisis such as the farming dilemma described earlier. Mithen proposes the existence of four mental elements to account for the emergence of farming: (1) the ability to develop tools that could be used intensively to harvest and process plant resources; (2) the tendency to use plants and animals as the medium to acquire social prestige and power; (3) the tendency to develop “social relationships” with animals structurally similar to those developed with people – specifically, the ability to think of animals as people (anthropomorphism) and of people as animals (totemism); and (4) the tendency to manipulate plants and animals.

    The fact that some societies domesticated animals and plants, discovered the use of metal tools, became literate, and developed a state should not make us forget that others developed pastoralism or horticulture (vegetable gardening) but remained illiterate and at low levels of productivity; a few entered the modern period as hunting and gathering societies. It is anthropologically important to inquire into the conditions that made some societies adopt agriculture while others remained hunter-gatherers or horticulturalists. However, it should be kept in mind that many societies that knew of agriculture more or less consciously avoided it. Whether Mithen’s explanation is satisfactory is open to contention, and some authors have recently emphasized the importance of other factors.

Autobiographical Memory

    Think back to your childhood and try to identify your earliest memory. How old were you?  Most people are not able to recount memories for experiences prior to the age of three years, a phenomenon called infantile amnesia. The question of why infantile amnesia occurs has intrigued psychologists for decades, especially in light of ample evidence that infants and young children can display impressive memory capabilities.  Many find that understanding the general nature of autobiographical memory, that is, memory for events that have occurred in one’s own life, can provide some important clues to this mystery.  Between ages three and four, children begin to give fairly lengthy and cohesive descriptions of events in their past. What factors are responsible for this developmental turning point?

    Perhaps the explanation goes back to some ideas raised by influential Swiss psychologist Jean Piaget – namely, that children under age two years represent events in a qualitatively different form than older children do. According to this line of thought, the verbal abilities that blossom in the two year old allow events to be coded in a form radically different from the action-based codes of the infant. Verbal abilities of one year olds are, in fact, related to their memories for events one year later. When researchers had one year olds imitate an action sequence one year after they first saw it, there was a correlation between the children’s verbal skills at the time they first saw the event and their success on the later memory task. However, even children with low verbal skills showed evidence of remembering the event; thus, memories may be facilitated by, but are not dependent on, those verbal skills.

    Another suggestion is that before children can talk about past events in their lives, they need to have reasonable understanding of the self as a psychological entity. The development of an understanding of the self becomes evident between the first and second years of life and shows rapid elaboration in subsequent years. The realization that the physical self has continuity in time, according to this hypothesis, lays the foundation for the emergence of autobiographical memory.

    A third possibility is that children will not be able to tell their own “life story” until they understand something about the general form stories take, that is, the structure of narrative. Knowledge about narratives arises from social interaction, particularly the storytelling that children experience from parents and the attempts parents make to talk with children about past events in their lives. When parents talk with children about “what we did today” or “last week” or “last year,” they guide the children’s formation of a framework for talking about the past. They also provide children with reminders about the memory and relay the message that memories are valued as part of the cultural experience. It is interesting to note that some studies show Caucasian American children have earlier childhood memories than Korean children do. Furthermore, other studies show that Caucasian American mother-child pairs talk about past events three times more often than do Korean mother-child pairs. Thus, the types of social experiences children have do factor into the development of autobiographical memories.

    A final suggestion is that children must begin to develop a “theory of mind” – an awareness of the concept of mental states (feelings, desires, beliefs, and thoughts), their own and those of others – before they can talk about their own past memories. Once children become capable of answering such questions as “What does it mean to remember?” and “What does it mean to know something?” improvements in memory seem to occur.

    It may be that the developments just described are intertwined with and influence one another. Talking with parents about the past may enhance the development of the self-concept, for example, as well as help a child understand what it means to “remember.” No doubt the ability to talk about one’s past represents memory of a different level of complexity than simple recognition or recall.

TPO22

Spartina

Spartina alterniflora, known as cordgrass, is a deciduous, perennial flowering plant native to the Atlantic coast and the Gulf Coast of the United States. It is the dominant native species of the lower salt marshes along these coasts, where it grows in the intertidal zone (the area covered by water during some parts of the day and exposed during others).

    These natural salt marshes are among the most productive habitats in the marine environment. Nutrient-rich water is brought to the wetlands during each high tide, making a high rate of food production possible. As the seaweed and marsh grass leaves die, bacteria break down the plant material, and insects, small shrimplike organisms, fiddler crabs, and marsh snails eat the decaying plant tissue, digest it, and excrete wastes high in nutrients. Numerous insects occupy the marsh, feeding on living or dead cordgrass tissue, and redwing blackbirds, sparrows, rodents, rabbits, and deer feed directly on the cordgrass. Each tidal cycle carries plant materials into the offshore water to be used by the subtidal organisms.

    Spartina is an exceedingly competitive plant.  It spreads primarily by underground stems; colonies form when pieces of the root system or whole plants float into an area and take root or when seeds float into a suitable area and germinate. Spartina establishes itself on substrates ranging from sand and silt to gravel and cobble and is tolerant of salinities ranging from that of near freshwater (0.05 percent) to that of salt water (3.5 percent).  Because they lack oxygen, marsh sediments are high in sulfides that are toxic to most plants.  Spartina has the ability to take up sulfides and convert them to sulfate, a form of sulfur that the plant can use; this ability makes it easier for the grass to colonize marsh environments. Another adaptive advantage is Spartina’s ability to use carbon dioxide more efficiently than most other plants.

    These characteristics make Spartina a valuable component of the estuaries where it occurs naturally. The plant functions as a stabilizer and a sediment trap and as a nursery area for estuarine fish and shellfish. Once established, a stand of Spartina begins to trap sediment, changing the substrate elevation, and eventually the stand evolves into a high marsh system where Spartina is gradually displaced by higher-elevation, brackish-water species. As elevation increases, narrow, deep channels of water form throughout the marsh. Along the east coast Spartina is considered valuable for its ability to prevent erosion and marshland deterioration; it is also used in coastal restoration projects and the creation of new wetland sites.

    Spartina was transported to Washington State in packing materials for oysters transplanted from the east coast in 1894. Leaving its insect predators behind, the cordgrass has been spreading slowly and steadily along Washington’s tidal estuaries on the west coast, crowding out the native plants and drastically altering the landscape by trapping sediment. Spartina modifies tidal mudflats, turning them into high marshes inhospitable to the many fish and waterfowl that depend on the mudflats. It is already hampering the oyster harvest and the Dungeness crab fishery, and it interferes with the recreational use of beaches and waterfronts. Spartina has been transplanted to England and to New Zealand for land reclamation and shoreline stabilization. In New Zealand the plant has spread rapidly, changing mudflats with marshy fringes to extensive salt meadows and reducing the number and kinds of birds and animals that use the marsh.

    Efforts to control Spartina outside its natural environment have included burning, flooding, shading plants with black canvas or plastic, smothering the plants with dredged materials or clay, applying herbicide, and repeated mowing. Little success has been reported in New Zealand and England; Washington State’s management program has tried many of these methods and is presently using the herbicide glyphosate to control its spread. Work has begun to determine the feasibility of using insects as biological controls, but effective biological controls are considered years away. Even with a massive effort, it is doubtful that complete eradication of Spartina from nonnative habitats is possible, for it has become an integral part of these shorelines and estuaries during the last 100 to 200 years.

The Birth of Photography

    Perceptions of the visible world were greatly altered by the invention of photography in the middle of the nineteenth century. In particular, and quite logically, the art of painting was forever changed, though not always in the ways one might have expected. The realistic and naturalistic painters of the mid- and late-nineteenth century were all intently aware of photography – as a thing to use, to learn from, and to react to.

    Unlike most major inventions, photography had been long and impatiently awaited. The images produced by the camera obscura, a boxlike device that used a pinhole or lens to throw an image onto a ground-glass screen or a piece of white paper, were already familiar – the device had been much employed by topographical artists like the Italian painter Canaletto in his detailed views of the city of Venice. What was lacking was a way of giving such images permanent form. This was finally achieved by Louis Daguerre (1787 – 1851), who perfected a way of fixing them on a silvered copper plate. His discovery, the “daguerreotype,” was announced in 1839. 

    A second and very different process was patented by the British inventor William Henry Talbot (1800 – 1877) in 1841.  Talbot’s “calotype” was the first negative-to-positive process and the direct ancestor of the modern photograph. The calotype was revolutionary in its use of chemically treated paper in which areas hit by light became dark in tone, producing a negative image.  This “negative”, as Talbot called it, could then be used to print multiple positive images on another piece of treated paper. 

    The two processes produced very different results. The daguerreotype was a unique image that reproduced what was in front of the camera lens in minute, unselective detail and could not be duplicated. The calotype could be made in series, and was thus the equivalent of an etching or an engraving. Its general effect was soft edged and tonal.

    One of the things that most impressed the original audience for photography was the idea of authenticity. Nature now seemed able to speak for itself, with a minimum of interference. The title Talbot chose for his book, The Pencil of Nature (the first part of which was published in 1844), reflected this feeling. Artists were fascinated by photography because it offered a way of examining the world in much greater detail. They were also afraid of it, because it seemed likely to make their own efforts unnecessary.

    Photography did indeed make certain kinds of painting obsolete – the daguerreotype virtually did away with the portrait miniature. It also made the whole business of making and owning images democratic. Portraiture, once a luxury for the privileged few, was suddenly well within the reach of many more people.

    In the long term, photography’s impact on the visual arts was far from simple. Because the medium was so prolific, in the sense that it was possible to produce a multitude of images very cheaply, it was soon treated as the poor relation of the fine arts, rather than their destined successor. Even those artists who were most dependent on photography became reluctant to admit that they made use of it, in case this compromised their professional standing.

    The rapid technical development of photography – the introduction of lighter and simpler equipment, and of new emulsions that coated photographic plates, film, and paper and enabled images to be made at much faster speeds – had some unanticipated consequences. Scientific experiments made by photographers such as Eadweard Muybridge (1830 – 1904) and Etienne-Jules Marey (1830 – 1904) demonstrated that the movements of both humans and animals differed widely from the way they had been traditionally represented in art. Artists, often reluctantly, were forced to accept the evidence provided by the camera. The new candid photography – unposed pictures that were made when the subjects were unaware that their pictures were being taken – confirmed these scientific results, and at the same time, thanks to the radical cropping (trimming) of images that the camera often imposed, suggested new compositional formats. The accidental effects obtained by candid photographers were soon being copied by artists such as the French painter Degas.

The Allende Meteorite

    Sometime after midnight on February 8, 1969, a large, bright meteor entered Earth’s atmosphere and broke into thousands of pieces that plummeted to the ground and scattered over an area 50 miles long and 10 miles wide in the state of Chihuahua in Mexico. The first meteorite from this fall was found in the village of Pueblito de Allende. Altogether, roughly two tons of meteorite fragments were recovered, all of which bear the name Allende for the location of the first discovery.

    Individual specimens of Allende are covered with a black, glassy crust that formed when their exteriors melted as they were slowed by Earth’s atmosphere. When broken open, Allende stones are revealed to contain an assortment of small, distinctive objects, spherical or irregular in shape and embedded in a dark gray matrix (binding material), which were once constituents of the solar nebula – the interstellar cloud of gas and dust out of which our solar system was formed.

    The Allende meteorite is classified as a chondrite. Chondrites take their name from the Greek word chondros – meaning “seed” – an allusion to their appearance as rocks containing tiny seeds. These seeds are actually chondrules: millimeter-sized melted droplets of silicate material that were cooled into spheres of glass and crystal. A few chondrules contain grains that survived the melting event, so these enigmatic chondrules must have formed when compact masses of nebular dust were fused at high temperatures – approaching 1,700 degrees Celsius – and then cooled before these surviving grains could melt. Study of the textures of chondrules confirms that they cooled rather quickly, in times measured in minutes or hours, so the heating events that formed them must have been localized. It seems very unlikely that large portions of the nebula were heated to such extreme temperatures, and huge nebula areas could not possibly have lost heat so fast. Chondrules must have been melted in small pockets of the nebula that were able to lose heat rapidly. The origin of these peculiar glassy spheres remains an enigma.

    Equally perplexing constituents of Allende are the refractory inclusions: irregular white masses that tend to be larger than chondrules.  They are composed of minerals uncommon on Earth, all rich in calcium and aluminum, the most refractory (resistant to melting) of the major elements in the nebula.  The same minerals that occur in refractory inclusions are believed to be the earliest-formed substances to have condensed out of the solar nebula.  However, studies of the textures of inclusions reveal that the order in which the minerals appeared in the inclusions varies from inclusion to inclusion, and often does not match the theoretical condensation sequence for those minerals. 

     Chondrules and inclusions in Allende are held together by the chondrite matrix, a mixture of fine-grained, mostly silicate minerals that also includes grains of iron metal and iron sulfide. At one time it was thought that these matrix grains might be pristine nebular dust, the sort of stuff from which chondrules and inclusions were made. However, detailed studies of the chondrite matrix suggest that much of it, too, has been formed by condensation or melting in the nebula, although minute amounts of surviving interstellar dust are mixed with the processed materials.

     All these diverse constituents are aggregated together to form chondritic meteorites, like Allende, that have chemical compositions much like that of the Sun. To compare the compositions of a meteorite and the Sun, it is necessary that we use ratios of elements rather than simply the abundances of atoms. After all, the Sun has many more atoms of any element, say iron, than does a meteorite specimen, but the ratios of iron to silicon in the two kinds of matter might be comparable. The compositional similarity is striking. The major difference is that Allende is depleted in the most volatile elements, like hydrogen, carbon, oxygen, nitrogen, and the noble gases, relative to the Sun. These are the elements that tend to form gases even at very low temperatures. We might think of chondrites as samples of distilled Sun, a sort of solar sludge from which only gases have been removed. Since practically all the solar system’s mass resides in the Sun, this similarity in chemistry means that chondrites have average solar system composition, except for the most volatile elements; they are truly lumps of nebular matter, probably similar in composition to the matter from which planets were assembled.
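
    The ratio comparison described above is straightforward to illustrate in code. The following Python sketch uses invented placeholder abundances (not measured values) purely to show why ratios, rather than raw atom counts, make the Sun and a meteorite comparable:

        # Hypothetical atom counts: the Sun's totals are vastly larger than a
        # meteorite's, but the iron-to-silicon RATIOS can still be compared.
        sun = {"Fe": 3.2e7, "Si": 3.9e7}        # placeholder values
        meteorite = {"Fe": 8.4e5, "Si": 1.0e6}  # placeholder values

        def ratio(sample, a="Fe", b="Si"):
            """Return the abundance ratio of element a to element b."""
            return sample[a] / sample[b]

        print(ratio(sun))        # ~0.82
        print(ratio(meteorite))  # ~0.84 -- similar despite very different totals

    If ratio after ratio comes out comparable (the volatile elements excepted), the meteorite has solar composition in the sense the passage describes.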

TPO23

Urban Climates

    The city is an extraordinary processor of mass and energy and has its own metabolism. A daily input of water, food, and energy of various kinds is matched by an output of sewage, solid waste, air pollutants, energy, and materials that have been transformed in some way. The quantities involved are enormous. Many aspects of this energy use affect the atmosphere of a city, particularly in the production of heat.

    In winter the heat produced by a city can equal or surpass the amount of heat available from the Sun. All the heat that warms a building eventually transfers to the surrounding air, a process that is quickest where houses are poorly insulated. But an automobile produces enough heat to warm an average house in winter, and if a house were perfectly insulated, one adult could also produce more than enough heat to warm it. Therefore, even without any industrial production of heat, an urban area tends to be warmer than the countryside that surrounds it.

    The burning of fuel, such as by cars, is not the only source of this increased heat. Two other factors contribute to the higher overall temperature in cities. The first is the heat capacity of the materials that constitute the city, which is typically dominated by concrete and asphalt. During the day, heat from the Sun can be conducted into these materials and stored – to be released at night. But in the countryside materials have a significantly lower heat capacity because a vegetative blanket prevents heat from easily flowing into and out of the ground. The second factor is that radiant heat coming into the city from the Sun is trapped in two ways: (1) by a continuing series of reflections among the numerous vertical surfaces that buildings present and (2) by the dust dome, the cloudlike layer of polluted air that most cities produce. Shortwave radiation from the Sun passes through the pollution dome more easily than outgoing longwave radiation does; the latter is absorbed by the gaseous pollutants of the dome and reradiated back to the urban surface.

    Cities, then, are warmer than the surrounding rural areas, and together they produce a phenomenon known as the urban heat island. Heat islands develop best under particular conditions associated with light winds, but they can form almost any time.  The precise configuration of a heat island depends on several factors.  For example, the wind can make a heat island stretch in the direction it blows.  When a heat island is well developed, variations can be extreme; in winter, busy streets in cities can be 17℃ warmer than the side streets.  Areas near traffic lights can be similarly warmer than the areas between them because of the effect of cars standing in traffic instead of moving. The maximum difference in temperature between neighboring urban and rural environments is called the heat-island intensity for that region. In general, the larger the city, the greater its heat-island intensity. The actual level of intensity depends on such factors as the physical layout, population density, and productive activities of a metropolis.

    The surface-atmosphere relationships inside metropolitan areas produce a number of climatic peculiarities. For one thing, the presence or absence of moisture is affected by the special qualities of the urban surface. With much of the built-up landscape impenetrable by water, even gentle rain runs off almost immediately from rooftops, streets, and parking lots. Thus, city surfaces, as well as the air above them, tend to be drier between episodes of rain; with little water available for the cooling process of evaporation, relative humidities are usually lower. Wind movements are also modified in cities because buildings increase the friction on air flowing around them. This friction tends to slow the speed of winds, making them far less efficient at dispersing pollutants. On the other hand, air turbulence increases because of the effect of skyscrapers on airflow. Rainfall is also increased in cities. The cause appears to be in part greater turbulence in the urban atmosphere as hot air rises from the built-up surface.

Seventeenth-Century Dutch Agriculture

    Agriculture and fishing formed the primary sector of the economy in the Netherlands in the seventeenth century. Dutch agriculture was modernized and commercialized: new crops and agricultural techniques raised levels of production so that they were in line with market demands, and cheap grain was imported annually from the Baltic region in large quantities. According to estimates, about 120,000 tons of imported grain fed about 600,000 people; that is about a third of the Dutch population. Importing the grain, which would have been expensive and time-consuming for the Dutch to have produced themselves, kept the price of grain low and thus stimulated individual demand for other foodstuffs and consumer goods.
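
    The figures above imply a quick back-of-the-envelope check; this is only a sketch of the arithmetic, not a claim made by the source:

        \frac{120{,}000\ \text{tons}}{600{,}000\ \text{people}} = 0.2\ \text{tons (200 kg) of grain per person per year}, \qquad 3 \times 600{,}000 \approx 1.8\ \text{million}

    That is, the imports covered a plausible per-person grain ration, and the “third of the population” remark implies a total Dutch population of roughly 1.8 million.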

    Apart from this, being able to give up labor-intensive grain production freed both the land and the workforce for more productive agricultural sectors. The peasants specialized in livestock husbandry and dairy farming as well as in cultivating industrial crops and fodder crops: flax, madder, and rape were grown, as were tobacco, hops, and turnips. These products were bought mostly by urban businesses. There was also a demand among urban consumers for dairy products such as butter and cheese, which, in the sixteenth century, had become more expensive than grain. The high prices encouraged the peasants to improve their animal husbandry techniques; for example, they began feeding their animals indoors in order to raise the milk yield of their cows.

    In addition to dairy farming and cultivating industrial crops, a third sector of the Dutch economy reflected the way in which agriculture was being modernized – horticulture.  In the sixteenth century, fruit and vegetables were to be found only in gardens belonging to wealthy people.  This changed in the early part of the seventeenth century when horticulture became accepted as an agricultural sector. Whole villages began to cultivate fruit and vegetables.  The produce was then transported by water to markets in the cities, where the consumption of fruit and vegetables was no longer restricted to the wealthy. 

    As the demand for agricultural produce from both consumers and industry increased, agricultural land became more valuable and people tried to work the available land more intensively and to reclaim more land from wetlands and lakes. In order to increase production on existing land, the peasants made more use of crop rotation and, in particular, began to apply animal waste to the soil regularly, rather than leaving the fertilization process up to the grazing livestock. For the first time, industrial waste, such as ash from the soap-boilers, was collected in the cities and sold in the country as artificial fertilizer. The increased yield and price of land justified reclaiming and draining even more land.

    The Dutch battle against the sea is legendary. Noorderkwartier in Holland, with its numerous lakes and stretches of water, was particularly suitable for land reclamation, and one of the biggest projects undertaken there was the draining of the Beemster lake, which began in 1608. The richest merchants in Amsterdam contributed money to reclaim a good 7,100 hectares of land. Forty-three windmills powered the drainage pumps so that they were able to lease the reclaimed land to farmers as early as 1612, with the investors receiving annual leasing payments at an interest rate of 17 percent. Land reclamation continued, and between 1590 and 1665 almost 100,000 hectares were reclaimed from the wetland areas of Holland, Zeeland, and Friesland. However, land reclamation decreased significantly after the middle of the seventeenth century because the price of agricultural products began to fall, making land reclamation far less profitable in the second part of the century.

    Dutch agriculture was finally affected by the general agricultural crisis in Europe during the last two decades of the seventeenth century. However, what is astonishing about this is not that Dutch agriculture was affected by critical phenomena such as a decrease in sales and production, but the fact that the crisis appeared only relatively late in Dutch agriculture. In Europe as a whole, the exceptional reduction in the population and the related fall in demand for grain since the beginning of the seventeenth century had caused the price of agricultural products to fall. Dutch peasants were able to remain unaffected by this crisis for a long time because they had specialized in dairy farming, industrial crops, and horticulture. However, toward the end of the seventeenth century, they too were overtaken by the general agricultural crisis.

Rock Art of the Australian Aborigines

    Ever since Europeans first explored Australia, people have been trying to understand the ancient rock drawings and carvings created by the Aborigines, the original inhabitants of the continent. Early in the nineteenth century, encounters with Aboriginal rock art tended to be infrequent and open to speculative interpretation, but since the late nineteenth century, awareness of the extent and variety of Australian rock art has been growing. In the latter decades of the twentieth century there were intensified efforts to understand and record the abundance of Australian rock art.

    The systematic study of this art is a relatively new discipline in Australia. Over the past four decades new discoveries have steadily added to the body of knowledge. The most significant data have come from a concentration on three major questions. First, what is the age of Australian rock art? Second, what is its stylistic organization, and is it possible to discern a sequence or a pattern of development between styles? Third, is it possible to interpret accurately the subject matter of ancient rock art, bringing to bear all available archaeological techniques and the knowledge of present-day Aboriginal informants? 

    The age of Australia’s rock art is constantly being revised, and earlier datings have been proposed as the result of new discoveries.  Currently, reliable scientific evidence dates the earliest creation of art on rock surfaces in Australia to somewhere between 30,000 and 50,000 years ago.  This in itself is an almost incomprehensible span of generations, and one that makes Australia’s rock art the oldest continuous art tradition in the world. 

    Although the remarkable antiquity of Australia’s rock art is now established, the sequences and meanings of its images have been widely debated. Since the mid-1970s a reasonably stable picture has formed of the organization of Australian rock art. In order to create a sense of structure to this picture, researchers have relied on a distinction that still underlies the forms of much indigenous visual culture – a distinction between geometric and figurative elements. Simple geometric repeated patterns – circles, concentric circles, and lines – constitute the iconography (characteristic images) of the earliest rock-art sites found across Australia. The frequency with which certain simple motifs appear in these oldest sites has led rock-art researchers to adopt a descriptive term – the Panaramitee style – a label which takes its name from the extensive rock pavements at Panaramitee North in desert South Australia, which are covered with motifs pecked into the surfaces. Certain features of these engravings lead to the conclusion that they are of great age – geological changes had clearly happened after the designs had been made, and local Aboriginal informants, when first questioned about them, seemed to know nothing of their origins. Furthermore, the designs were covered with “desert varnish,” a glaze that develops on rock surfaces over thousands of years of exposure to the elements. The simple motifs found at Panaramitee are common to many rock-art sites across Australia. Indeed, sites with engravings of geometric shapes are also to be found on the island of Tasmania, which was separated from the mainland of the continent some 10,000 years ago.

    In the 1970s, when the study of Australian archaeology was in an exciting phase of development and the great antiquity of rock art was becoming clear, Lesley Maynard, the archaeologist who coined the phrase “Panaramitee style,” suggested that a sequence could be determined for Australian rock art in which a geometric style gave way to a simple figurative style (outlines of figures and animals), followed by a range of complex figurative styles that, unlike the pan-Australian geometric tradition, tended toward much greater regional diversity. While accepting that this sequence fits the archaeological profile of those sites which were occupied continuously over many thousands of years, a number of writers have warned that the underlying assumption of such a sequence – a development from the simple and the geometric to the complex and naturalistic – obscures the cultural continuities in Aboriginal Australia in which geometric symbolism remains fundamentally important. In this context the simplicity of a geometric motif may be more apparent than real. Motifs of seeming simplicity can encode complex meanings in Aboriginal Australia. And has not twentieth-century art shown that naturalism does not necessarily follow abstraction in some kind of predetermined sequence?

TPO24

Lake Water

Where does the water in a lake come from, and how does water leave it? Water enters a lake from inflowing rivers, from underwater seeps and springs, from overland flow off the surrounding land, and from rain falling directly on the lake surface. Water leaves a lake via outflowing rivers, by soaking into the bed of the lake, and by evaporation. So much is obvious.

The questions become more complicated when actual volumes of water are considered: how much water enters and leaves by each route? Discovering the inputs and outputs of rivers is a matter of measuring the discharges of every inflowing and outflowing stream and river. Then exchanges with the atmosphere are calculated by finding the difference between the gains from rain, as measured (rather roughly) by rain gauges, and the losses by evaporation, measured with models that correct for the other sources of water loss. For the majority of lakes, certainly those surrounded by forests, input from overland flow is too small to have a noticeable effect. Changes in lake level not explained by river flows plus exchanges with the atmosphere must be due to the net difference between what seeps into the lake from the groundwater and what leaks into the groundwater. Note the word “net”: measuring the actual amounts of groundwater seepage into the lake and out of the lake is a much more complicated matter than merely inferring their difference.

Once all this information has been gathered, it becomes possible to judge whether a lake’s flow is mainly due to its surface inputs and outputs or to its underground inputs and outputs.  If the former are greater, the lake is a surface-water-dominated lake; if the latter, it is a seepage-dominated lake.  Occasionally, common sense tells you which of these two possibilities applies.  For example, a pond in hilly country that maintains a steady water level all through a dry summer in spite of having no streams flowing into it must obviously be seepage dominated. Conversely, a pond with a stream flowing in one end and out the other, which dries up when the stream dries up, is clearly surface water dominated. 

By whatever means, a lake is constantly gaining water and losing water: its water does not just sit there, or, anyway, not for long. This raises the matter of a lake’s residence time. The residence time is the average length of time that any particular molecule of water remains in the lake, and it is calculated by dividing the volume of water in the lake by the rate at which water leaves the lake. The residence time is an average; the time spent in the lake by a given molecule (if we could follow its fate) would depend on the route it took: it might flow through as part of the fastest, most direct current, or it might circle in a backwater for an indefinitely long time.
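
    The definition above can be written as a simple formula. The numbers in the worked example are hypothetical, chosen only to illustrate the calculation; they are not values from the passage:

        \tau = \frac{V}{Q_{\text{out}}}

    where \tau is the residence time, V is the volume of water in the lake, and Q_{\text{out}} is the total rate at which water leaves (outflow, seepage, and evaporation combined). A hypothetical lake holding V = 2 cubic kilometers of water and losing Q_{\text{out}} = 0.5 cubic kilometers per year would have a residence time of \tau = 2 / 0.5 = 4 years.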

Residence times vary enormously. They range from a few days for small lakes up to several hundred years for large ones; Lake Tahoe, in California, has a residence time of 700 years. The residence times for the Great Lakes of North America, namely, Lakes Superior, Michigan, Huron, Erie, and Ontario, are, respectively, 190, 100, 22, 2.5, and 6 years. Lake Erie’s is the lowest: although its area is larger than Lake Ontario’s, its volume is less than one-third as great because it is so shallow – less than 20 meters on average.

A given lake’s residence time is by no means a fixed quantity. It depends on the rate at which water enters the lake, and that depends on the rainfall and the evaporation rate. Climatic change (the result of global warming?) is dramatically affecting the residence times of some lakes in northwestern Ontario, Canada. In the period 1970 to 1986, rainfall in the area decreased from 1,000 millimeters to 650 millimeters per annum, while above-average temperatures speeded up the evapotranspiration rate (the rate at which water is lost to the atmosphere through evaporation and the processes of plant life). The result has been that the residence time of one of the lakes increased from 5 to 18 years during the study period. The slowing down of water renewal leads to a chain of further consequences: it causes dissolved chemicals to become increasingly concentrated, and this, in turn, has a marked effect on all living things in the lake.

Moving into Pueblos

In the Mesa Verde area of the ancient North American Southwest, living patterns changed in the thirteenth century, with large numbers of people moving into large communal dwellings called pueblos, often constructed at the edges of canyons, especially on the sides of cliffs. Abandoning small extended-family households to move into these large pueblos with dozens if not hundreds of other people was probably traumatic. Few of the cultural traditions and rules that today allow us to deal with dense populations existed for these people accustomed to household autonomy and the ability to move around the landscape almost at will.  And besides the awkwardness of having to share walls with neighbors, living in aggregated pueblos introduced other problems.  For people in cliff dwellings, hauling water, wood, and food to their homes was a major chore.  The stress on local resources, especially in the firewood needed for daily cooking and warmth, was particularly intense, and conditions in aggregated pueblos were not very hygienic. 

Given all the disadvantages of living in aggregated towns, why did people in the thirteenth century move into these closely packed quarters? For transitions of such suddenness, archaeologists consider either pull factors (benefits that drew families together) or push factors (some external threat or crisis that forced people to aggregate). In this case, push explanations dominate.

Population growth is considered a particularly influential push. After several generations of population growth, people packed the landscape in densities so high that communal pueblos may have been a necessary outcome. Around Sand Canyon, for example, populations grew from 5 – 12 people per square kilometer in the tenth century to as many as 30 – 50 by the 1200s. As densities increased, domestic architecture became larger, culminating in crowded pueblos. Some scholars expand on this idea by emphasizing a corresponding need for arable land to feed growing numbers of people: construction of small dams, reservoirs, terraces, and field houses indicates that farmers were intensifying their efforts during the 1200s. Competition for good farmland may also have prompted people to bond together to assert rights over the best fields.

Another important push was the onset of the Little Ice Age, a climatic phenomenon that led to cooler temperatures in the Northern Hemisphere. Although the height of the Little Ice Age was still around the corner, some evidence suggests that temperatures were falling during the thirteenth century. The environmental changes associated with this transition are not fully understood, but people living closest to the San Juan Mountains, to the northeast of Mesa Verde, were affected first. Growing food at these elevations is always difficult because of the short growing season. As the Little Ice Age progressed, farmers probably moved their fields to lower elevations, infringing on the lands of other farmers and pushing people together, thus contributing to the aggregations. Archaeologists identify a corresponding shift in populations toward the south and west toward Mesa Verde and away from higher elevations.

In the face of all these pushes, people in the Mesa Verde area had yet another reason to move into communal villages: the need for greater cooperation. Sharing and cooperation were almost certainly part of early Puebloan life, even for people living in largely independent single-household residences scattered across the landscape. Archaeologists find that even the most isolated residences during the eleventh and twelfth centuries obtained some pottery, and probably food, from some distance away, while major ceremonial events were opportunities for sharing food and crafts. Scholars believe that this cooperation allowed people to contend with a patchy environment in which precipitation and other resources varied across the landscape: if you produce a lot of food one year, you might trade it for pottery made by a distant ally who is having difficulty with crops – and the next year, the flow of goods might go in the opposite direction. But all of this appears to have changed in the thirteenth century. Although the climate remained as unpredictable as ever between one year and the next, it became much less locally diverse. In a bad year for farming, everyone was equally affected. No longer was it helpful to share widely. Instead, the most sensible thing would be for neighbors to combine efforts to produce as much food as possible, and thus aggregated towns were a sensible arrangement.

Breathing During Sleep

Of all the physiological differences in human sleep compared with wakefulness that have been discovered in the last decade, changes in respiratory control are most dramatic. Not only are there differences in the level of the functioning of respiratory systems, there are even changes in how they function. Movements of the rib cage for breathing are reduced during sleep, making the contractions of the diaphragm more important.  Yet because of the physics of lying down, the stomach applies weight against the diaphragm and makes it more difficult for the diaphragm to do its job.  However, there are many other changes that affect respiration when asleep.

During wakefulness, breathing is controlled by two interacting systems.  The first is an automatic, metabolic system whose control is centered in the brain stem. It subconsciously adjusts breathing rate and depth in order to regulate the levels of carbon dioxide (CO2) and oxygen (O2), and the acid-base ratio in the blood. The second system is the voluntary, behavioral system. Its control center is based in the forebrain, and it regulates breathing for use in speech, singing, sighing, and so on. It is capable of ignoring or overriding the automatic, metabolic system and produces an irregular pattern of breathing.

During NREM (the phase of sleep in which there is no rapid eye movement), breathing becomes deeper and more regular, but there is also a decrease in the breathing rate, resulting in less air being exchanged overall. This occurs because during NREM sleep the automatic, metabolic system has exclusive control over breathing and the body uses less oxygen and produces less carbon dioxide. Also, during sleep the automatic metabolic system is less responsive to carbon dioxide levels and oxygen levels in the blood. Two things result from these changes in breathing control that occur during sleep. First, there may be a brief cessation or reduction of breathing when falling asleep as the sleeper waxes and wanes between sleep and wakefulness and their differing control mechanisms. Second, once sleep is fully obtained, there is an increase of carbon dioxide and a decrease of oxygen in the blood that persists during NREM.

But that is not all that changes. During all phases of sleep, several changes in the air passages have been observed. It takes twice as much effort to breathe during sleep because of greater resistance to airflow in the airways and changes in the efficiency of the muscles used for breathing. Some of the muscles that help keep the upper airway open when breathing tend to become more relaxed during sleep, especially during REM (the phase of sleep in which there is rapid eye movement). Without this muscular action, inhaling is like sucking air out of a balloon – the narrow passages tend to collapse. Also there is a regular cycle of change in resistance between the two sides of the nose. If something blocks the “good” side, such as congestion from allergies or a cold, then resistance increases dramatically. Coupled with these factors is the loss of the complex interactions among the muscles that can change the route of airflow from nose to mouth.

Other respiratory regulating mechanisms apparently cease functioning during sleep. For example, during wakefulness there is an immediate, automatic, adaptive increase in breathing effort when inhaling is made more difficult (such as breathing through a restrictive face mask). This reflexive adjustment is totally absent during NREM sleep. Only after several inadequate breaths under such conditions, resulting in the considerable elevation of carbon dioxide and reduction of oxygen in the blood, is breathing effort adjusted. Finally, the coughing reflex in reaction to irritants in the airway produces not a cough during sleep but a cessation of breathing. If the irritation is severe enough, a sleeping person will arouse, clear the airway, then resume breathing and likely return to sleep.

Additional breathing changes occur during REM sleep that are even more dramatic than the changes that occur during NREM. The amount of air exchanged is even lower in REM than NREM because, although breathing is more rapid in REM, it is also more irregular, with brief episodes of shallow breathing or absence of breathing. In addition, breathing during REM depends much more on the action of the diaphragm and much less on rib cage action.

TPO25

The Surface of Mars

    The surface of Mars shows a wide range of geologic features, including huge volcanoes – the largest known in the solar system – and extensive impact cratering. Three very large volcanoes are found on the Tharsis bulge, an enormous geologic area near Mars’s equator. Northwest of Tharsis is the largest volcano of all: Olympus Mons, with a height of 25 kilometers and measuring some 700 kilometers in diameter at its base. The three large volcanoes on the Tharsis bulge are a little smaller – a “mere” 18 kilometers high.

    None of these volcanoes was formed as a result of collisions between plates of the Martian crust – there is no plate motion on Mars. Instead, they are shield volcanoes – volcanoes with broad, sloping sides formed by molten rock. All four show distinctive lava channels and other flow features similar to those found on shield volcanoes on Earth. Images of the Martian surface reveal many hundreds of volcanoes. Most of the largest volcanoes are associated with the Tharsis bulge, but many smaller ones are found in the northern plains.

    The great height of Martian volcanoes is a direct consequence of the planet’s low surface gravity. As lava flows and spreads to form a shield volcano, the volcano’s eventual height depends on the new mountain’s ability to support its own weight. The lower the gravity, the lesser the weight and the greater the height of the mountain. It is no accident that Maxwell Mons on Venus and the Hawaiian shield volcanoes on Earth rise to about the same height (about 10 kilometers) above their respective bases – Earth and Venus have similar surface gravity. Mars’s surface gravity is only 40 percent that of Earth, so volcanoes rise roughly 2.5 times as high. Are the Martian shield volcanoes still active? Scientists have no direct evidence for recent or ongoing eruptions, but if these volcanoes were active as recently as 100 million years ago (an estimate of the time of last eruption based on the extent of impact cratering on their slopes), some of them may still be at least intermittently active. Millions of years, though, may pass between eruptions.
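
    The 2.5-times figure follows directly if, as the passage implies, the height a shield volcano can reach scales inversely with surface gravity:

        h_{\text{Mars}} \approx h_{\text{Earth}} \times \frac{g_{\text{Earth}}}{g_{\text{Mars}}} = 10\ \text{km} \times \frac{1}{0.4} = 25\ \text{km}

    which matches the 25-kilometer height of Olympus Mons cited at the start of the passage.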

    Another prominent feature of Mars’s surface is cratering. The Mariner spacecraft found that the surface of Mars, as well as that of its two moons, is pitted with impact craters formed by meteoroids falling in from space. As on our Moon, the smaller craters are often filled with surface matter – mostly dust – confirming that Mars is a dry desert world. However, Martian craters get filled in considerably faster than their lunar counterparts. On the Moon, ancient craters less than 100 meters across (corresponding to depths of about 20 meters) have been obliterated, primarily by meteoritic erosion. On Mars, there are relatively few craters less than about 5 kilometers in diameter. The Martian atmosphere is an efficient erosive agent, with Martian winds transporting dust from place to place and erasing surface features much faster than meteoritic impacts alone can obliterate them.

    As on the Moon, the extent of large impact cratering (i.e., craters too big to have been filled in by erosion since they were formed) serves as an age indicator for the Martian surface. Age estimates ranging from four billion years for Mars’s southern highlands to a few hundred million years in the youngest volcanic areas were obtained in this way.

    The detailed appearance of Martian impact craters provides an important piece of information about conditions just below the planet’s surface. Martian craters are surrounded by ejecta (debris formed as a result of an impact) that looks quite different from its lunar counterparts. A comparison of the Copernicus crater on the Moon with the (fairly typical) crater Yuty on Mars demonstrates the differences. The ejecta surrounding the lunar crater is just what one would expect from an explosion ejecting a large volume of dust, soil, and boulders. However, the ejecta on Mars gives the distinct impression of a liquid that has splashed or flowed out of the crater. Geologists think that this fluidized ejecta crater indicates that a layer of permafrost, or water ice, lies just a few meters under the surface. Explosive impacts heated and liquefied the ice, resulting in the fluid appearance of the ejecta. 

The Decline of Venetian Shipping

    In the late thirteenth century, northern Italian cities such as Genoa, Florence, and Venice began an economic resurgence that made them into the most important economic centers of Europe. By the seventeenth century, however, other European powers had taken over, as the Italian cities lost much of their economic might.

    This decline can be seen clearly in the changes that affected Venetian shipping and trade. First, Venice’s intermediary functions in the Adriatic Sea, where it had dominated the business of shipping for other parties, were lost to direct trading. In the fifteenth century there was little problem recruiting sailors to row the galleys (large ships propelled by oars); guilds (business associations) were required to provide rowers, and through a draft system free citizens served compulsorily when called for. In the early sixteenth century the shortage of rowers was not serious because the demand for galleys was limited by a move to round ships (round-hulled ships with more cargo space), which required fewer rowers. But the shortage of crews proved to be a greater and greater problem, despite continuous appeals to Venice’s tradition of maritime greatness. Even though sailors’ wages doubled among the northern Italian cities from 1550 to 1590, this did not elicit an increased supply. 

  The problem in shipping extended to the Arsenale, Venice’s huge and powerful shipyard. Timber ran short, and it was necessary to procure it from farther and farther away. In ancient Roman times, the Italian peninsula had great forests of fir preferred for warships, but scarcity was apparent as early as the early fourteenth century. Arsenale officers first brought timber from the foothills of the Alps, then from north toward Trieste, and finally from across the Adriatic. Private shipbuilders were required to buy their oak abroad. As the costs of shipbuilding rose, Venice clung to its outdated standards while the Dutch were innovating in lighter and more easily handled ships.

    The step from buying foreign timber to buying foreign ships was regarded as a short one, especially when complaints were heard in the latter sixteenth century that the standards and traditions of the Arsenale were running down. Work was stretched out and done poorly. Older workers had been allowed to stop work a half hour before the regular time, and in 1601 younger workers left with them. Merchants complained that the privileges reserved for Venetian-built and Venetian-owned ships were first extended to those Venetians who bought ships from abroad and then to foreign-built and foreign-owned vessels. Historian Frederic Lane observes that after the loss of ships in battle in the late sixteenth century, the shipbuilding industry no longer had the capacity to recover that it had displayed at the start of the century.

    The conventional explanation for the loss of Venetian dominance in trade is the establishment of the Portuguese direct sea route to the East, replacing the overland Silk Road from the Black Sea and the highly profitable Indian Ocean-caravan-eastern Mediterranean route to Venice. The Portuguese Vasco da Gama’s voyage around southern Africa to India took place at the end of the fifteenth century, and by 1502 the trans-Arabian caravan route had been cut off by political unrest.

    The Venetian Council finally allowed round ships to enter the trade that was previously reserved for merchant galleys, thus reducing transport costs by one third. Prices of spices delivered by ship from the eastern Mediterranean came to equal those of spices transported by Portuguese vessels, but the increase in quantity with both routes in operation drove the price far down. Gradually, Venice’s role as a storage and distribution center for spices and silk, dyes, cotton, and gold decayed, and by the early seventeenth century Venice had lost its monopoly in markets such as France and southern Germany.

    Venetian shipping had started to decline from about 1530 – before the entry into the Mediterranean of large volumes of Dutch and British shipping – and was clearly outclassed by the end of the century. A contemporary of Shakespeare (1564 – 1616) observed that the productivity of Italian shipping had declined, compared with that of the British, because of conservatism and loss of expertise. Moreover, Italian sailors were deserting and emigrating, and captains, no longer recruited from the ranks of nobles, were weak on navigation.

The Evolutionary Origin of Plants

The evolutionary history of plants has been marked by a series of adaptations. The ancestors of plants were photosynthetic single-celled organisms probably similar to today’s algae. Like modern algae, the organisms that gave rise to plants presumably lacked true roots, stems, leaves, and complex reproductive structures such as flowers. All of these features appeared later in the evolutionary history of plants. Of today’s different groups of algae, green algae are probably the most similar to ancestral plants. This supposition stems from the close phylogenetic (natural evolutionary) relationship between the two groups. DNA comparisons have shown that green algae are plants’ closest living relatives. In addition, other lines of evidence support the hypothesis that land plants evolved from ancestral green algae: green algae use the same type of chlorophyll and accessory pigments in photosynthesis as do land plants. This would not be true of red or brown algae. Green algae store food as starch, as do land plants, and have cell walls made of cellulose, similar in composition to those of land plants. Again, the food storage and cell wall molecules of red and brown algae are different.

    Today green algae live mainly in freshwater, suggesting that their early evolutionary history may have occurred in freshwater habitats. If so, the green algae would have been subjected to environmental pressures that resulted in adaptations that enhanced their potential to give rise to land-dwelling organisms.

    The environmental conditions of freshwater habitats, unlike those of ocean habitats, are highly variable. Water temperature can fluctuate seasonally or even daily, and changing levels of rainfall can lead to fluctuations in the concentration of chemicals in the water or even to periods in which the aquatic habitat dries up. Ancient freshwater green algae must have evolved features that enabled them to withstand extremes of temperature and periods of dryness.  These adaptations served their descendants well as they invaded land.

    The terrestrial world is green now, but it did not start out that way. When plants first made the transition ashore more than 400 million years ago, the land was barren and desolate, inhospitable to life. From a plant’s evolutionary viewpoint, however, it was also a land of opportunity, free of competitors and predators and full of carbon dioxide and sunlight (the raw materials for photosynthesis, which are present in far higher concentrations in air than in water). So once natural selection had shaped the adaptations that helped plants overcome the obstacles to terrestrial living, plants prospered and diversified.

    When plants pioneered the land, they faced a range of challenges posed by terrestrial environments. On land, the supportive buoyancy of water is missing, the plant is no longer bathed in a nutrient solution, and the air tends to dry things out. These conditions favored the evolution of structures that support the body, vessels that transport water and nutrients to all parts of the plant, and structures that conserve water. The resulting adaptations to dry land include some structural features that arose early in plant evolution; now these features are common to virtually all land plants. They include roots or rootlike structures, a waxy cuticle that covers the surfaces of leaves and stems and limits the evaporation of water, and pores called stomata in leaves and stems that allow gas exchange but close when water is scarce, thus reducing water loss. Other adaptations occurred later in the transition to terrestrial life and are now widespread but not universal among plants. These include conducting vessels that transport water and minerals upward from the roots and that move photosynthetic products from the leaves to the rest of the plant body and the stiffening substance lignin, which supports the plant body, helping it expose maximum surface area to sunlight.

   These adaptations allowed an increasing diversity of plant forms to exploit dry land. Life on land, however, also required new methods of transporting sperm to eggs. Unlike aquatic and marine forms, land plants cannot always rely on water currents to carry their sex cells and disperse their fertilized eggs. So the most successful groups of land plants are those that evolved methods of fertilized sex cell dispersal that are independent of water and structures that protect developing embryos from drying out. Protected embryos and waterless dispersal of sex cells were achieved with the origin of seed plants and the key evolutionary innovations that they introduced: pollen, seeds, and, later, flowers and fruits.

TPO 26

Energy and the Industrial Revolution

    For years historians have sought to identify crucial elements in the eighteenth-century rise in industry, technology, and economic power known as the Industrial Revolution, and many give prominence to the problem of energy. Until the eighteenth century, people relied on energy derived from plants as well as animal and human muscle to provide power. Increased efficiency in the use of water and wind helped with such tasks as pumping, milling, or sailing. However, by the eighteenth century, Great Britain in particular was experiencing an energy shortage. Wood, the primary source of heat for homes and industries and also used in the iron industry as processed charcoal, was diminishing in supply. Great Britain had large amounts of coal; however, there were not yet efficient means by which to produce mechanical energy or to power machinery. This was to occur with progress in the development of the steam engine.

    In the late 1700s James Watt designed an efficient and commercially viable steam engine that was soon applied to a variety of industrial uses as it became cheaper to use. The engine helped solve the problem of draining coal mines of groundwater and increased the production of coal needed to power steam engines elsewhere. A rotary engine attached to the steam engine enabled shafts to be turned and machines to be driven, resulting in mills using steam power to spin and weave cotton. Since the steam engine was fired by coal, the large mills did not need to be located by rivers, as had mills that used water-driven machines. The shift to increased mechanization in cotton production is apparent in the import of raw cotton and the sale of cotton goods. Between 1760 and 1850, the amount of raw cotton imported increased 230 times. Production of British cotton goods increased sixtyfold, and cotton cloth became Great Britain’s most important product, accounting for one-half of all exports. The success of the steam engine resulted in increased demands for coal, and the consequent increase in coal production was made possible as the steam-powered pumps drained water from the ever-deeper coal seams found below the water table.

    The availability of steam power and the demands for new machines facilitated the transformation of the iron industry. Charcoal, made from wood and thus in limited supply, was replaced with coal-derived coke (substance left after coal is heated) as steam-driven bellows came into use for producing raw iron. Impurities were burnt away with the use of coke, producing a high-quality refined iron. Reduced cost was also instrumental in developing steam-powered rolling mills capable of producing finished iron of various shapes and sizes. The resulting boom in the iron industry expanded the annual iron output by more than 170 times between 1740 and 1840, and by the 1850s Great Britain was producing more tons of iron than the rest of the world combined. The developments in the iron industry were in part a response to the demand for more machines and the ever-widening use of higher-quality iron in other industries.

    Steam power and iron combined to revolutionize transport, which in turn had further implications. Improvements in road construction and sailing had occurred, but shipping heavy freight over land remained expensive, even with the use of rivers and canals wherever possible. Parallel rails had long been used in mining operations to move bigger loads, but horses were still the primary source of power.  However, the arrival of the steam engine initiated a complete transformation in rail transportation, entrenching and expanding the Industrial Revolution.  As transportation improved, distant and larger markets within the nation could be reached, thereby encouraging the development of larger factories to keep pace with increasing sales.  Greater productivity and rising demands provided entrepreneurs with profits that could be reinvested to take advantage of new technologies to further expand capacity, or to seek alternative investment opportunities.  Also, the availability of jobs in railway construction attracted many rural laborers accustomed to seasonal and temporary employment. When the work was completed, many moved to other construction jobs or to factory work in cities and towns, where they became part of an expanding working class.

Survival of Plants and Animals in Desert Conditions

    The harsh conditions in deserts are intolerable for most plants and animals. Despite these conditions, however, many varieties of plants and animals have adapted to deserts in a number of ways. Most plant tissues die if their water content falls too low: the nutrients that feed plants are transmitted by water; water is a raw material in the vital process of photosynthesis; and water regulates the temperature of a plant by its ability to absorb heat and because water vapor lost to the atmosphere through the leaves helps to lower plant temperatures.  Water controls the volume of plant matter produced.  The distribution of plants within different areas of desert is also controlled by water.  Some areas, because of their soil texture, topographical position, or distance from rivers or groundwater, have virtually no water available to plants, whereas others do. 

    The nature of plant life in deserts is also highly dependent on the fact that they have to adapt to the prevailing aridity. There are two general classes of vegetation: long-lived perennials, which may be succulent (water-storing) and are often dwarfed and woody; and annuals or ephemerals, which have a short life cycle and may form a fairly dense stand immediately after rain.

    The ephemeral plants evade drought. Given a year of favorable precipitation, such plants will develop vigorously and produce large numbers of flowers and fruit. This replenishes the seed content of the desert soil. The seeds then lie dormant until the next wet year, when the desert blooms again.

    The perennial vegetation adjusts to the aridity by means of various avoidance mechanisms. Most desert plants are probably best classified as xerophytes. They possess drought-resisting adaptations: loss of water through the leaves is reduced by means of dense hairs covering waxy leaf surfaces, by the closure of pores during the hottest times to reduce water loss, and by the rolling up or shedding of leaves at the beginning of the dry season. Some xerophytes, the succulents (including cacti), store water in their structures. Another way of countering drought is to have a limited amount of mass above ground and to have extensive root networks below ground. It is not unusual for the roots of some desert perennials to extend downward more than ten meters. Some plants are woody in type – an adaptation designed to prevent collapse of the plant tissue when water stress produces wilting. Another class of desert plant is the phreatophyte. These have adapted to the environment by the development of long taproots that penetrate downward until they approach the assured water supply provided by groundwater. Among these plants are the date palm, tamarisk, and mesquite. They commonly grow near stream channels, springs, or on the margins of lakes.

    Animals also have to adapt to desert conditions, and they may do it through two forms of behavioral adaptation: they either escape or retreat. Escape involves such actions as aestivation, a condition of prolonged dormancy, or torpor, during which animals reduce their metabolic rate and body temperature during the hot season or during very dry spells.

    Seasonal migration is another form of escape, especially for large mammals or birds. The term retreat is applied to the short-term escape behavior of desert animals, and it usually assumes the pattern of a daily rhythm. Birds shelter in nests, rock overhangs, trees, and dense shrubs to avoid the hottest hours of the day, while mammals like the kangaroo rat burrow underground.

    Some animals have behavioral, physiological, and morphological (structural) adaptations that enable them to withstand extreme conditions. For example, the ostrich has plumage that is so constructed that the feathers are long but not too dense. When conditions are hot, the ostrich erects them on its back, thus increasing the thickness of the barrier between solar radiation and the skin. The sparse distribution of the feathers, however, also allows considerable lateral air movement over the skin surface, thereby permitting further heat loss by convection. Furthermore, the birds orient themselves carefully with regard to the Sun and gently flap their wings to increase convection cooling.

Sumer and the First Cities of the Ancient Near East

    The earliest of the city states of the ancient Near East appeared at the southern end of the Mesopotamian plain, the area between the Tigris and Euphrates rivers in what is now Iraq. It was here that the civilization known as Sumer emerged in its earliest form in the fifth millennium. At first sight, the plain did not appear to be a likely home for a civilization. There were few natural resources, no timber, stone, or metals. Rainfall was limited, and what water there was rushed across the plain in the annual flood of melted snow. As the plain fell only 20 meters in 500 kilometers, the beds of the rivers shifted constantly. It was this that made the organization of irrigation, particularly the building of canals to channel and preserve the water, essential. Once this was done and the silt carried down by the rivers was planted, the rewards were rich: four to five times what rain-fed earth would produce. It was these conditions that allowed an elite to emerge, probably as an organizing class, and to sustain itself through the control of surplus crops.

    It is difficult to isolate the factors that led to the next development – the emergence of urban settlements. The earliest, that of Eridu, about 4500 B.C.E., and Uruk, a thousand years later, center on impressive temple complexes built of mud brick. In some way, the elite had associated themselves with the power of the gods. Uruk, for instance, had two patron gods – Anu, the god of the sky and sovereign of all other gods, and Inanna, a goddess of love and war – and there were others, patrons of different cities. Human beings were at their mercy. The biblical story of the Flood may originate in Sumer. In the earliest version, the gods destroy the human race because its clamor had been so disturbing to them.

    It used to be believed that before 3000 B.C.E. the political and economic life of the cities was centered on their temples, but it now seems probable that the cities had secular rulers from earliest times.  Within the city lived administrators, craftspeople, and merchants. (Trading was important, as so many raw materials, the semiprecious stones for the decoration of the temples, timbers for roofs, and all metals, had to be imported.)  An increasingly sophisticated system of administration led in about 3300 B.C.E. to the appearance of writing.  The earliest script was based on logograms, with a symbol being used to express a whole word.  The logograms were incised on damp clay tablets with a stylus with a wedge shape at its end. (The Romans called the shape cuneus and this gives the script its name of cuneiform.) Two thousand logograms have been recorded from these early centuries of writing. A more economical approach was to use a sign to express not a whole word but a single syllable. (To take an example: the Sumerian word for “head” was “sag.” Whenever a word containing the sound “sag” was to be written, the sign for “sag” could be used to express that syllable, with the remaining syllables of the word expressed by other signs.) By 2300 B.C.E. the number of signs required had been reduced to 600, and the range of words that could be expressed had widened. Texts dealing with economic matters predominated, as they always had done; but at this point works of theology, literature, history, and law also appeared.

    Other innovations of the late fourth millennium include the wheel, probably developed first as a more efficient way of making pottery and then transferred to transport. A tablet engraved about 3000 B.C.E. provides the earliest known example from Sumer, a roofed boxlike sledge mounted on four solid wheels. A major development was the discovery, again about 3000 B.C.E., that if copper, which had been known in Mesopotamia since about 3500 B.C.E., was mixed with tin, the result was bronze, a much harder metal. Although copper and stone tools continued to be used, bronze was far more successful in creating sharp edges that could be used as anything from saws and scythes to weapons. The period from 3000 to 1000 B.C.E., when the use of bronze became widespread, is normally referred to as the Bronze Age.

 

TPO 27

 

Crafts in the Ancient Near East

     Some of the earliest human civilizations arose in southern Mesopotamia, in what is now southern Iraq, in the fourth millennium B.C.E. In the second half of that millennium, in the south around the city of Uruk, there was an enormous escalation in the area occupied by permanent settlements. A large part of that increase took place in Uruk itself, which became a real urban center surrounded by a set of secondary settlements. While population estimates are notoriously unreliable, scholars assume that Uruk inhabitants were able to support themselves from the agricultural production of the fields surrounding the city, which could be reached with a daily commute. But Uruk’s dominant size in the entire region, far surpassing that of other settlements, indicates that it was a regional center and a true city. Indeed, it was the first city in human history.

     The vast majority of its population remained active in agriculture, even those people living within the city itself. But a small segment of the urban society started to specialize in nonagricultural tasks as a result of the city’s role as a regional center. Within the productive sector, there was a growth of a variety of specialist craftspeople. Early in the Uruk period, the use of undecorated utilitarian pottery was probably the result of specialized mass production. In an early fourth-millennium level of the Eanna archaeological site at Uruk, a pottery style appears that is most characteristic of this process, the so-called beveled-rim bowl. It is a rather shallow bowl that was crudely made in a mold; hence, in only a limited number of standard sizes. For some unknown reason, many were discarded, often still intact, and thousands have been found all over the Near East. The beveled-rim bowl is one of the most telling diagnostic finds for identifying an Uruk-period site. Of importance is the fact that it was produced rapidly in large amounts, most likely by specialists in a central location.

     A variety of documentation indicates that certain goods, once made by a family member as one of many duties, were later made by skilled artisans. Certain images depict groups of people, most likely women, involved in weaving textiles, an activity we know from later third-millennium texts to have been vital in the economy and to have been centrally administered. Also, what may have been a specialized metal-producing workshop was excavated in a small area at Uruk. It contained a number of channels lined by a sequence of holes, about 50 centimeters deep, all showing burn marks and filled with ashes. This has been interpreted as the remains of a workshop where molten metal was scooped up from the channel and poured into molds in the holes. Some type of mass production by specialists was involved here.

     Objects themselves suggest that they were the work of skilled professionals. In the late Uruk period (3500 – 3100 B.C.E.), there first appeared a type of object that remained characteristic for Mesopotamia throughout its entire history: the cylinder seal. This was a small cylinder, usually no more than 3 centimeters high and 2 centimeters in diameter, of shell, bone, faience (a glassy type of stoneware), or various types of stones, on which a scene was carved into the surface. When rolled over a soft material—primarily the clay of bullae (round seals), tablets, or clay lumps attached to boxes, jars, or door bolts—the scene would appear in relief, easily legible. The technological knowledge needed to carve it was far superior to that for stamp seals, which had appeared in the early Neolithic period (approximately 10,000 – 5000 B.C.E.). From the first appearance of cylinder seals, the carved scenes could be highly elaborate and refined, indicating the work of specialist stone-cutters. Similarly, the late Uruk period shows the first monumental art, relief, and statuary in the round, made with a degree of mastery that only a professional could have produced.

The Formation of Volcanic Islands

     Earth’s surface is not made up of a single sheet of rock that forms a crust but rather a number of “tectonic plates” that fit closely, like the pieces of a giant jigsaw puzzle. Some plates carry islands or continents; others form the seafloor. All are slowly moving because the plates float on a denser semiliquid mantle, the layer between the crust and Earth’s core. The plates have edges that are spreading ridges (where two plates are moving apart and new seafloor is being created), subduction zones (where two plates collide and one plunges beneath the other), or transform faults (where two plates neither converge nor diverge but merely move past one another). It is at the boundaries between plates that most of Earth’s volcanism and earthquake activity occur.

     Generally speaking, the interiors of plates are geologically uneventful. However, there are exceptions. A glance at a map of the Pacific Ocean reveals that there are many islands far out at sea that are actually volcanoes—many no longer active, some overgrown with coral—that originated from activity at points in the interior of the Pacific Plate that forms the Pacific seafloor.

     How can volcanic activity occur so far from a plate boundary? The Hawaiian Islands provide a very instructive answer.  Like many other island groups, they form a chain.  The Hawaiian Island Chain extends northwest from the island of Hawaii.  In the 1840s American geologist James Daly observed that the different Hawaiian Islands seem to share a similar geologic evolution but are progressively more eroded, and therefore probably older, toward the northwest.  Then in 1963, in the early days of the development of the theory of plate tectonics, Canadian geophysicist Tuzo Wilson realized that this age progression could result if the islands were formed on a surface plate moving over a fixed volcanic source in the interior. Wilson suggested that the long chain of volcanoes stretching northwest from Hawaii is simply the surface expression of a long-lived volcanic source located beneath the tectonic plate in the mantle. Today’s most northwestern island would have been the first to form. Then, as the plate moved slowly northwest, new volcanic islands would have formed as the plate moved over the volcanic source. The most recent island, Hawaii, would be at the end of the chain and is now over the volcanic source.

    Although this idea was not immediately accepted, the dating of lavas in the Hawaiian (and other) chains showed that their ages increase away from the presently active volcano, just as Daly had suggested. Wilson’s analysis of these data is now a central part of plate tectonics. Most volcanoes that occur in the interiors of plates are believed to be produced by mantle plumes, columns of molten rock that rise from deep within the mantle. A volcano remains an active “hot spot” as long as it is over the plume. The plumes apparently originate at great depths, perhaps as deep as the boundary between the core and the mantle, and many have been active for a very long time. The oldest volcanoes in the Hawaiian hot-spot trail have ages close to 80 million years. Other islands, including Tahiti and Easter Island in the Pacific, Reunion and Mauritius in the Indian Ocean, and indeed most of the large islands in the world’s oceans, owe their existence to mantle plumes.

     The oceanic volcanic islands and their hot-spot trails are thus especially useful for geologists because they record the past locations of the plate over a fixed source. They therefore permit the reconstruction of the process of seafloor spreading, and consequently of the geography of continents and of ocean basins in the past. For example, given the current position of the Pacific Plate, Hawaii is above the Pacific Ocean hot spot. So the position of the Pacific Plate 50 million years ago can be determined by moving it such that a 50-million-year-old volcano in the hot-spot trail sits at the location of Hawaii today. However, because the ocean basins really are short-lived features on geologic time scales, reconstructing the world’s geography by backtracking along the hot-spot trail works only for the last 5 percent or so of geologic time.

Predator-Prey Cycles

     How do predators affect populations of the prey animals? The answer is not as simple as might be thought. Moose reached Isle Royale in Lake Superior by crossing over winter ice and multiplied freely there in isolation without predators. When wolves later reached the island, naturalists widely assumed that the wolves would play a key role in controlling the moose population. Careful studies have demonstrated, however, that this is not the case. The wolves eat mostly old or diseased animals that would not survive long anyway. In general, the moose population is controlled by food availability, disease, and other factors rather than by the wolves.

     When experimental populations are set up under simple laboratory conditions, the predator often exterminates its prey and then becomes extinct itself, having nothing left to eat. However, if safe areas like those prey animals have in the wild are provided, the prey population drops to low levels but not to extinction. Low prey population levels then provide inadequate food for the predators, causing the predator population to decrease. When this occurs, the prey population can rebound. In this situation the predator and prey populations may continue in this cyclical pattern for some time.

     Population cycles are characteristic of some species of small mammals, and they sometimes appear to be brought about by predators. Ecologists studying hare populations have found that the North American snowshoe hare follows a roughly ten-year cycle. Its numbers fall tenfold to thirtyfold in a typical cycle, and a hundredfold change can occur. Two factors appear to be generating the cycle: food plants and predators.

     The preferred foods of snowshoe hares are willow and birch twigs. As hare density increases, the quantity of these twigs decreases, forcing the hares to feed on low-quality, high-fiber food. Lower birth rates, low juvenile survivorship, and low growth rates follow, so there is a corresponding decline in hare abundance. Once the hare population has declined, it takes two to three years for the quantity of twigs to recover.

     A key predator of the snowshoe hare is the Canada lynx. The Canada lynx shows a ten-year cycle of abundance that parallels the abundance cycle of hares. As hare numbers increase, lynx numbers do too, rising in response to the increased availability of lynx food. When hare numbers fall, so do lynx numbers, as their food supply is depleted.

     What causes the predator-prey oscillations? Do increasing numbers of hares lead to overharvesting of plants, which in turn results in reduced hare populations, or do increasing numbers of lynx lead to overharvesting of hares? Field experiments carried out by Charles Krebs and coworkers in 1992 provide an answer. Krebs investigated experimental plots in Canada’s Yukon territory that contained hare populations. When food was added to these plots (no food effect) and predators were excluded (no predator effect) from an experimental area, hare numbers increased tenfold and stayed there—the cycle was lost. However, the cycle was retained if either of the factors was allowed to operate alone: if predators were excluded but food was not added (food effect alone), or if food was added in the presence of predators (predator effect alone). Thus, both factors can affect the cycle, which, in practice, seems to be generated by the conjunction of the two factors.

     Predators are an essential factor in maintaining communities that are rich and diverse in species. Without predators, the species that is the best competitor for food, shelter, nesting sites, and other environmental resources tends to dominate and exclude the species with which it competes.  This phenomenon is known as “competitor exclusion.”  However, if the community contains a predator of the strongest competitor species, then the population of that competitor is controlled.  Thus even the less competitive species are able to survive.  For example, sea stars prey on a variety of bivalve mollusks and prevent these bivalves from monopolizing habitats on the sea floor. This opens up space for many other organisms. When sea stars are removed, species diversity falls sharply. Therefore, from the standpoint of diversity, it is usually a mistake to eliminate a major predator from a community.

TPO 28

Groundwater

    Most of the world’s potable water – freshwater suitable for drinking – is accounted for by groundwater, which is stored in the pores and fractures in rocks. There is more than 50 times as much freshwater stored underground as in all the freshwater rivers and lakes at the surface. Nearly 50 percent of all groundwater is stored in the upper 1,000 meters of Earth. At greater depth within Earth, the pressure of the overlying rock causes pores and cracks to close, reducing the space that pore water can occupy, and almost complete closure occurs at a depth of about 10 kilometers. The greatest water storage, therefore, lies near the surface.

    Aquifers, Porosity, and Permeability

    Groundwater is stored in a variety of rock types. A groundwater reservoir from which water can be extracted is called an aquifer. We can effectively think of an aquifer as a deposit of water. Extraction of water depends on two properties of the aquifer: porosity and permeability. Between sediment grains are spaces that can be filled with water. This pore space is known as porosity and is expressed as a percentage of the total rock volume. Porosity is important for water-storage capacity, but for water to flow through rocks, the pore spaces must be connected. The ability of water, or other fluids, to flow through the interconnected pore spaces in rocks is termed permeability. Fractures and joints have very high permeability. In the intergranular spaces of rocks, however, fluid must flow around and between grains in a tortuous path; this winding path causes a resistance to flow. The rate at which the flowing water overcomes this resistance is related to the permeability of rock.

sediment: materials (such as sand or small rocks) that are deposited by water, wind, or glacial ice.

    Sediment sorting and compaction influence permeability and porosity. The more poorly sorted or the more tightly compacted a sediment is, the lower its porosity and permeability. Sedimentary rocks – the most common rock type near the surface – are also the most common reservoirs for water because they contain the most space that can be filled with water. Sandstones generally make good aquifers, while finer-grained mudstones are typically impermeable. Impermeable rocks are referred to as aquicludes. Igneous and metamorphic rocks are more compact, commonly crystalline, and rarely contain spaces between grains. However, even igneous and metamorphic rocks may act as groundwater reservoirs if extensive fracturing occurs in such rocks and if the fracture system is interconnected.

    The Water Table

    The water table is the underground boundary below which all the cracks and pores are filled with water. In some cases, the water table reaches Earth’s surface, where it is expressed as rivers, lakes, and marshes.  Typically, though, the water table may be tens or hundreds of meters below the surface.  The water table is not flat but usually follows the contours of the topography.  Above the water table is the vadose zone, through which rainwater percolates.  Water in the vadose zone drains down to the water table, leaving behind a thin coating of water on mineral grains. The vadose zone supplies plant roots near the surface with water.

topography: the shape of a surface such as Earth’s, including the rise and fall of such features as mountains and valleys.

    Because the surface of the water table is not flat but instead rises and falls with topography, groundwater is affected by gravity in the same fashion as surface water. Groundwater flows downhill to topographic lows. If the water table intersects the land surface, groundwater will flow out onto the surface at springs, either to be collected there or to subsequently flow farther along a drainage. Groundwater commonly collects in stream drainages but may remain entirely beneath the surface of dry stream-beds in arid regions. In particularly wet years, short stretches of an otherwise dry stream-bed may have flowing water because the water table rises to intersect the land surface.

 

Early Saharan Pastoralists

    The Sahara is a highly diverse, albeit dry, region that has undergone major climatic changes since 10,000 B.C. As recently as 6000 B.C., the southern frontier of the desert was far to the north of where it is now, while semiarid grassland and shallow freshwater lakes covered much of what are now arid plains. This was a landscape where antelope of all kinds abounded – along with Bos primigenius, a kind of ox that has become extinct. The areas that are now desert were, like all arid regions, very susceptible to cycles of higher and lower levels of rainfall, resulting in major, sudden changes in distributions of plants and animals. The people who hunted the sparse desert animals responded to drought by managing the wild resources they hunted and gathered, especially wild oxen, which had to have regular water supplies to survive.

    Even before the drought, the Sahara was never well watered. Both humans and animals were constantly on the move, in search of food and reliable water supplies. Under these circumstances, archaeologist Andrew Smith believes, the small herds of Bos primigenius in the desert became smaller, more closely knit breeding units as the drought took hold. The beasts were more disciplined, so that it was easier for hunters to predict their habits, and capture animals at will. At the same time, both cattle and humans were more confined in their movements, staying much closer to permanent water supplies for long periods of time. As a result, cattle and humans came into close association.

    Smith believes that the hunters were well aware of the more disciplined ways in which their prey behaved.  Instead of following the cattle on their annual migrations, the hunters began to prevent the herd from moving from one spot to another.  At first, they controlled the movement of the herd while ensuring continuance of their meat diet.  But soon they also gained genetic control of the animals, which led to rapid physical changes in the herd.  South African farmers who maintain herds of wild eland (large African antelopes with short, twisted horns) report that the offspring soon diminish in size, unless wild bulls are introduced constantly from outside. The same effects of inbreeding may have occurred in controlled cattle populations, with some additional, and perhaps unrecognized, advantages. The newly domesticated animals behaved better, were easier to control, and may have enjoyed a higher birth rate, which in turn yielded greater milk supplies. We know from rock paintings deep in the Sahara that the herders were soon selecting breeding animals to produce offspring with different horn shapes and hide colors.

    It is still unclear whether domesticated cattle were tamed independently in northern Africa or introduced to the continent from Southwest Asia. Whatever the source of the original tamed herds might have been, it seems entirely likely that much the same process of juxtaposition (living side by side) and control occurred in both Southwest Asia and northern Africa, and even in Europe, among peoples who had an intimate knowledge of the behavior of wild cattle. The experiments with domestication probably occurred in many places, as people living in ever-drier environments cast around for more predictable food supplies.

    The cattle herders had only a few possessions: unsophisticated pots and polished adzes. They also hunted with bow and arrow. The Saharan people left a remarkable record of their lives painted on the walls of caves deep in the desert. Their artistic endeavors have been preserved in paintings of wild animals, cattle, goats, humans, and scenes of daily life that extend back perhaps to 5000 B.C. The widespread distribution of pastoral sites of this period suggests that the Saharans ranged their herds over widely separated summer and winter grazing grounds.

adzes: cutting tools with blades set at right angles to the handle

    About 3500 B.C., climatic conditions again deteriorated. The Sahara slowly became drier and lakes vanished. On the other hand, rainfall increased in the interior of western Africa, and the northern limit of the tsetse fly, an insect fatal to cattle, moved south. So the herders shifted south, following the major river systems into savanna regions. By this time, the Saharan people were probably using domestic crops, experimenting with such summer rainfall crops as sorghum and millet as they moved out of areas where they could grow wheat, barley, and other Mediterranean crops.

Buck Rubs and Buck Scrapes

    A conspicuous sign indicating the presence of white-tailed deer in a woodlot is a buck rub.  A male deer makes a buck rub by stripping the bark (outer layer) of a small tree with its antlers.  When completed, the buck rub is an obvious visual signal to us and presumably to other deer in the area.  A rub is usually located at the shoulder height of a deer (one meter or less above the ground) on a smooth-barked, small-diameter (16 – 25 millimeters) tree.  The smooth bark of small red maples makes this species ideal for buck rubs in the forests of the mid-eastern United States.

    Adult male deer usually produce rubs in late summer or early autumn when the outer velvet layer is being shed from their antlers. Rubs are created about one to two months before the breeding season (the rut). Hence for a long time biologists believed that male deer used buck rubs not only to clean and polish antlers but also to provide practice for the ensuing male-to-male combat during the rut. However, biologists also noted that deer sniff and lick an unfamiliar rub, which suggests that this visual mark on a small tree plays an important communication role in the social life of deer.

    Buck rubs also have a scent produced by glands in the foreheads of deer that is transferred to the tree when the rub is made. These odors make buck rubs an important means of olfactory communication between deer. The importance of olfactory communication (using odors to communicate) in the way of life of deer was documented by a study of captive adult mule deer a few decades ago, which noted that males rubbed their foreheads on branches and twigs, especially as autumn approached. A decade later another study reported that adult male white-tailed deer exhibited forehead rubbing just before and during the rut. It was found that when a white-tailed buck makes a rub, it moves both antlers and forehead glands along the small tree in a vertical direction. This forehead rubbing behavior coincides with a high level of glandular activity in the modified scent glands found on the foreheads of male deer; the glandular activity causes the forehead pelage (hairy covering) of adult males to be distinctly darker than in females or younger males.

    Forehead rubbing by male deer on buck rubs presumably sends a great deal of information to other members of the same species. First, the chemicals deposited on the rub provide information on the individual identity of an animal; no two mammals produce the same scent. For instance, as we all know, dogs recognize each other via smell. Second, because only male deer rub, the buck rub and its associated chemicals indicate the sex of the deer producing the rub. Third, older, more dominant bucks produce more buck rubs and probably deposit more glandular secretions on a given rub. Thus, the presence of many well-marked rubs is indicative of older, higher-status males being in the general vicinity rather than simply being a measure of relative deer abundance in a given area. The information conveyed by the olfactory signals on a buck rub makes it the social equivalent of some auditory signals in other deer species, such as trumpeting by bull elk.

    Because both sexes of whitetails respond to buck rubs by smelling and licking them, rubs may serve a very important additional function. Fresher buck rubs (less than two days old), in particular, are visited more frequently by adult females than older rubs. In view of this behavior it has been suggested that chemicals present in fresh buck rubs may help physiologically induce and synchronize fertility in females that visit these rubs. This would be an obvious advantage to wide-ranging deer, especially to a socially dominant buck when courting several adult females during the autumn rut.

    Another visual signal produced by white-tailed deer is termed a buck scrape. Scrapes consist of a clearing (about 0.5 meter in diameter) and a shallow depression made by pushing aside the leaves covering the ground; after making the scrape, the deer typically urinates in the depression. Thus, like a buck rub, a scrape is both a visual and an olfactory signal. Buck scrapes are generally created after leaf-fall in autumn, which is just before or during the rut. Scrapes are usually placed in open or conspicuous places, such as along a deer trail. Most are made by older males, although females and younger males (2.5 years old or less) occasionally make scrapes.

TPO 29

Characteristics of Roman Pottery

    The pottery of ancient Romans is remarkable in several ways. The high quality of Roman pottery is very easy to appreciate when handling actual pieces of tableware or indeed kitchenware and amphorae (the large jars used throughout the Mediterranean for the transport and storage of liquids, such as wine and oil). However, it is impossible to do justice to Roman wares on the page, even when words can be backed up by photographs and drawings. Most Roman pottery is light and smooth to the touch and very tough, although, like all pottery, it shatters if dropped on a hard surface. It is generally made with carefully selected and purified clay, worked to thin-walled and standardized shapes on a fast wheel and fired in a kiln (pottery oven) capable of ensuring a consistent finish. With handmade pottery, inevitably there are slight differences between individual vessels of the same design and occasional minor blemishes (flaws). But what strikes the eye and the touch most immediately and most powerfully with Roman pottery is its consistent high quality.

    This is not just an aesthetic consideration but also a practical one. These vessels are solid (brittle, but not fragile), they are pleasant and easy to handle (being light and smooth), and, with their hard and sometimes glossy (smooth and shiny) surfaces, they hold liquids well and are easy to wash. Furthermore, their regular and standardized shapes would have made them simple to stack and store. When people today are shown a very ordinary Roman pot and, in particular, are allowed to handle it, they often comment on how modern it looks and feels, and they need to be convinced of its true age.

    As impressive as the quality of Roman pottery is its sheer massive quantity. When considering quantities, we would ideally like to have some estimates for overall production from particular sites of pottery manufacture and for overall consumption at specific settlements. Unfortunately, it is in the nature of the archaeological evidence, which is almost invariably only a sample of what once existed, that such figures will always be elusive. However, no one who has ever worked in the field would question the abundance of Roman pottery, particularly in the Mediterranean region. This abundance is notable in Roman settlements (especially urban sites) where the labor that archaeologists have to put into the washing and sorting of potsherds (fragments of pottery) constitutes a high proportion of the total work during the initial phases of excavation.

     Only rarely can we derive any “real” quantities from deposits of broken pots.  However, there is one exceptional dump, which does represent a very large part of the site’s total history of consumption and for which an estimate of quantity has been produced.  On the left bank of the Tiber River in Rome, by one of the river ports of the ancient city, is a substantial hill some 50 meters high called Monte Testaccio.  It is made up entirely of broken oil amphorae, mainly of the second and third centuries A.D. It has been estimated that Monte Testaccio contains the remains of some 53 million amphorae, in which around 6,000 million liters of oil were imported into the city from overseas. Imports into imperial Rome were supported by the full might of the state and were therefore quite exceptional – but the size of the operations at Monte Testaccio, and the productivity and complexity that lay behind them, nonetheless cannot fail to impress. This was a society with similarities to modern ones – moving goods on a gigantic scale, manufacturing high-quality containers to do so, and occasionally, as here, even discarding them on delivery.

    Roman pottery was transported not only in large quantities but also over substantial distances. Many Roman pots, in particular amphorae and the fine wares designed for use at tables, could travel hundreds of miles – all over the Mediterranean and also further afield. But maps that show the various spots where Roman pottery of a particular type has been found tell only part of the story. What is more significant than any geographical spread is the access that different levels of society had to good-quality products. In all but the remotest regions of the empire, Roman pottery of a high standard is common at the sites of humble villages and isolated farmsteads.

Competition

    When several individuals of the same species or of several different species depend on the same limited resource, a situation may arise that is referred to as competition. The existence of competition has long been known to naturalists; its effects were described by Darwin in considerable detail. Competition among individuals of the same species (intraspecies competition), one of the major mechanisms of natural selection, is the concern of evolutionary biology. Competition among the individuals of different species (interspecies competition) is a major concern of ecology. It is one of the factors controlling the size of competing populations, and in extreme cases it may lead to the extinction of one of the competing species. This was described by Darwin for indigenous New Zealand species of animals and plants, which died out when competing species from Europe were introduced.

    No serious competition exists when the major needed resource is in superabundant supply, as in most cases of the coexistence of herbivores (plant eaters). Furthermore, most species do not depend entirely on a single resource. If the major resource for a species becomes scarce, the species can usually shift to alternative resources. If more than one species is competing for a scarce resource, the competing species usually switch to different alternative resources. Competition is usually most severe among close relatives with similar demands on the environment. But it may also occur among totally unrelated forms that compete for the same resource, such as seed-eating rodents and ants. The effects of such competition are graphically demonstrated when all the animals or all the plants in an ecosystem come into competition, as happened 2 million years ago at the end of the Pliocene, when North and South America became joined by the Isthmus of Panama. North and South American species migrating across the Isthmus now came into competition with each other. The result was the extermination of a large fraction of the South American mammals, which were apparently unable to withstand the competition from invading North American species – although added predation was also an important factor.

    To what extent competition determines the composition of a community and the density of particular species has been the source of considerable controversy. The problem is that competition ordinarily cannot be observed directly but must be inferred from the spread or increase of one species and the concurrent reduction or disappearance of another species. The Russian biologist G. F. Gause performed numerous two-species experiments in the laboratory, in which one of the species became extinct when only a single kind of resource was available. On the basis of these experiments and of field observations, the so-called law of competitive exclusion was formulated, according to which no two species can occupy the same niche. Numerous seeming exceptions to this law have since been found, but they can usually be explained as cases in which the two species, even though competing for a major joint resource, did not really occupy exactly the same niche.

    Competition among species is of considerable evolutionary importance. The physical structure of species competing for resources in the same ecological niche tends to gradually evolve in ways that allow them to occupy different niches. Competing species also tend to change their range so that their territories no longer overlap. The evolutionary effect of competition on species has been referred to as “species selection;” however, this description is potentially misleading. Only the individuals of a species are subject to the pressures of natural selection. The effect on the well-being and existence of a species is just the result of the effects of selection on all the individuals of the species. Thus species selection is actually a result of individual selection.

    Competition may occur for any needed resource.  In the case of animals it is usually food; in the case of forest plants it may be light; in the case of substrate inhabitants it may be space, as in many shallow-water bottom-dwelling marine organisms.  Indeed, it may be for any of the factors, physical as well as biotic, that are essential for organisms.  Competition is usually the more severe the denser the population.  Together with predation, it is the most important density-dependent factor in regulating population growth.

The History of Waterpower

    Moving water was one of the earliest energy sources to be harnessed to reduce the workload of people and animals. No one knows exactly when the waterwheel was invented, but irrigation systems existed at least 5,000 years ago, and it seems probable that the earliest waterpower device was the noria, a waterwheel that raised water for irrigation in attached jars. This device appears to have evolved no later than the fifth century B.C., perhaps independently in different regions of the Middle and Far East.

    The earliest waterpower mills were probably vertical-axis mills for grinding corn, known as Norse or Greek mills, which seem to have appeared during the first or second century B.C. in the Middle East and a few centuries later in Scandinavia. In the following centuries, increasingly sophisticated water power mills were built throughout the Roman Empire and beyond its boundaries in the Middle East and northern Europe. In England, the Saxons are thought to have used both horizontal- and vertical-axis wheels. The first documented English mill was in the eighth century, but three centuries later about 5,000 were recorded, suggesting that every settlement of any size had its mill.

    Raising water and grinding corn were by no means the only uses of waterpower mills, and during the following centuries, the applications of waterpower kept pace with the developing technologies of mining, iron working, paper making, and the wool and cotton industries. Water was the main source of mechanical power, and by the end of the seventeenth century, England alone is thought to have had some 20,000 working mills.

    There was much debate on the relative efficiencies of different types of waterwheels.  The period from about 1650 until 1800 saw some excellent scientific and technical investigations of different designs.  They revealed output powers ranging from about 1 horsepower to perhaps 60 for the largest wheels and confirmed that for maximum efficiency, the water should pass across the blades as smoothly as possible and fall away with minimum speed, having given up almost all of its kinetic energy.  (They also proved that, in principle, the overshot wheel, a type of wheel in which an overhead stream of water powers the wheel, should win the efficiency competition.) 

    But then steam power entered the scene, putting the whole future of waterpower in doubt. An energy analyst writing in the year 1800 would have painted a very pessimistic picture of the future for waterpower. The coal-fired steam engine was taking over, and the waterwheel was fast becoming obsolete. However, like many later experts, this one would have suffered from an inability to see into the future. A century later the picture was completely different: by then, the world had an electric industry, and a quarter of its generating capacity was water powered.

    The growth of the electric-power industry was the result of a remarkable series of scientific discoveries and developments in electrotechnology during the nineteenth century, but significant changes in what we might now call hydro (water) technology also played their part. In 1832, the year of Michael Faraday’s discovery that a changing magnetic field produces an electric field, a young French engineer patented a new and more efficient waterwheel. His name was Benoit Fourneyron, and his device was the first successful water turbine. (The word turbine comes from the Latin turbo: something that spins). The waterwheel, unaltered for nearly 2,000 years, had finally been superseded.

    Half a century of development was needed before Faraday’s discoveries in electricity were translated into full-scale power stations. In 1881 the Godalming power station in Surrey, England, on the banks of the Wey River, created the world’s first public electricity supply. The power source of this most modern technology was a traditional waterwheel. Unfortunately this early plant experienced the problem common to many forms of renewable energy: the flow in the Wey River was unreliable, and the waterwheel was soon replaced by a steam engine.

    From this primitive start, the electric industry grew during the final 20 years of the nineteenth century at a rate seldom if ever exceeded by any technology. The capacity of individual power stations, many of them hydro plants, rose from a few kilowatts to over a megawatt in less than a decade.

TPO30

Role of Play in Development

     Play is easier to define with examples than with concepts. In any case, in animals it consists of leaping, running, climbing, throwing, wrestling, and other movements, either alone, with objects, or with other animals. Depending on the species, play may be primarily for social interaction, exercise, or exploration. One of the problems in providing a clear definition of play is that it involves the same behaviors that take place in other circumstances – dominance, predation, competition, and real fighting. Thus, whether play occurs or not depends on the intention of the animal, and intentions are not always clear from behavior alone.

     Play appears to be a developmental characteristic of animals with fairly sophisticated nervous systems, mainly birds and mammals. Play has been studied most extensively in primates and canids (dogs). Exactly why animals play is still a matter debated in the research literature, and the reasons may not be the same for every species that plays. Determining the functions of play is difficult because the functions may be long-term, with beneficial effects not showing up until the animal’s adulthood.

     Play is not without considerable costs to the individual animal. Play is usually very active, involving movement in space and, at times, noisemaking. Therefore, it results in the loss of fuel or energy that might better be used for growth or for building up fat stores in a young animal. Another potential cost of this activity is greater exposure to predators since play is attention-getting behavior. Greater activity also increases the risk of injury in slipping or falling.

     The benefits of play must outweigh the costs, or play would not have evolved, according to Darwin’s theory. Some of the potential benefits relate directly to the healthy development of the brain and nervous system. In one research study, two groups of young rats were raised under different conditions. One group developed in an “enriched” environment, which allowed the rats to interact with other rats, play with toys, and receive maze training. The other group lived in an “impoverished” environment in individual cages in a dimly lit room with little stimulation. At the end of the experiments, the results showed that the actual weight of the brains of the impoverished rats was less than that of those raised in the enriched environment (though they were fed the same diets). Other studies have shown that greater stimulation not only affects the size of the brain but also increases the number of connections between the nerve cells. Thus, active play may provide necessary stimulation to the growth of synaptic connections in the brain, especially the cerebellum, which is responsible for motor functioning and movements.

     Play also stimulates the development of the muscle tissues themselves and may provide the opportunity to practice those movements needed for survival. Prey species, like young deer or goats, for example, typically play by performing sudden flight movements and turns, whereas predator species, such as cats, practice stalking, pouncing, and biting.

     Play allows a young animal to explore its environment and practice skills in comparative safety since the surrounding adults generally do not expect the young to deal with threats or predators. Play can also provide practice in social behaviors needed for courtship and mating. Learning appropriate social behaviors is especially important in species that live in groups, like young monkeys that need to learn to control selfishness and aggression and to understand the give-and-take involved in social groups. They need to learn how to be dominant and submissive because each monkey might have to play either role in the future. Most of these things are learned in the long developmental periods that primates have, during which they engage in countless play experiences with their peers.

     There is a danger, of course, that play may be misinterpreted or not recognized as play by others, potentially leading to aggression.  This is especially true when play consists of practicing normal aggressive or predatory behaviors.  Thus, many species have evolved clear signals to delineate playfulness.  Dogs, for example, will wag their tails, get down on their front legs, and stick their behinds in the air to indicate “what follows is just for play.” 

The pace of Evolutionary Change

     A heated debate has enlivened recent studies of evolution. Darwin’s original thesis, and the viewpoint supported by evolutionary gradualists, is that species change continuously but slowly and in small increments. Such changes are all but invisible over the short time scale of modern observations, and, it is argued, they are usually obscured by innumerable gaps in the imperfect fossil record. Gradualism, with its stress on the slow pace of change, is a comforting position, repeated over and over again in generations of textbooks. By the early twentieth century, the question about the rate of evolution had been answered in favor of gradualism to most biologists’ satisfaction.

     Sometimes a closed question must be reopened as new evidence or new arguments based on old evidence come to light. In 1972 paleontologists Stephen Jay Gould and Niles Eldredge challenged conventional wisdom with an opposing viewpoint, the punctuated equilibrium hypothesis, which posits that species give rise to new species in relatively sudden bursts, without a lengthy transition period. These episodes of rapid evolution are separated by relatively long static spans during which a species may hardly change at all.

     The punctuated equilibrium hypothesis attempts to explain a curious feature of the fossil record – one that has been familiar to paleontologists for more than a century but has usually been ignored. Many species appear to remain unchanged in the fossil record for millions of years – a situation that seems to be at odds with Darwin’s model of continuous change. Intermediate fossil forms, predicted by gradualism, are typically lacking. In most localities a given species of clam or coral persists essentially unchanged throughout a thick formation of rock, only to be replaced suddenly by a new and different species.

     The evolution of North American horses, which was once presented as a classic textbook example of gradual evolution, is now providing equally compelling evidence for punctuated equilibrium. A convincing 50-million-year sequence of modern horse ancestors – each slightly larger, with more complex teeth, a longer face, and a more prominent central toe – seemed to provide strong support for Darwin’s contention that species evolve gradually. But close examination of those fossil deposits now reveals a somewhat different story. Horses evolved in discrete steps, each of which persisted almost unchanged for millions of years and was eventually replaced by a distinctive newer model. The four-toed Eohippus preceded the three-toed Miohippus, for example, but North American fossil evidence suggests a jerky, uneven transition between the two. If evolution had been a continuous, gradual process, one might expect that almost every fossil specimen would be slightly different from every other.

     If it seems difficult to conceive how major changes could occur rapidly, consider this: an alteration of a single gene in flies is enough to turn a normal fly with a single pair of wings into one that has two pairs of wings.

     The question about the rate of evolution must now be turned around: does evolution ever proceed gradually, or does it always occur in short bursts? Detailed field studies of thick rock formations containing fossils provide the best potential tests of the competing theories.

     Occasionally, a sequence of fossil-rich layers of rock permits a comprehensive look at one type of organism over a long period of time. For example, Peter Sheldon’s studies of trilobites, a now extinct marine animal with a segmented body, offer a detailed glimpse into three million years of evolution in one marine environment. In that study, each of eight different trilobite species was observed to undergo a gradual change in the number of segments – typically an increase of one or two segments over the whole time interval. No significant discontinuities were observed, leading Sheldon to conclude that environmental conditions were quite stable during the period he examined.

      Similar exhaustive studies are required for many different kinds of organisms from many different periods.  Most researchers expect to find that both modes of transition from one species to another are at work in evolution. Slow, continuous change may be the norm during periods of environmental stability, while rapid evolution of new species occurs during periods of environmental stress.  But a lot more studies like Sheldon’s are needed before we can say for sure.

The Invention of the Mechanical Clock

     In Europe, before the introduction of the mechanical clock, people told time by the sun (using, for example, shadow sticks or sun dials) and by water clocks. Sun clocks worked, of course, only on clear days; water clocks misbehaved when the temperature fell toward freezing, to say nothing of long-run drift as the result of sedimentation and clogging. Both these devices worked well in sunny climates, but in northern Europe the sun may be hidden by clouds for weeks at a time, while temperatures vary not only seasonally but from day to night.

     Medieval Europe gave new importance to reliable time. The Catholic Church had its seven daily prayers, one of which was at night, requiring an alarm arrangement to waken monks before dawn. And then the new cities and towns, squeezed by their walls, had to know and order time in order to organize collective activity and ration space. They set a time to go to work, to open the market, to close the market, to leave work, and finally a time to put out fires and to go to sleep. All this was compatible with older devices so long as there was only one authoritative timekeeper, but with urban growth and the multiplication of time signals, discrepancy brought discord and strife. Society needed a more dependable instrument of time measurement and found it in the mechanical clock.

     We do not know who invented this machine, or where. It seems to have appeared in Italy and England (perhaps simultaneous invention) between 1275 and 1300. Once known, it spread rapidly, driving out water clocks but not solar dials, which were needed to check the new machines against the timekeeper of last resort. These early versions were rudimentary, inaccurate, and prone to breakdown.

     Ironically, the new machine tended to undermine Catholic Church authority. Although church ritual had sustained an interest in timekeeping throughout the centuries of urban collapse that followed the fall of Rome, church time was nature’s time.  Day and night were divided into the same number of parts, so that except at the equinoxes, day and night hours were unequal, and then of course the length of these hours varied with the seasons. But the mechanical clock kept equal hours, and this implied a new time reckoning.  The Catholic Church resisted, not coming over to the new hours for about a century.  From the start, however, the towns and cities took equal hours as their standard, and the public clocks installed in town halls and market squares became the very symbol of a new, secular municipal authority. Every town wanted one; conquerors seized them as especially precious spoils of war; tourists came to see and hear these machines the way they made pilgrimages to sacred relics.

     The clock was the greatest achievement of medieval mechanical ingenuity. Its general accuracy could be checked against easily observed phenomena, like the rising and setting of the sun. The result was relentless pressure to improve technique and design. At every stage, clockmakers led the way to accuracy and precision; they became masters of miniaturization, detectors and correctors of error, searchers for the new and better. They were thus the pioneers of mechanical engineering and served as examples and teachers to other branches of engineering.

     The clock brought order and control, both collective and personal. Its public display and private possession laid the basis for temporal autonomy: people could now coordinate comings and goings without dictation from above. The clock provided the punctuation marks for group activity, while enabling individuals to order their own work (and that of others) so as to enhance productivity. Indeed, the very notion of productivity is a by-product of the clock: once one can relate performance to uniform time units, work is never the same. One moves from the task-oriented time consciousness of the peasant (working one job after another, as time and light permit) and the time-filling busyness of the domestic servant (who always had something to do) to an effort to maximize product per unit of time.

TPO 31

Speciation in Geographically Isolated Populations

Evolutionary biologists believe that speciation, the formation of a new species, often begins when some kind of physical barrier arises and divides a population of a single species into separate subpopulations. Physical separation between subpopulations promotes the formation of new species because once the members of one subpopulation can no longer mate with members of another subpopulation, they cannot exchange variant genes that arise in one of the subpopulations. In the absence of gene flow between the subpopulations, genetic differences between the groups begin to accumulate. Eventually the subpopulations become so genetically distinct that they cannot interbreed even if the physical barriers between them are removed. At this point the subpopulations have evolved into distinct species. This route to speciation is known as allopatry (“allo-” means “different”, and “patria” means “homeland”).

Allopatric speciation may be the main speciation route. This should not be surprising, since allopatry is pretty common. In general, the subpopulations of most species are separated from each other by some measurable distance. So even under normal situations the gene flow among the subpopulations is more of an intermittent trickle than a steady stream. In addition, barriers can rapidly arise and shut off the trickle. For example, in the 1800s a monstrous earthquake changed the course of the Mississippi River, a large river flowing in the central part of the United States of America. The change separated populations of insects now living along opposite shores, completely cutting off gene flow between them.

Geographic isolation can also proceed slowly, over great spans of time. We find evidence of such extended events in the fossil record, which affords glimpses into the breakup of formerly continuous environments. For example, during past ice ages, glaciers advanced down through North America and Europe and gradually cut off parts of populations from one another. When the glaciers retreated, the separated populations of plants and animals came into contact again. Some groups that had descended from the same parent population were no longer reproductively compatible – they had evolved into separate species. In other groups, however, genetic divergences had not proceeded so far, and the descendants could still interbreed – for them, reproductive isolation was not completed, and so speciation had not occurred.

Allopatric speciation can also be brought about by the imperceptibly slow but colossal movements of the tectonic plates that make up Earth’s surface. ■ About 5 million years ago such geologic movements created the land bridge between North America and South America that we call the Isthmus of Panama. ■ While previously the gap between the continents had allowed a free flow of water, now the isthmus presented a barrier that divided the Atlantic Ocean from the Pacific Ocean. ■ This division set the stage for allopatric speciation among populations of fishes and other marine species. ■

In the 1980s, John Graves studied two populations of closely related fishes, one population from the Atlantic side of the isthmus, the other from the Pacific side. He compared four enzymes found in the muscles of each population. Graves found that all four Pacific enzymes function better at lower temperatures than the four Atlantic versions of the same enzymes. This is significant because Pacific seawater is typically 2 to 3 degrees cooler than seawater on the Atlantic side of the isthmus. Analysis by gel electrophoresis revealed slight differences in the amino acid sequences of the enzymes of two of the four pairs. This is significant because the amino acid sequence of an enzyme is determined by genes.

Graves drew two conclusions from these observations. First, at least some of the observed differences between the enzymes of the Atlantic and Pacific fish populations were not random but were the result of evolutionary adaptation. Second, it appears that closely related populations of fishes on both sides of the isthmus are starting to diverge genetically from each other. Because Graves’ study of geographically isolated populations of isthmus fishes offers a glimpse of the beginning of a process of gradual accumulation of mutations that are neutral or adaptive, divergences here might be evidence of allopatric speciation in progress.

Early Childhood Education

Preschools – educational programs for children under the age of five – differ significantly from one country to another according to the views that different societies hold regarding the purpose of early childhood education. For instance, in a cross-country comparison of preschools in China, Japan, and the United States, researchers found that parents in the three countries view the purpose of preschools very differently. Whereas parents in China tend to see preschools primarily as a way of giving children a good start academically, Japanese parents view them primarily as a way of giving children the opportunity to be members of a group. In the United States, in comparison, parents regard the primary purpose of preschools as making children more independent and self-reliant, although obtaining a good academic start and having group experience are also important.

While many programs designed for preschoolers focus primarily on social and emotional factors, some are geared mainly toward promoting cognitive gains and preparing preschoolers for the formal instruction they will experience when they start kindergarten. In the United States, the best-known program designed to promote future academic success is Head Start. Established in the 1960s when the United States declared the War on Poverty, the program has served over 13 million children and their families. The program, which stresses parental involvement, was designed to serve the “whole child”, including children’s physical health, self-confidence, social responsibility, and social and emotional development.

Whether Head Start is seen as successful or not depends on the lens through which one is looking. If, for instance, the program is expected to provide long-term increases in IQ (intelligence quotient) scores, it is a disappointment. Although graduates of Head Start programs tend to show immediate IQ gains, these increases do not last. On the other hand, it is clear that Head Start is meeting its goal of getting preschoolers ready for school. Preschoolers who participate in Head Start are better prepared for future schooling than those who do not. Furthermore, graduates of Head Start programs have better future school grades. Finally, some research suggests that ultimately Head Start graduates show higher academic performance at the end of high school, although the gains are modest.

In addition, results from other types of preschool readiness programs indicate that those who participate and graduate are less likely to repeat grades and more likely to complete school than readiness program nonparticipants. In one such program, for every dollar spent on the program, taxpayers saved seven dollars by the time the graduates reached the age of 27.

The most recent comprehensive evaluation of early intervention programs suggests that, taken as a group, preschool programs can provide significant benefits, and that government funds invested early in life may ultimately lead to a reduction in future costs. For instance, compared with children who did not participate in early intervention programs, participants in various programs showed gains in emotional or cognitive development, better educational outcomes, increased economic self-sufficiency, reduced levels of criminal activity, and improved health-related behaviors. Of course, not every program produced all these benefits, and not every child benefited to the same extent. Furthermore, some researchers argue that less-expensive programs are just as good as relatively expensive ones, such as Head Start. Still, the results of the evaluation were promising, suggesting that the potential benefits of early intervention can be substantial.

Not everyone agrees that programs that seek to enhance academic skills during the preschool years are a good thing. In fact, according to developmental psychologist David Elkind, United States society tends to push children so rapidly that they begin to feel stress and pressure at a young age. Elkind argues that academic success is largely dependent upon factors out of parents’ control, such as inherited abilities and a child’s rate of maturation. Consequently, children of a particular age cannot be expected to master educational material without taking into account their current level of cognitive development. In short, children require developmentally appropriate educational practice, which is education that is based on both typical development and the unique characteristics of a given child.

Savanna Formation

Located in tropical areas at low altitudes, savannas are stable ecosystems, some wet and some dry, consisting of vast grasslands with scattered trees and shrubs. They occur on a wide range of soil types and in extremes of climate. There is no simple or single factor that determines if a given site will be a savanna, but some factors seem to play important roles in their formation.

Savannas typically experience a rather prolonged dry season. One theory behind savanna formation is that wet forest species are unable to withstand the dry season, and thus savanna, rather than rain forest, is favored on the site. Savannas experience an annual rainfall of between 1,000 and 2,000 millimeters, most of it falling in a five- to eight-month wet season. Though plenty of rain may fall on a savanna during the year, for at least part of the year little does, creating the drought stress ultimately favoring grasses. Such conditions prevail throughout much of northern South America and Cuba, but many Central American savannas as well as coastal areas of Brazil and the island of Trinidad do not fit this pattern. In these areas, rainfall per month exceeds that in the above definition, so other factors must contribute to savanna formation.

In many characteristics, savanna soils are similar to those of some rain forests, though more extreme. For example, savanna soils, like many rain forest soils, are typically oxisols (dominated by certain oxide minerals) and ultisols (soils containing no calcium carbonate), with a high acidity and notably low concentrations of such minerals as phosphorus, calcium, magnesium, and potassium, while aluminum levels are high. Some savannas occur on wet, waterlogged soils; others on dry, sandy, well-drained soils. This may seem contradictory, but it only means that extreme soil conditions, either too wet or too dry for forests, are satisfactory for savannas. More moderate conditions support moist forests.

Waterlogged soils occur in areas that are flat or have poor drainage. These soils usually contain large amounts of clay and easily become water saturated. Air cannot penetrate between the soil particles, making the soil oxygen-poor. By contrast, dry soils are sandy and porous, their coarse textures permitting water to drain rapidly. Sandy soils are prone to the leaching of nutrients and minerals and so tend to be nutritionally poor. Though most savannas are found on sites with poor soils (because of either moisture conditions or nutrient levels or both), poor soils can and do support lush rain forest.

Most savannas probably experience mild fires frequently and major burns every two years or so. Many savanna and dry-forest plant species are called pyrophytes, meaning they are adapted in various ways to withstand occasional burning. Frequent fire is a factor to which rain forest species seem unable to adapt, although ancient charcoal remains from Amazon forest soils dating prior to the arrival of humans suggest that moist forests also occasionally burn. Experiments suggest that if fire did not occur in savannas in the Americas, species composition would change significantly. When burning occurs, it prevents competition among plant species from progressing to the point where some species exclude others, reducing the overall diversity of the ecosystem. But in experimental areas protected from fire, a few perennial grass species eventually come to dominate, outcompeting all others. ■ Evidence from other studies suggests that exclusion of fire results in markedly decreased plant-species richness, often with an increase in tree density. ■ There is generally little doubt that fire is a significant factor in maintaining savanna, certainly in most regions.

■ On certain sites, particularly in South America, savanna formation seems related to frequent cutting and burning of moist forests for pastureland. ■ Increase in pastureland and subsequent overgrazing have resulted in an expansion of savanna. The thin upper layer of humus (decayed organic matter) is destroyed by cutting and burning. Humus is necessary for rapid decomposition of leaves by bacteria and fungi and for recycling by surface roots. Once the humus layer disappears, nutrients cannot be recycled and leach from the soil, converting soil from fertile to infertile and making it suitable only for savanna vegetation. Forests on white, sandy soil are most susceptible to permanent alteration.

TPO 32

Plant Colonization

Colonization is one way in which plants can change the ecology of a site. Colonization is a process with two components: invasion and survival. The rate at which a site is colonized by plants depends on both the rate at which individual organisms (seeds, spores, immature or mature individuals) arrive at the site and their success at becoming established and surviving. Success in colonization depends to a great extent on there being a site available for colonization – a safe site where disturbance by fire or by cutting down of trees has either removed competing species or reduced levels of competition and other negative interactions to a level at which the invading species can become established. For a given rate of invasion, colonization of a moist, fertile site is likely to be much more rapid than that of a dry, infertile site because of poor survival on the latter. A fertile, plowed field is rapidly invaded by a large variety of weeds, whereas a neighboring construction site from which the soil has been compacted or removed to expose a coarse, infertile parent material may remain virtually free of vegetation for many months or even years despite receiving the same input of seeds as the plowed field.

Both the rate of invasion and the rate of extinction vary greatly among different plant species. Pioneer species – those that occur only in the earliest stages of colonization – tend to have high rates of invasion because they produce very large numbers of reproductive propagules (seeds, spores, and so on) and because they have an efficient means of dispersal (normally, wind).

If colonizers produce short-lived reproductive propagules, then they must produce very large numbers unless they have an efficient means of dispersal to suitable new habitats. Many plants depend on wind for dispersal and produce abundant quantities of small, relatively short-lived seeds to compensate for the fact that wind is not always a reliable means of reaching the appropriate type of habitat. Alternative strategies have evolved in some plants, such as those that produce fewer but larger seeds that are dispersed to suitable sites by birds or small mammals or those that produce long-lived seeds. Many forest plants seem to exhibit the latter adaptation, and viable seeds of pioneer species can be found in large numbers on some forest floors. For example, as many as 1,125 viable seeds per square meter were found in a 100-year-old Douglas fir/western hemlock forest in coastal British Columbia. Nearly all the seeds that had germinated from this seed bank were from pioneer species. The rapid colonization of such sites after disturbance is undoubtedly in part a reflection of the large seed bank on the forest floor.

An adaptation that is well developed in colonizing species is a high degree of variation in germination (the beginning of a seed’s growth). Seeds of a given species exhibit a wide range of germination dates, increasing the probability that at least some of the seeds will germinate during a period of favorable environmental conditions. This is particularly important for species that colonize an environment where there is no existing vegetation to ameliorate climatic extremes and in which there may be great climatic diversity.

Species succession in plant communities, i.e., the temporal sequence of appearance and disappearance of species, is dependent on events occurring at different stages in the life history of a species. ■ Variation in rates of invasion and growth plays an important role in determining patterns of succession, especially secondary succession. ■ The species that are first to colonize a site are those that produce abundant seed that is distributed successfully to new sites. ■ Such species generally grow rapidly and quickly dominate new sites, excluding other species with lower invasion and growth rates. The first community that occupies a disturbed area therefore may be composed of species with the highest rate of invasion, whereas the community of the subsequent stage may consist of plants with similar survival rates but lower invasion rates. ■

Siam, 1851 – 1910

In the late nineteenth century, political and social changes were occurring rapidly in Siam (now Thailand). The old ruling families were being displaced by an evolving centralized government. These families were pensioned off (given a sum of money to live on) or simply had their revenues taken away or restricted; their sons were enticed away to schools for district officers, later to be posted in some faraway province; and the old patron-client relations that had bound together local societies simply disintegrated. Local rulers could no longer protect their relatives and attendants in legal cases, and with the ending in 1905 of the practice of forcing peasant farmers to work part-time for local rulers, the rulers no longer had a regular base for relations with rural populations. The old local ruling families, then, were severed from their traditional social context.

The same situation viewed from the perspective of the rural population is even more complex. According to the government’s first census of the rural population, taken in 1905, there were about thirty thousand villages in Siam. This was probably a large increase over the figure even two or three decades earlier, during the late 1800s. It is difficult to imagine it now, but Siam’s Central Plain in the late 1800s was nowhere near as densely settled as it is today. There were still forests closely surrounding Bangkok into the last half of the nineteenth century, and even at century’s end there were wild elephants and tigers roaming the countryside only twenty or thirty miles away.

Much population movement involved the opening up of new lands for rice cultivation. Two things made this possible and encouraged it to happen. First, the opening of the kingdom to the full force of international trade by the Bowring Treaty (1855) rapidly encouraged economic specialization in the growing of rice, mainly to feed the rice-deficient portions of Asia (India and China in particular). The average annual volume of rice exported from Siam grew from under 60 million kilograms per year in the late 1850s to more than 660 million kilograms per year at the turn of the century; and over the same period the average price per kilogram doubled. During the same period, the area planted in rice increased from about 230,000 acres to more than 350,000 acres. This growth was achieved as the result of the collective decisions of thousands of peasant families to expand the amount of land they cultivated, clear and plant new land, or adopt more intensive methods of agriculture.

■ They were able to do so because of our second consideration. ■ They were relatively freer than they had been half a century earlier. ■ Over the course of the Fifth Reign (1868 – 1910), the ties that bound rural people to the aristocracy and local ruling elites were greatly reduced. Peasants now paid a tax on individuals instead of being required to render labor service to the government. ■ Under these conditions, it made good sense to thousands of peasant families to in effect work full-time at what they had been able to do only part-time previously because of the requirement to work for the government: grow rice for the marketplace.

Numerous changes accompanied these developments. The rural population both dispersed and grew, and was probably less homogeneous and more mobile than it had been a generation earlier. The villages became more vulnerable to arbitrary treatment by government bureaucrats as local elites now had less control over them. By the early twentieth century, as government modernization in a sense caught up with what had been happening in the countryside since the 1870s, the government bureaucracy intruded more and more into village life. Provincial police began to appear, along with district officers and cattle registration and land deeds and registration for compulsory military service. Village handicrafts diminished or died out completely as people bought imported consumer goods, like cloth and tools, instead of making them themselves. More economic variation took shape in rural villages, as some grew prosperous from farming while others did not. As well as can be measured, rural standards of living improved in the Fifth Reign. But the statistical averages mean little when measured against the harsh realities of peasant life.

Distributions of Tropical Bee Colonies

In 1977 ecologists Stephen Hubbell and Leslie Johnson recorded a dramatic example of how social interactions can produce and enforce regular spacing in a population. They studied competition and nest spacing in populations of stingless bees in tropical dry forests in Costa Rica. Though these bees do not sting, rival colonies of some species fight fiercely over potential nesting sites.

Stingless bees are abundant in tropical and subtropical environments, where they gather nectar and pollen from a wide variety of flowers. They generally nest in trees and live in colonies made up of hundreds to thousands of workers. Hubbell and Johnson observed that some species of stingless bees are highly aggressive to members of their species from other colonies, while other species are not. Aggressive species usually forage in groups and feed mainly on flowers that occur in high-density clumps. Nonaggressive species feed singly or in small groups and on more widely distributed flowers.

Hubbell and Johnson studied several species of stingless bees to determine whether there is a relationship between aggressiveness and patterns of colony distribution. They predicted that the colonies of aggressive species would show regular distributions, while those of nonaggressive species would show random or closely grouped (clumped) distribution. They concentrated their studies on a thirteen-hectare tract of tropical dry forest that contained numerous nests of nine species of stingless bees.

Though Hubbell and Johnson were interested in how bee behavior might affect colony distributions, they recognized that the availability of potential nest sites for colonies could also affect distributions. ■ So as one of the first steps in their study, they mapped the distributions of trees suitable for nesting. ■ They found that potential nest trees were distributed randomly through the study area. ■ They also found that the number of potential nest sites was much greater than the number of bee colonies. ■ What did these measurements show the researchers? The number of colonies in the study area was not limited by availability of suitable trees, and a clumped or regular distribution of colonies was not due to an underlying clumped or regular distribution of potential nest sites.

Hubbell and Johnson mapped the nests of five of the nine species of stingless bees accurately, and the nests of four of these species were distributed regularly. All four species with regular nest distributions were highly aggressive to bees from other colonies of their own species. The fifth species was not aggressive, and its nests were randomly distributed over the study area.

The researchers also studied the process by which the aggressive species establish new colonies. Their observations provide insights into the mechanisms that establish and maintain the regular nest distribution of these species. Aggressive species apparently mark prospective nest sites with pheromones, chemical substances secreted by some animals for communication with other members of their species. The pheromone secreted by these stingless bees attracts and aggregates members of their colony to the prospective nest site; however, it also attracts workers from other nests.

If workers from two different colonies arrive at the prospective nest at the same time, they may fight for possession. Fights may escalate into protracted battles. The researchers observed battles over a nest tree that lasted for two weeks. Each dawn, fifteen to thirty workers from two competing colonies arrived at the contested nest site. The workers from the two colonies faced off in two swarms and displayed and fought with each other. In the displays, pairs of bees faced each other, slowly flew vertically to a height of about three meters, and then grappled each other to the ground. When the two bees hit the ground, they separated, faced off, and performed another aerial display. Bees did not appear to be injured in these fights, which were apparently ritualized. The two swarms abandoned the battle at about 8 or 9 A.M. each morning, only to re-form and begin again the next day just after dawn. While this contest over an unoccupied nest site produced no obvious mortality, fights over occupied nests sometimes killed over 1,000 bees in a single battle.

TPO 33

The First Civilizations

Evidence suggests that an important stimulus behind the rise of early civilizations was the development of settled agriculture, which unleashed a series of changes in the organization of human communities that culminated in the rise of large ancient empires.

The exact time and place that crops were first cultivated successfully is uncertain. Many prehistorians believe that farming may have emerged independently in several different areas of the world when small communities, driven by increasing population and a decline in available food resources, began to plant seeds in the ground in an effort to guarantee their survival. The first farmers, who may have lived as long as 10,000 years ago, undoubtedly used simple techniques and still relied primarily on other forms of food production, such as hunting, foraging, or pastoralism. The real breakthrough took place when farmers began to cultivate crops along the floodplains of river systems. The advantage was that crops grown in such areas were not as dependent on rainfall and therefore produced a more reliable harvest. An additional benefit was that the sediment carried by the river waters deposited nutrients in the soil, thus enabling the farmer to cultivate a single plot of ground for many years without moving to a new location. Thus, the first truly sedentary (that is, nonmigratory) societies were born. As time went on, such communities gradually learned how to direct the flow of water to enhance the productive capacity of the land, while the introduction of the iron plow eventually led to the cultivation of heavy soils not previously susceptible to agriculture.

The spread of this river valley agriculture in various parts of Asia and Africa was the decisive factor in the rise of the first civilizations. The increase in food production in these regions led to a significant growth in population, while efforts to control the flow of water to maximize the irrigation of cultivated areas and to protect the local inhabitants from hostile forces outside the community provoked the first steps toward cooperative activities on a large scale. The need to oversee the entire process brought about the emergence of an elite that was eventually transformed into a government.

The first clear steps in the rise of the first civilizations took place in the fourth and third millennia B.C. in Mesopotamia, northern Africa, India, and China. How the first governments took shape in these areas is not certain, but anthropologists studying the evolution of human communities in various parts of the world have discovered that one common stage in the process is the emergence of what are called “big men” within a single village or a collection of villages. By means of their military prowess, dominant personalities, or political talents, these people gradually emerge as the leaders of that community. In time, the “big men” become formal symbols of authority and pass on that authority to others within their own family. As the communities continue to grow in size and material wealth, the “big men” assume hereditary status, and their allies and family members are transformed into a hereditary monarchy.

The appearance of these sedentary societies had a major impact on the social organizations, religious beliefs, and way of life of the peoples living within their boundaries. ■ With the increase in population and the development of centralized authority came the emergence of the cities. ■ While some of these urban centers were identified with a particular economic function, such as proximity to gold or iron deposits or a strategic location on a major trade route, others served primarily as administrative centers or the site of temples for the official cult or other ritual observances. ■ Within these cities, new forms of livelihood appeared to satisfy the growing need for social services and consumer goods. ■ Some people became artisans or merchants, while others became warriors, scholars, or priests. In some cases, the physical division within the first cities reflected the strict hierarchical character of the society as a whole, with a royal palace surrounded by an imposing wall and separate from the remainder of the urban population. In other instances, such as the Indus River Valley, the cities lacked a royal precinct and the ostentatious palaces that marked their contemporaries elsewhere.

Railroads and Commercial Agriculture in Nineteenth-Century United States

By 1850 the United States possessed roughly 9,000 miles of railroad track; ten years later it had over 30,000 miles, more than the rest of the world combined. Much of the new construction during the 1850s occurred west of the Appalachian Mountains – over 2,000 miles in the states of Ohio and Illinois alone.

The effect of the new railroad lines rippled outward through the economy. Farmers along the tracks began to specialize in crops that they could market in distant locations. With their profits they purchased manufactured goods that earlier they might have made at home. Before the railroad reached Tennessee, the state produced about 25,000 bushels (or 640 tons) of wheat, which sold for less than 50 cents a bushel. Once the railroad came, farmers in the same counties grew 400,000 bushels (over 10,000 tons) and sold their crop at a dollar a bushel.

The new railroad networks shifted the direction of western trade. ■ In 1840 most northwestern grain was shipped south down the Mississippi River to the bustling port of New Orleans. ■ But low water made steamboat travel hazardous in summer, and ice shut down traffic in winter. ■ Products such as lard, tallow, and cheese quickly spoiled if stored in New Orleans’ hot and humid warehouses. ■ Increasingly, traffic from the Midwest flowed west to east, over the new rail lines. Chicago became the region’s hub, linking the farms of the upper Midwest to New York and other eastern cities by more than 2,000 miles of track in 1855. Thus while the value of goods shipped by river to New Orleans continued to increase, the South’s overall share of western trade dropped dramatically.

A sharp rise in demand for grain abroad also encouraged farmers in the Northeast and Midwest to become more commercially oriented. Wheat, which in 1845 commanded $1.08 a bushel in New York City, fetched $2.46 in 1855; in similar fashion the price of corn nearly doubled. Farmers responded by specializing in cash crops, borrowing to purchase more land, and investing in equipment to increase productivity.

As railroad lines fanned out from Chicago, farmers began to acquire open prairie land in Illinois and then Iowa, putting the fertile, deep black soil into production. Commercial agriculture transformed this remarkable treeless environment. To settlers accustomed to eastern woodlands, the thousands of square miles of tall grass were an awesome sight. Indian grass, Canada wild rye, and native big bluestem all grew higher than a person. Because eastern plows could not penetrate the densely tangled roots of prairie grass, the earliest settlers erected farms along the boundary separating the forest from the prairie. In 1837, however, John Deere patented a sharp-cutting steel plow that sliced through the sod without soil sticking to the blade. Cyrus McCormick refined a mechanical reaper that harvested fourteen times more wheat with the same amount of labor. By the 1850s McCormick was selling 1,000 reapers a year and could not keep up with demand, while Deere turned out 10,000 plows annually.

The new commercial farming fundamentally altered the Midwestern landscape and the environment. Native Americans had grown corn in the region for years, but never in fields as large as those of the later settlers, whose surpluses were shipped east. Prairie farmers also introduced new crops that were not part of the earlier ecological system, notably wheat, along with fruits and vegetables.

Native grasses were replaced by a small number of plants cultivated as commodities. Corn had the best yields, but it was primarily used to feed livestock. Because bread played a key role in the American and European diet, wheat became the major cash crop. Tame grasses replaced native grasses in pastures for making hay.

Western farmers altered the landscape by reducing the annual fires that had kept the prairie free from trees. In the absence of these fires, trees reappeared on land not in cultivation and, if undisturbed, eventually formed woodlots. The earlier unbroken landscape gave way to independent farms, each fenced off in a precise checkerboard pattern. It was an artificial ecosystem of animals, woodlots, and crops, whose large, uniform layout made western farms more efficient than the more-irregular farms in the East.

Extinction Episodes of the Past

It was not until the Cambrian period, beginning about 600 million years ago, that a great proliferation of macroscopic species occurred on Earth and produced a fossil record that allows us to track the rise and fall of biodiversity. Since the Cambrian period, biodiversity has generally risen, but there have been some notable exceptions. Biodiversity collapsed dramatically during at least five periods because of mass extinctions around the globe. The five major mass extinctions receive most of the attention, but they are only one end of a spectrum of extinction events. Collectively, more species went extinct during smaller events that were less dramatic but more frequent. The best known of the five major extinction events, the one that saw the demise of the dinosaurs, is the Cretaceous-Tertiary extinction.

Starting about 280 million years ago, reptiles were the dominant large animals in terrestrial environments. In popular language this was the era “when dinosaurs ruled Earth,” with a wide variety of reptile species occupying many ecological niches. However, no group or species can maintain its dominance indefinitely, and when, after over 200 million years, the age of dinosaurs came to a dramatic end about 65 million years ago, mammals began to flourish, evolving from relatively few types of small terrestrial animals into the myriad of diverse species, including bats and whales, that we know today. Paleontologists label this point in Earth’s history as the end of the Cretaceous period and the beginning of the Tertiary period, often abbreviated as the K-T boundary. This time was also marked by changes in many other types of organisms. Overall, about 38 percent of the families of marine animals were lost, with percentages much higher in some groups. Ammonoid mollusks went from being very diverse and abundant to being extinct. An extremely abundant set of planktonic marine animals called foraminifera largely disappeared, although they rebounded later. Among plants, the K-T boundary saw a sharp but brief rise in the abundance of primitive vascular plants such as ferns, club mosses, horsetails, and conifers and other gymnosperms. The number of flowering plants (angiosperms) was reduced at this time, but they then began to increase dramatically.

What caused these changes? For many years scientists assumed that a cooling of the climate was responsible, with dinosaurs being particularly vulnerable because, like modern reptiles, they were ectothermic (dependent on environmental heat, or cold-blooded). It is now widely believed that at least some species of dinosaurs had a metabolic rate high enough for them to be endotherms (animals that maintain a relatively consistent body temperature by generating heat internally). Nevertheless, climatic explanations for the K-T extinction are not really challenged by the idea that dinosaurs may have been endothermic, because even endotherms can be affected by a significant change in the climate.

Explanations for the K-T extinction were revolutionized in 1980 when a group of physical scientists led by Luis Alvarez proposed that 65 million years ago Earth was struck by a 10-kilometer-wide meteorite traveling at 90,000 kilometers per hour. They believed that this impact generated a thick cloud of dust that enveloped Earth, shutting out much of the incoming solar radiation and reducing plant photosynthesis to very low levels. Short-term effects might have included huge tidal waves and extensive fires. In other words, a series of events arising from a single cataclysmic event caused the massive extinctions. ■ Initially, the meteorite theory was based on a single line of evidence. ■ At locations around the globe, geologists had found an unusually high concentration of iridium in the layer of sedimentary rocks that was formed about 65 million years ago. ■ Iridium is an element that is usually uncommon near Earth’s surface, but it is abundant in some meteorites. ■ Therefore, Alvarez and his colleagues concluded that it was likely that the iridium in sedimentary rocks deposited at the K-T boundary had originated in a giant meteorite or asteroid. Most scientists came to accept the meteorite theory after evidence came to light that a circular formation, 180 kilometers in diameter and centered on the north coast of the Yucatan Peninsula, was created by a meteorite impact about 65 million years ago.

TPO extra 1

POPULATION AND CLIMATE

The human population on Earth has grown to the point that it is having an effect on Earth’s atmosphere and ecosystems. Burning of fossil fuels, deforestation, urbanization, cultivation of rice and cattle, and the manufacture of chlorofluorocarbons (CFCs) for propellants and refrigerants are increasing the concentration of carbon dioxide, methane, nitrogen oxides, sulphur oxides, dust, and CFCs in the atmosphere. About 70 percent of the Sun’s energy passes through the atmosphere and strikes Earth’s surface. This radiation heats the surface of the land and ocean, and these surfaces then reradiate infrared radiation back into space. This allows Earth to avoid heating up too much. However, not all of the infrared radiation makes it into space; some is absorbed by gases in the atmosphere and is reradiated back to Earth’s surface. A greenhouse gas is one that absorbs infrared radiation and then reradiates some of this radiation back to Earth. Carbon dioxide, CFCs, methane, and nitrogen oxides are greenhouse gases. The natural greenhouse effect of our atmosphere is well established. In fact, without greenhouse gases in the atmosphere, scientists calculate that Earth would be about 33℃ cooler than it currently is.

     The current concentration of carbon dioxide in the atmosphere is about 360 parts per million. Human activities are having a major influence on atmospheric carbon dioxide concentrations, which are rising so fast that current predictions are that atmospheric concentrations of carbon dioxide will double in the next 50 to 100 years. The Intergovernmental Panel on Climate Change (IPCC) report in 1992, which represents a consensus of most atmospheric scientists, predicts that a doubling of carbon dioxide concentration would raise global temperatures anywhere between 1.4℃ and 4.5℃. The IPCC report issued in 2001 raised the temperature prediction almost twofold. The suggested rise in temperature is greater than the changes that occurred in the past between ice ages. The increase in temperatures would not be uniform, with the smallest changes at the equator and changes two or three times as great at the poles. The local effects of these global changes are difficult to predict, but it is generally agreed that they may include alterations in ocean currents, increased winter flooding in some areas of the Northern Hemisphere, a higher incidence of summer drought in some areas, and rising sea levels, which may flood low-lying countries.

     Scientists are actively investigating the feedback mechanisms within the physical, chemical, and biological components of Earth’s climate system in order to make accurate predictions of the effects the rise in greenhouse gases will have on future global climates. Global circulation models are important tools in this process. These models incorporate current knowledge on atmospheric circulation patterns, ocean currents, the effect of landmasses, and the like to predict climate under changed conditions. There are several models, and all show agreement on a global scale. For example, all models show substantial changes in climate when carbon dioxide concentration is doubled. However, there are significant differences in the regional climates predicted by different models. Most models project greater temperature increases in mid-latitude regions and in mid-continental regions relative to the global average. Additionally, changes in precipitation patterns are predicted, with decreases in mid-latitude regions and increased rainfall in some tropical areas. Finally, most models predict that there will be increased occurrences of extreme events, such as extended periods without rain (drought), extreme heat waves, greater seasonal variation in temperatures, and increases in the frequency and magnitude of severe storms. Plants and animals have strong responses to virtually every aspect of these projected global changes.

     The challenge of predicting organismal responses to global climate change is difficult. ■ Partly, this is due to the fact that there are more studies of short-term, individual organism responses than there are of long-term, systemwide studies. ■ It is extremely difficult, both monetarily and physically, for scientists to conduct field studies at spatial and temporal scales that are large enough to include all the components of real-world systems, especially ecosystems with large, freely ranging organisms. ■ One way paleobiologists try to get around this limitation is to attempt to reconstruct past climates by examining fossil life. ■

     The relative roles that abiotic and biotic factors play in the distribution of organisms are especially important now, when the world is confronted with the consequences of a growing human population. Changes in climate, land use, and habitat destruction are currently causing dramatic decreases in biodiversity throughout the world. An understanding of climate-organism relationships is essential to efforts to preserve and manage Earth’s biodiversity.

EUROPE IN THE TWELFTH CENTURY

Europe in the eleventh century underwent enormous social, technological, and economic changes, but this did not create a new Europe—it created two new ones. ■ The north developed as a rigidly hierarchical society in which status was determined, or was at least indicated, by the extent to which one owned, controlled, or labored on land; whereas the Mediterranean south developed a more fluid, and therefore more chaotic, world in which industry and commerce predominated and social status both reflected and resulted from the role that one played in the public life of the community. ■ In other words, individual identity and social community in the north were established on a personal basis, whereas in the south they were established on a civic basis. ■ By the start of the twelfth century, northern and southern Europe were very different places indeed, and the Europeans themselves noticed it and commented on it. ■

     Political dominance belonged to the north. Germany, France, and England had large populations and large armies that made them, in the political and military senses, the masters of western Europe. Organized by the practices known collectively as feudalism, these kingdoms emerged as powerful states with sophisticated machineries of government. Their kings and queens were the leading figures of the age; their castles and cathedrals stood majestically on the landscape as symbols of their might; their armies both energized and defined the age. Moreover, feudal society showed a remarkable ability to adapt to new needs by encouraging the parallel development of domestic urban life and commercial networks; in some regions of the north, in fact, feudal society may even have developed in response to the start of the trends toward bigger cities. But southern Europe took the lead in economic and cultural life. Though the leading Mediterranean states were small in size, they were considerably wealthier than their northern counterparts. The Italian city of Palermo in the twelfth century, for example, alone generated four times the commercial tax revenue of the entire kingdom of England. Southern communities also possessed urbane, multilingual cultures that made them the intellectual and artistic leaders of the age. Levels of general literacy in the south far surpassed those of the north, and the people of the south put that learning to use on a large scale. Science, mathematics, poetry, law, historical writing, religious speculation, translation, and classical studies all began to flourish; throughout most of the twelfth century, most of the continent’s best brains flocked to southern Europe.

     So too did a lot of the north’s soldiers. One of the central themes of the political history of the twelfth century was the continual effort by the northern kingdoms to extend their control southward in the hope of tapping into the Mediterranean bonanza. The German emperors starting with Otto I (936-973), for example, struggled ceaselessly to establish their control over the cities of northern Italy, since those cities generated more revenue than all of rural Germany combined. The kings of France used every means at their disposal to push the lower border of their kingdom to the Mediterranean shoreline. And the Normans who conquered and ruled England established outposts of Norman power in Sicily and the adjacent lands of southern Italy; the English kings also hoped or claimed at various times to be, either through money or marriage diplomacy, the rulers of several Mediterranean states. But as the northern world pressed southward, so too did some of the cultural norms and social mechanisms of the south expand northward. Over the course of the twelfth century, the feudal kingdoms witnessed a proliferation of cities modeled in large degree on those of the south. Contact with the merchants and financiers of the Mediterranean led to the development of northern industry and international trade (which helped to pay for many of the castles and cathedrals mentioned earlier). And education spread as well, culminating in the foundation of what is arguably medieval Europe’s greatest invention: the university. The relationship of north and south was symbiotic, in other words, and the contrast between them was more one of differences in degree than of polar opposition.

     feudalism: a political and economic system based on the relationship of a lord to people of lower status, who owed service and/or goods to the lord in exchange for the use of land.

WHAT IS A COMMUNITY?

The Black Hills forest, the prairie riparian forest, and other forests of the western United States can be separated by the distinctly different combinations of species they comprise. It is easy to distinguish between prairie riparian forest and Black Hills forest—one is a broad-leaved forest of ash and cottonwood trees, the other is a coniferous forest of ponderosa pine and white spruce trees. One has kingbirds; the other, juncos (birds with white outer tail feathers). The fact that ecological communities are, indeed, recognizable clusters of species led some early ecologists, particularly those living in the beginning of the twentieth century, to claim that communities are highly integrated, precisely balanced assemblages. This claim harkens back to even earlier arguments about the existence of a balance of nature, where every species is there for a specific purpose, like a vital part in a complex machine. Such a belief would suggest that to remove any species, whether it be plant, bird, or insect, would somehow disrupt the balance, and the habitat would begin to deteriorate. Likewise, to add a species may be equally disruptive.

     One of these pioneer ecologists was Frederick Clements, who studied ecology extensively throughout the Midwest and other areas in North America. He held that within any given region of climate, ecological communities tended to slowly converge toward a single endpoint, which he called the “climatic climax.” This “climax” community was, in Clements’s mind, the most well-balanced, integrated grouping of species that could occur within that particular region. Clements even thought that the process of ecological succession—the replacement of some species by others over time—was somewhat akin to the development of an organism, from embryo to adult. Clements thought that succession represented discrete stages in the development of the community (rather like infancy, childhood, and adolescence), terminating in the climatic “adult” stage, when the community became self-reproducing and succession ceased. Clements’s view of the ecological community reflected the notion of a precise balance of nature.

     Clements was challenged by another pioneer ecologist, Henry Gleason, who took the opposite view. Gleason viewed the community as largely a group of species with similar tolerances to the stresses imposed by climate and other factors typical of the region. Gleason saw the element of chance as important in influencing where species occurred. His concept of the community suggests that nature is not highly integrated. Gleason thought succession could take numerous directions, depending upon local circumstances.

     ■ Who was right? ■ Many ecologists have made precise measurements, designed to test the assumptions of both the Clements and Gleason models. ■ For instance, along mountain slopes, does one life zone, or habitat type, grade sharply or gradually into another? ■ If the divisions are sharp, perhaps the reason is that the community is so well integrated, so holistic, so like Clements viewed it, that whole clusters of species must remain together. If the divisions are gradual, perhaps, as Gleason suggested, each species is responding individually to its environment, and clusters of species are not so integrated that they must always occur together.

     It now appears that Gleason was far closer to the truth than Clements. The ecological community is largely an accidental assemblage of species with similar responses to a particular climate. Green ash trees are found in association with plains cottonwood trees because both can survive well on floodplains and the competition between them is not so strong that only one can persevere. One ecological community often flows into another so gradually that it is next to impossible to say where one leaves off and the other begins. Communities are individualistic.

     This is not to say that precise harmonies are not present within communities. Most flowering plants could not exist were it not for their pollinators—and vice versa. Predators, disease organisms, and competitors all influence the abundance and distribution of everything from oak trees to field mice. But if we see a precise balance of nature, it is largely an artifact of our perception, due to the illusion that nature, especially a complex system like a forest, seems so unchanging from one day to the next.

TPO extra 2

HABITATS AND CHIPMUNK SPECIES

There are eight chipmunk species in the Sierra Nevada mountain range, and most of them look pretty much alike. But eight different species of chipmunks will not be found scurrying around the same picnic area. Nowhere in the Sierra do all eight species occur together. Each species tends strongly to occupy a specific habitat type, within an elevational range, and the overlap among them is minimal.

    The eight chipmunk species of the Sierra Nevada represent but a few of the 15 species found in western North America, yet the whole of eastern North America makes do with but one species: the Eastern chipmunk. Why are there so many very similar chipmunks in the West? The presence of tall mountains interspersed with vast areas of arid desert and grassland makes the West ecologically far different from the East. The West affords much more opportunity for chipmunk populations to become geographically isolated from one another, a condition of species formation. Also, there are more extremes in western habitats. In the Sierra Nevada, high elevations are close to low elevations, at least in terms of mileage, but ecologically they are very different.

    Most ecologists believe that ancient populations of chipmunks diverged genetically when isolated from one another by mountains and unfavorable ecological habitat. These scattered populations first evolved into races—adapted to the local ecological conditions—and then into species, reproductively isolated from one another. This period of evolution was relatively recent, as evidenced by the similar appearance of all the western chipmunk species.

     Ecologists have studied the four chipmunk species that occur on the eastern slope of the Sierra and have learned just how these species interact while remaining separate, each occupying its own elevational zone. The sagebrush chipmunk is found at the lowest elevation, among the sagebrush. The yellow pine chipmunk is common in low to mid-elevations and open conifer forests, including ponderosa and Jeffrey pine forests. The lodge pole chipmunk is found at higher elevations, among the lodge poles, firs, and high-elevation pines. The alpine chipmunk is higher still, venturing among the talus slopes, alpine meadows, and high-elevation pines and junipers. ■ Obviously, the ranges of each species overlap. ■ Why don’t sagebrush chipmunks move into the pine zones? ■ Why don’t alpine chipmunks move to lower elevations and share the conifer forests with lodge pole chipmunks? ■

     The answer, in one word, is aggression. Chipmunk species actively defend their ecological zones from encroachment by neighboring species. The yellow pine chipmunk is more aggressive than the sagebrush chipmunk, possibly because it is a bit larger. It successfully bullies its smaller evolutionary cousin, excluding it from the pine forests. Experiments have shown that the sagebrush chipmunk is physiologically able to live anywhere in the Sierra Nevada, from high alpine zones to the desert. The little creature is apparently restricted to the desert not because it is specialized to live only there but because that is the only habitat where none of the other chipmunk species can live. The fact that sagebrush chipmunks tolerate very warm temperatures makes them, and only them, able to live where they do. The sagebrush chipmunk essentially occupies its habitat by default. In one study, ecologists established that yellow pine chipmunks actively exclude sagebrush chipmunks from pine forests; the ecologists simply trapped all the yellow pine chipmunks in a section of forest and moved them out. Sagebrush chipmunks immediately moved in, but yellow pine chipmunks did not enter sagebrush desert when sagebrush chipmunks were removed.

     The most aggressive of the four eastern-slope species is the lodgepole chipmunk, a feisty rodent indeed. It actively prevents alpine chipmunks from moving downslope, and yellow pine chipmunks from moving upslope. There is logic behind the lodgepole’s aggressive demeanor. It lives in the cool, shaded conifer forests, and of the four species, it is the least able to tolerate heat stress. It is, in other words, the species with the strictest habitat needs: it simply must be in those shaded forests. However, if it shared its habitat with alpine and yellow pine chipmunks, either or both of these species might outcompete it, taking most of the available food. Such competition could effectively eliminate lodgepole chipmunks from the habitat. Lodgepoles survive only by virtue of their aggression.

CETACEAN INTELLIGENCE

We often hear that whales, dolphins, and porpoises are as intelligent as humans, maybe even more so. Are they really that smart? There is no question that cetaceans are among the most intelligent of animals. Dolphins, killer whales, and pilot whales in captivity quickly learn tricks. The military has trained bottlenose dolphins to find bombs and missile heads and to work as underwater spies.

     This type of learning, however, is called conditioning. ■ The animal simply learns that when it performs a particular behavior, it gets a reward, usually a fish. ■ Many animals, including rats, birds, and even invertebrates, can be conditioned to perform tricks. ■ We certainly don’t think of these animals as our mental rivals. ■ Unlike most other animals, however, dolphins quickly learn by observation and may spontaneously imitate human activities. One tame dolphin watched a diver cleaning an underwater viewing window, seized a feather in its beak, and began imitating the diver—complete with sound effects! Dolphins have also been seen imitating seals, turtles, and even water-skiers.

     Given the seeming intelligence of cetaceans, people are always tempted to compare them with humans and other animals. Studies on discrimination and problem-solving skills in the bottlenose dolphin, for instance, have concluded that its intelligence lies “somewhere between that of a dog and a chimpanzee.” Such comparisons are unfair. It is important to realize that intelligence is a very human concept and that we evaluate it in human terms. After all, not many people would consider themselves stupid because they couldn’t locate and identify a fish by its echo. Why should we judge cetaceans by their ability to solve human problems?

     Both humans and cetaceans have large brains with an expanded and distinctively folded surface, the cortex. The cortex is the dominant association center of the brain, where abilities such as memory and sensory perception are centered. Cetaceans have larger brains than ours, but the ratio of brain to body weight is higher in humans. Again, direct comparisons are misleading. In cetaceans it is mainly the portions of the brain associated with hearing and the processing of sound information that are expanded. The enlarged portions of our brain deal largely with vision and hand-eye coordination. Cetaceans and humans almost certainly perceive the world in very different ways. Their world is largely one of sounds, ours one of sights.

     Contrary to what is depicted in movies and on television, the notion of “talking” to dolphins is also misleading. Although they produce a rich repertoire of complex sounds, they lack vocal cords and their brains probably process sound differently from ours. Bottlenose dolphins have been trained to make sounds through the blowhole that sound something like human sounds, but this is a far cry from human speech. By the same token, humans cannot make whale sounds. We will probably never be able to carry on an unaided conversation with cetaceans.

     As with chimpanzees, captive bottlenose dolphins have been taught American Sign Language. These dolphins have learned to communicate with trainers who use sign language to ask simple questions. Dolphins answer back by pushing a “yes” or “no” paddle. They have even been known to give spontaneous responses not taught by the trainers. Evidence also indicates that these dolphins can distinguish between commands that differ from each other only by their word order, a truly remarkable achievement. Nevertheless, dolphins do not seem to have a real language like ours. Unlike humans, dolphins probably cannot convey very complex messages.

     Observations of cetaceans in the wild have provided some insights into their learning abilities. Several bottlenose dolphins off western Australia, for instance, have been observed carrying large cone-shaped sponges over their beaks. They supposedly use the sponges for protection against stingrays and other hazards on the bottom as they search for fish to eat. This is the first record of the use of tools among wild cetaceans.

     Instead of “intelligence,” some people prefer to speak of “awareness.” In any case, cetaceans probably have a very different awareness and perception of their environment than do humans. Maybe one day we will come to understand cetaceans on their terms instead of ours, and perhaps we will discover a mental sophistication rivaling our own.

A MODEL OF URBAN EXPANSION

In the early twentieth century, the science of sociology found supporters in the United States and Canada partly because the cities there were growing so rapidly. It often appeared that North American cities would be unable to absorb all the newcomers arriving in such large numbers. Presociological thinkers like Frederick Law Olmsted, the founder of the movement to build parks and recreation areas in cities, and Jacob Riis, an advocate of slum reform, urged the nation’s leaders to invest in improving the urban environment, building parks and beaches, and making better housing available to all. These reform efforts were greatly aided by sociologists who conducted empirical research on the social conditions in cities. In the early twentieth century, many sociologists lived in cities like Chicago that were characterized by rapid population growth and serious social problems. It seemed logical to use empirical research to construct theories about how cities grow and change in response to major social forces as well as to more controlled urban planning.

The founders of the Chicago school of sociology, Robert Park and Ernest Burgess, attempted to develop a dynamic model of the city, one that would account not only for the expansion of cities in terms of population and territory but also for the patterns of settlement and land use within cities. They identified several factors that influence the physical form of cities. As Park stated, among them are “transportation and communication, tramways and telephones, newspapers and advertising, steel construction and elevators—all things, in fact, which tend to bring about at once a greater mobility and a greater concentration of the urban populations.”

Park and Burgess based their model of urban growth on the concept of “natural areas”—that is, areas such as occupational suburbs or residential enclaves in which the population is relatively homogeneous and land is used in similar ways without deliberate planning. Park and Burgess saw urban expansion as occurring through a series of “invasions” of successive zones or areas surrounding the center of the city. For example, people from rural areas and other societies “invaded” areas where housing was inexpensive. Those areas tended to be close to the places where they worked. In turn, people who could afford better housing and the cost of commuting “invaded” areas farther from the business district.

Park and Burgess’s model has come to be known as the “concentric-zone model” (represented by the figure). Because the model was originally based on studies of Chicago, its center is labeled “Loop,” the term commonly applied to that city’s central commercial zone. Surrounding the central zone is a “zone in transition,” an area that is being invaded by business and light manufacturing. The third zone is inhabited by workers who do not want to live in the factory or business district but at the same time need to live reasonably close to where they work. The fourth or residential zone consists of upscale apartment buildings and single-family homes. And the outermost ring, outside the city limits, is the suburban or commuters’ zone; its residents live within a 30- to 60-minute ride of the central business district.

Studies by Park, Burgess, and other Chicago-school sociologists showed how new groups of immigrants tended to be concentrated in separate areas within inner-city zones, where they sometimes experienced tension with other ethnic groups that had arrived earlier. Over time, however, each group was able to adjust to life in the city and to find a place for itself in the urban economy. ■ Eventually many of the immigrants moved to unsegregated areas in outer zones; the areas they left behind were promptly occupied by new waves of immigrants.

The Park and Burgess model of growth in zones and natural areas of the city can still be used to describe patterns of growth in cities that were built around a central business district and that continue to attract large numbers of immigrants. ■ But this model is biased toward the commercial and industrial cities of North America, which have tended to form around business centers rather than around palaces or cathedrals, as is often the case in some other parts of the world. ■ Moreover, it fails to account for other patterns of urbanization, such as the rapid urbanization that occurs along commercial transportation corridors and the rise of nearby satellite cities. ■

TPO34

Islamic Art and the Book

The arts of the Islamic book, such as calligraphy and decorative drawing, developed during the period from A.D. 900 to 1500, and luxury books are some of the most characteristic examples of Islamic art produced in this period. This came about as a result of two major developments: paper became common, replacing parchment as the major medium for writing, and rounded scripts were regularized and perfected so that they replaced the angular scripts of the previous period, which because of their angularity were uneven in height. Books became major vehicles for artistic expression, and the artists who produced them, notably calligraphers and painters, enjoyed high status; their workshops were often sponsored by princes and their courts. Before A.D. 900, manuscripts of the Koran (the book containing the teachings of the Islamic religion) seem to have been the most common type of book produced and decorated, but after that date a wide range of books were produced for a broad spectrum of patrons. These continued to include, of course, manuscripts of the Koran, which every Muslim wanted to read, but scientific works, histories, romances, and epic and lyric poetry were also copied in fine handwriting and decorated with beautiful illustrations. Most were made for sale on the open market, and cities boasted special souks (markets) where books were bought and sold. The mosque of Marrakech in Morocco is known as the Kutubiyya, or Booksellers’ Mosque, after the adjacent market. Some of the most luxurious books were specific commissions made at the order of a particular prince and signed by the calligrapher and decorator.

Papermaking had been introduced to the Islamic lands from China in the eighth century. ■ It has been said that Chinese papermakers were among the prisoners captured in a battle fought near Samarqand between the Chinese and the Muslims in 751, and the technique of papermaking (in which cellulose pulp extracted from any of several plants is first suspended in water, caught on a fine screen, and then dried into flexible sheets) slowly spread westward. ■ Within fifty years, the government in Baghdad was using paper for documents. ■ Writing in ink on paper, unlike parchment, could not easily be erased, and therefore paper had the advantage that it was difficult to alter what was written on it. ■ Papermaking spread quickly to Egypt, and eventually to Sicily and Spain, but it was several centuries before paper supplanted parchment for copies of the Koran, probably because of the conservative nature of religious art and its practitioners. In western Islamic lands, parchment continued to be used for manuscripts of the Koran throughout this period.

The introduction of paper spurred a conceptual revolution whose consequences have barely been explored. Although paper was never as cheap as it has become today, it was far less expensive than parchment, and therefore more people could afford to buy books. Paper is also thinner than parchment, so more pages could be enclosed within a single volume. At first, paper was made in relatively small sheets that were pasted together, but by the beginning of the fourteenth century, very large sheets (as much as a meter across) were available. These large sheets meant that calligraphers and artists had more space on which to work. Paintings became more complicated, giving the artist greater opportunities to depict space or emotion. The increased availability of paper, particularly after 1250, encouraged people to develop systems of representation, such as architectural plans and drawings. This in turn allowed the easy transfer of artistic ideas and motifs over great distances from one medium to another, and at a different scale, in ways that had been difficult, if not impossible, in the previous period.

Rounded styles of Arabic handwriting had long been used for correspondence and documents alongside the formal angular scripts used for inscriptions and manuscripts of the Koran. Around the year 900, Ibn Muqla, who was a secretary and vizier at the Abbasid court in Baghdad, developed a system of proportioned writing. He standardized the length of alif, the first letter of the Arabic alphabet, and then determined what the size and shape of all other letters should be, based on the alif. Eventually, six round forms of handwriting, composed of three pairs of big and little scripts known collectively as the Six Pens, became the standard repertory of every calligrapher.

The Development of Steam Power

By the eighteenth century, Britain was experiencing a severe shortage of energy. ■ Because of the growth of population, most of the great forests of medieval Britain had long ago been replaced by fields of grain and hay. ■ Wood was in ever-shorter supply, yet it remained tremendously important. ■ It served as the primary source of heat for all homes and industries and as a basic raw material. ■ Processed wood (charcoal) was the fuel that was mixed with iron ore in the blast furnace to produce pig iron (raw iron). The iron industry’s appetite for wood was enormous, and by 1740 the British iron industry was stagnating. Vast forests enabled Russia to become the world’s leading producer of iron, much of which was exported to Britain. But Russia’s potential for growth was limited too, and in a few decades Russia would reach the barrier of inadequate energy that was already holding England back.

As this early energy crisis grew worse, Britain looked toward its abundant and widely scattered reserves of coal as an alternative to its vanishing wood. Coal was first used in Britain in the late Middle Ages as a source of heat. By 1640 most homes in London were heated with it, and it also provided heat for making beer, glass, soap, and other products. Coal was not used, however, to produce mechanical energy or to power machinery. It was there that coal’s potential was enormous.

As more coal was produced, mines were dug deeper and deeper and were constantly filling with water. Mechanical pumps, usually powered by hundreds of horses walking in circles at the surface, had to be installed. Such power was expensive and bothersome. In an attempt to overcome these disadvantages, Thomas Savery in 1698 and Thomas Newcomen in 1705 invented the first primitive steam engines. Both engines were extremely inefficient. Both burned coal to produce steam, which was then used to operate a pump. However, by the early 1770s, many of the Savery engines and hundreds of the Newcomen engines were operating successfully, though inefficiently, in English and Scottish mines.

In the early 1760s, a gifted young Scot named James Watt was drawn to a critical study of the steam engine. Watt was employed at the time by the University of Glasgow as a skilled craftsman making scientific instruments. In 1763, Watt was called on to repair a Newcomen engine being used in a physics course. After a series of observations, Watt saw that the Newcomen engine’s waste of energy could be reduced by adding a separate condenser. This splendid invention, patented in 1769, greatly increased the efficiency of the steam engine. The steam engine of Watt and his followers was the technological advance that gave people, at least for a while, unlimited power and allowed the invention and use of all kinds of power equipment.

The steam engine was quickly put to use in several industries in Britain. It drained mines and made possible the production of ever more coal to feed steam engines elsewhere. The steam power plant began to replace waterpower in the cotton-spinning mills as well as other industries during the 1780s, contributing to a phenomenal rise in industrialization. The British iron industry was radically transformed. The use of powerful, steam-driven bellows in blast furnaces helped iron makers switch over rapidly from limited charcoal to unlimited coke (which is made from coal) in the smelting of pig iron (the process of refining impure iron) after 1770. In the 1780s, Henry Cort developed the puddling furnace, which allowed pig iron to be refined in turn with coke. Cort also developed heavy-duty, steam-powered rolling mills, which were capable of producing finished iron in every shape and form.

The economic consequence of these technical innovations in steam power was a great boom in the British iron industry. In 1740 annual British iron production was only 17,000 tons, but by 1844, with the spread of coke smelting and the impact of Cort’s inventions, it had increased to 3,000,000 tons. This was a truly amazing expansion. Once scarce and expensive, iron became cheap, basic, and indispensable to the economy.

Protection of Plants by Insects

Many plants – one or more species of at least 68 different families – can secrete nectar even when they have no blossoms, because they bear extrafloral nectaries (structures that produce nectar) on stems, leaves, leaf stems, or other structures. These plants usually occur where ants are abundant, most in the tropics but some in temperate areas. Among those of northeastern North America are various plums, cherries, roses, hawthorns, poplars, and oaks. Like floral nectar, extrafloral nectar consists mainly of water with a high content of dissolved sugars and, in some plants, small amounts of amino acids. The extrafloral nectaries of some plants are known to attract ants and other insects, but the evolutionary history of most plants with these nectaries is unknown. Nevertheless, most ecologists believe that all extrafloral nectaries attract insects that will defend the plant.

     Ants are probably the most frequent and certainly the most persistent defenders of plants. ■ Since the highly active worker ants require a great deal of energy, plants exploit this need by providing extrafloral nectar that supplies ants with abundant energy. ■ To return this favor, ants guard the nectaries, driving away or killing intruding insects that might compete with ants for nectar. ■ Many of these intruders are herbivorous and would eat the leaves of the plants. ■

     Biologists once thought that the secretion of extrafloral nectar served some purely internal physiological function, and that ants provided no benefit whatsoever to the plants that secrete it. This view and the opposing “protectionist” hypothesis that ants defend plants had been disputed for over a hundred years when, in 1910, a skeptical William Morton Wheeler commented on the controversy. He called for proof of the protectionist view: that visitations of the ants confer protection on the plants and that in the absence of the insects a much greater number would perish or fail to produce flowers or seeds than when the insects are present. The proof that Wheeler called for is now abundant: Barbara Bentley reviewed the relevant evidence in 1977, and since then many more observations and experiments have provided still further proof that ants benefit plants.

One example shows how ants attracted to extrafloral nectaries protect morning glories against attacking insects. The principal insect enemies of the North American morning glory feed mainly on its flowers or fruits rather than its leaves. Grasshoppers feeding on flowers indirectly block pollination and the production of seeds by destroying the corolla or the stigma, which receives the pollen grains and on which the pollen germinates. Without their colorful corolla, flowers do not attract pollinators and are not fertilized. An adult grasshopper can consume a large corolla, about 2.5 inches long, in an hour. Caterpillars and seed beetles affect seed production directly. Caterpillars devour the ovaries, where the seeds are produced, and seed beetle larvae eat seeds as they burrow in developing fruits.

     Extrafloral nectaries at the base of each sepal attract several kinds of insects, but 96 percent of them are ants of several different species. When buds are still small, less than a quarter of an inch long, the sepal nectaries are already present and producing nectar. They continue to do so as the flower develops and while the fruit matures. Observations leave little doubt that ants protect morning glory flowers and fruits from the combined enemy force of grasshoppers, caterpillars, and seed beetles. Bentley compared the seed production of six plants that grew where there were no ants with that of seventeen plants that were occupied by ants. Unprotected plants bore only 45 seeds per plant, but plants occupied by ants bore 211 seeds per plant, nearly five times as many. Although ants are not big enough to kill or seriously injure grasshoppers, they drive them away by nipping at their feet. Seed beetles are more vulnerable because they are much smaller than grasshoppers. The ants prey on the adult beetles, disturb females as they lay their eggs on developing fruits, and eat many of the eggs they do manage to lay.

former, preceding, prior, antecedent; precedent, record; referent (of a pronoun); lineage, ancestry

***************************************

to embody in material, outward form

by which; by means of which; whereby

instinct; animal sense; the natural intelligence of animals

importance, seriousness, gravity, severity

sight, view, spectacle, show; (in the plural) eyeglasses

forward, onward, ahead

past tense and past participle of seek (sought)

small figure; painted clay statuette (figurine)

lawn, meadow, grassland, greensward

as would suit a photograph or painting; vividly, graphically

to recall, call back; to summon; to remove from office

fate, destiny, providence, one's lot; to be destined; to doom to an ill fate

vital; relating to life and living

to place a layer beneath something; to underlie, form the basis of something

(= kerosine) lamp oil, kerosene

gently, calmly, gradually

landslide; a fall of earth from a mountainside along a road

a fall or shedding; slip of paper; to shed, drop, cast off

strict, stringent, rigid, absolute, explicit, firm

opposed, contrary, adverse, harmful; facing

  • originally, to begin with, in the beginning, earlier
  • primarily

to become stranded in a place one can no longer leave

storm-tossed, floating (on the water); adrift, aimless, drifting

relating to Columbia, a poetic name for America (Columbian)

a bundle of floating timber; a flat log boat; to travel or send by raft

progression, succession, sequence, advance

reign, rule, sovereignty; to reign or rule; to prevail

thereby; by that means; as a result of it

to introduce, present, popularize, make customary, acquaint, bring up

present everywhere; ubiquitous; omnipresent

childish, infantile, elementary; relating to childhood

to proceed, set out, move, take action, engage in; to arise from; (plural) proceeds, revenue

lying among strata; embedded within layers (interbedded)

conqueror, victor, triumphant

married, wedded; husband and wife; relating to marriage, marital; attached, deeply devoted

a landsman (as opposed to a seafarer); one whose life and work are on land

ability, force, power, strength, energy

to display, show, make understood; to represent, act for; to portray, express, stand for

blacksmith, farrier

hoe; to hoe

warrior, fighter, combatant, man-at-arms, champion, valiant

height, elevation, altitude, a high place; the heavens; the utmost degree; haughtiness; (in the plural) heights; grandeur

edifice; a large building such as a church

(n.) ornament, adornment, decoration

(v.) to adorn, to decorate

Proxy (climate): climate proxies are preserved physical characteristics of the past that stand in for direct meteorological measurements and enable scientists to reconstruct the climatic conditions over a longer fraction of the Earth’s history.

take off, remove, dismiss

rural, rustic; villager, peasant, farmer

to separate, cut off, detach, sever

context; content, purport, meaning; text

pact, treaty, agreement, covenant

acre (equal to 43,560 square feet, or about 4,047 square meters), a unit for measuring land; land

necktie, cravat; bond, knot; obligation, constraint; connection; a tie (equal score); to tie, knot, fasten

aristocracy; rule by the nobility; the noble class

to deliver, hand over, give; to convey, transfer; to present; to translate; to render

deed, act; title deed, legal document; to transfer by deed

mass, bunch, cluster; a heavy blow, thump; to clump together

to go up or raise, to mount, to increase; to intensify; to escalate, get out of control

dawn, daybreak, sunrise; beginning; to dawn, to begin

grip, hook; a grapple, hand-to-hand struggle; to grapple, come to grips
