General Circulation Models of the Atmosphere

This page contains the latter part of the text of a section of the Website The Discovery of Global Warming.

The climate system is too complex for the human brain to grasp with simple insight. No scientist managed to devise a page of equations that explained the global atmosphere's operations. With the coming of digital computers in the 1950s, a small American team set out to model the atmosphere as an array of thousands of numbers. The work spread during the 1960s as computer modelers began to make decent short-range predictions of regional weather. Modeling long-term climate change for the entire planet, however, was held back by lack of computer power, ignorance of key processes such as cloud formation, inability to calculate the crucial ocean circulation, and insufficient data on the world's actual climate. By the mid 1970s, enough had been done to overcome these deficiencies so that Syukuro Manabe could make a quite convincing calculation. He reported that the Earth's average temperature should rise a few degrees if the level of carbon dioxide gas in the atmosphere doubled. This was confirmed in the following decade by increasingly realistic models. Skeptics dismissed them all, pointing to dubious technical features and the failure of models to match some kinds of data. By the late 1990s these problems were largely resolved, and most experts found the predictions of overall global warming plausible. Yet modelers could not be sure that the real climate, with features their equations still failed to represent, would not produce some big surprise. (Rudimentary physical models without extensive calculations are covered in a separate essay on Simple Models of Climate, and there is a supplementary essay for the Basic Radiation Calculations that became part of the technical foundation of comprehensive calculations.)



Ocean Circulation and Real Climates (1969-1988)

In the early 1980s, several groups pressed ahead toward more realistic models. They put in a reasonable facsimile of the Earth's actual geography, and replaced the wet "swamp" surface with an ocean that could exchange heat with the atmosphere. Thanks to increased computer power the models were now able to handle seasonal changes as a matter of course. It was also reassuring when Hansen's group and others got a decent match to the rise-fall-rise curve of global temperatures since the late 19th century, once they put in not only the rise of CO2 but also changes in emissions of volcanic dust and solar activity.

Adding a solar influence was a stretch, for nobody had figured out any plausible way that the superficial variations seen in numbers of sunspots could affect climate. To arbitrarily adjust the strength of the presumed solar influence in order to match the historical temperature curve was guesswork, dangerously close to fudging. But many scientists suspected there truly was a solar influence, and adding it did improve the match. Sometimes a scientist must "march with both feet in the air," assuming a couple of things at once in order to see whether it all eventually works out. (1) Reassured that they might be on the right track, in the 1980s climate modelers increasingly looked toward the future. When they introduced a doubled CO2 level into their improved models, they consistently found the same few degrees of warming. (2)

The skeptics were not persuaded. The Charney panel itself had pointed out that much more work was needed before models would be fully realistic. The treatment of clouds remained a central uncertainty. Another great unknown was the influence of the oceans. Back in 1979 the Charney panel had warned that the oceans' enormous capacity for soaking up heat could delay an atmospheric temperature rise for decades. Global warming might not become obvious until all the surface waters had warmed up, which would be too late to take timely precautions. (3) This time lag was not revealed by the existing GCMs, for these computed only equilibrium states. The models, lacking nearly all the necessary data and thwarted by formidable calculational problems, simply did not account for the true influence of the oceans.

Oceanographers were coming to realize that large amounts of energy were carried through the seas by a myriad of whorls of various types, from tiny convection swirls up to sluggish eddies a thousand kilometers wide. Calculating these whorls, like calculating all the world's individual clouds, was beyond the reach of the fastest computer. Again parameters had to be devised to summarize the main effects, only this time for entities that were far worse observed and understood than clouds. Modelers could only put in average numbers to represent the heat that they knew somehow moved vertically from layer to layer in the seas, and the energy somehow carried from warm latitudes toward the poles. They suspected that the actual behavior of the oceans might work out quite differently from their models. And even with the simplifications, to get anything halfway realistic required a vast number of computations, indeed more than for the atmosphere.

Manabe was keenly aware that if the Earth's future climate were ever to be predicted, it was "essential to construct a realistic model of the joint ocean-atmosphere system." (4) He shouldered the task in collaboration with Kirk Bryan, an oceanographer with meteorological training, who had been brought into the group back in 1961 to build a stand-alone numerical model of an ocean. The two got together to construct a computational system that coupled together their separate models. Manabe's winds and rain would help drive Bryan's ocean currents, while in return Bryan's sea-surface temperatures and evaporation would help drive the circulation of Manabe's atmosphere. At first they tried to divide the work: Manabe would handle matters from the ocean surface upward, while Bryan would take care of what lay below. But they found things just didn't work that way for studying a coupled system. They moved into one another's territory, aided by a friendly personal relationship.

Bryan and Manabe were the first to put together in one package approximate calculations for a wide variety of important features. They not only incorporated both oceans and atmosphere, but added into the bargain feedbacks from changes in sea ice. Moreover, they included a detailed scheme that represented, region by region, how moisture built up in the soil, or evaporated, or ran off in rivers to the sea.

Their big problem was that from a standing start it took several centuries of simulated time for an ocean model to settle into a realistic state. After all, that was how long it would take the surface currents of the real ocean to establish themselves from a random starting-point. The atmosphere, however, readjusts itself in a matter of weeks. After about 50,000 time steps of ten minutes each, Manabe's model atmosphere would approach equilibrium. The team could not conceivably afford the computer time to pace the oceans through decades in ten-minute steps. Their costly Univac 1108, a supercomputer by the standards of the time, needed 45 minutes to compute the atmosphere through a single day. Bryan's ocean could use longer time steps, say a hundred minutes, but the simulated currents would not even begin to settle down until millions of these steps had passed.

The key to their success was a neat trick for matching the different timescales. They ran their ocean model with its long time steps through twelve days. They ran the atmosphere model with its short time-steps through three hours. Then they coupled the atmosphere and ocean to exchange heat and moisture. Back to the ocean for another twelve days, and so forth. They left out seasons by using annual average sunlight to drive the system.
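The bookkeeping of this asynchronous coupling can be sketched with a toy model. Everything below (the scalar "temperatures," the two relaxation rates, the function names) is invented for illustration; only the alternation of long ocean segments with short atmosphere segments, exchanging boundary values at each coupling point, follows the scheme described above.

```python
# Toy sketch of asynchronous ocean-atmosphere coupling: the slow ocean
# advances 12 simulated days in roughly 100-minute steps, then the fast
# atmosphere advances 3 simulated hours in 10-minute steps, and the two
# exchange boundary values. The relaxation rates are invented placeholders.

OCEAN_STEP_MIN = 100           # ocean time step (minutes)
ATMOS_STEP_MIN = 10            # atmosphere time step (minutes)
OCEAN_SEGMENT = 12 * 24 * 60   # ocean segment: 12 simulated days, in minutes
ATMOS_SEGMENT = 3 * 60         # atmosphere segment: 3 simulated hours

def run_coupled(n_cycles, sst=10.0, air=20.0):
    """Alternate ocean and atmosphere segments, coupling at the boundary."""
    for _ in range(n_cycles):
        # Ocean segment: slow heat uptake from the (held-fixed) atmosphere.
        for _ in range(OCEAN_SEGMENT // OCEAN_STEP_MIN):
            sst += 0.0005 * (air - sst)
        # Atmosphere segment: fast adjustment toward the new sea surface.
        for _ in range(ATMOS_SEGMENT // ATMOS_STEP_MIN):
            air += 0.02 * (sst - air)
    return sst, air
```

The economy is visible in the loop counts: each coupled cycle spends only 18 atmosphere steps for every 12 simulated days of ocean time, which is what made century-scale ocean runs affordable at all.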

Manabe and Bryan were confident enough of their model to undertake a heroic computer run, some 1100 hours long (more than 12 full days of computer time devoted to the atmosphere and 33 to the ocean). In 1969, they published the results in an unusually short paper, as Manabe recalled long afterward--"and still I am very proud of it." (5)

Bryan wrote modestly at the time that "in one sense the... experiment is a failure." For even after a simulated century, the deep ocean circulation had not nearly reached equilibrium. It was not clear what the final climate solution would look like. (6) Yet it was a great success just to carry through a linked ocean-atmosphere computation that was at least starting to settle into equilibrium. The result looked like a real planet--not our Earth, for in place of geography there was only a radically simplified geometrical sketch, but in its way realistic. It was obviously only a first draft with many details wrong, yet there were ocean currents, trade winds, deserts, rain belts, and snow cover, all in roughly the right places. Unlike our actual Earth, so poorly observed, in the simulation one could see every detail of how air, water, and energy moved about.

Following up, in 1975 Manabe and Bryan published results from the first coupled ocean-atmosphere GCM that had a roughly Earth-like geography. Looking at their crude map, one could make out continents like North America and Australia, although not smaller features like Japan or Italy. The supercomputer ran for fifty straight days, simulating movements of air and sea over nearly three centuries. "The climate that emerges," they wrote, "includes some of the basic features of the actual climate. However, it has many unrealistic features." It still failed to show the full oceanic circulation. After all, the inputs had not been very realistic--for one thing, they had not put in the seasonal changes of sunlight. The results were getting close enough to reality to encourage them to push ahead. (7) By 1979, they had mobilized enough computer power to run their model through more than a millennium while incorporating seasons. (8)

Meanwhile a team headed by Warren Washington at NCAR in Colorado developed another ocean model, based on Bryan's, and coupled it to their own quite different GCM. Since they had begun with Bryan's ocean model it was not surprising that their results resembled Manabe and Bryan's, but it was still a gratifying confirmation. Again the patterns of air temperature, ocean salinity, and so forth came out roughly correct overall, albeit with noticeable deviations from the real planet, such as tropics that were too cold. As Washington's team admitted in 1980, the work "must be described as preliminary." (9) Through the 1980s, these and other teams continued to refine coupled models, occasionally checking how they reacted to increased levels of CO2. These were not so much attempts to predict the real climate as experiments to work out methods for doing so.

The results, for all their limitations, said something about the predictions of the atmosphere-only GCMs. As the Charney panel had pointed out, the oceans would delay the appearance of global warming for decades by soaking up heat. Hansen's group therefore warned in 1985 that a policy of "wait and see" might be wrongheaded. A mild temperature rise in the atmosphere might not become apparent until much worse greenhouse warming was inevitable. Also as expected, complex feedbacks showed up in the ocean circulation, influencing just how the weather would change in a given region. (10) Aside from that, including a somewhat realistic ocean did not turn up anything that would alter the basic prediction of future warming. Once again it was found that simple models had pointed in the right direction. By 1988, Hansen had enough confidence in his model to issue strong public pronouncements about the imminent threat of global warming. (11)

A few of the calculations showed a disturbing new feature--a possibility that the ocean circulation was fragile. Signs of rapid past changes in circulation had been showing up in ice cores and other evidence that had set oceanographers to speculating. In 1985, Bryan and a collaborator tried out a coupled atmosphere-ocean model with a CO2 level four times higher than at present. They found signs that the world-spanning "thermohaline" circulation, where differences in heat and salinity drove a vast overturning of sea water in the North Atlantic, could come to a halt. Three years later Manabe and another collaborator produced a simulation in which, even at present CO2 levels, the ocean-atmosphere system could settle down in one of two states--the present one, or a state without the overturning. (12) Some experts worried that global warming might indeed shut down the circulation. Halting the steady flow of warm water into the North Atlantic would bring devastating climate changes in Europe and perhaps beyond.

As oceanographer Wallace Broecker remarked, the GCMs had been designed to come to equilibrium, giving an illusion of stability. Only now, as scientists got better at modeling ocean-atmosphere interactions, might they find that the climate system was liable to switch rapidly from one state to another. Acknowledging the criticism, a few modelers began to undertake the protracted computer runs that were necessary to show the real effects of raising CO2. Instead of separately computing "before" and "after" states, they computed the entire "transient response," plodding month after month through a century or more. (13) This was pushing the state of the art to its limit, however. Most model groups could barely handle the huge difficulties of constructing three-dimensional models of both ocean circulation and atmospheric circulation, let alone link the two together and run the combination through a century or so.

The climate changes that different GCMs computed for doubled CO2, reviewers noted in 1987, "show many quantitative and even qualitative differences; thus we know that not all of these simulations can be correct, and perhaps all may be wrong." (14) Skeptics pointed out that GCMs were unable to represent even the present climate successfully from first principles. Anything slightly unrealistic in the initial data or equations could be amplified a little at each step, and after thousands of steps the entire result usually veered off into something impossible. To get around this, the modelers had kept one eye over their shoulder at the real world. They adjusted various parameters (for example, the numbers describing cloud physics), "tuning" the models and running them again and again until the results looked like the real climate. These adjustments were not calculated from physical principles, but were fiddled until the model became stable. It was possible to get a crude climate representation without the tuning, but the best simulations relied on this back-and-forth between model and observation. If models were tuned to match current climate, the critics asked, how reliably could they calculate a future, different state?
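The "tuning" procedure the critics worried about can be caricatured in a few lines. The toy model and its single free "cloud" parameter below are wholly invented; the point is only that the parameter is adjusted until an observed number is reproduced, rather than being derived from physical principles.

```python
# Caricature of tuning: bisect on a free cloud parameter until a toy
# "model" reproduces the observed global-mean temperature. Nothing here
# comes from any real GCM; the numbers are arbitrary.

OBSERVED_MEAN_TEMP = 14.0   # roughly the observed global-mean temperature

def toy_climate(cloud_albedo):
    """Toy stand-in for a GCM: more reflective cloud -> cooler climate."""
    absorbed_heating = 30.0              # arbitrary units
    return absorbed_heating * (1.0 - cloud_albedo)

def tune(target, lo=0.0, hi=1.0, tol=1e-8):
    """Adjust the free parameter by bisection until the model matches."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if toy_climate(mid) > target:
            lo = mid                     # too warm: make clouds brighter
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

`tune(OBSERVED_MEAN_TEMP)` returns a cloud parameter that makes the model match the present climate exactly, but says nothing about whether the same value would hold in a different climate state; that is precisely the critics' question.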

One way to check that was to see whether models could make a reasonable facsimile of the Earth during a glacial period--virtually a different planet. If you could reproduce a glacial climate with the same physical parameters for clouds and so forth that you used for the current planet, that would be evidence the models were not arbitrarily trimmed just to reproduce the present. But first you would need to know what the conditions had actually been around the world during an ice age. That required far more data than paleoclimatologists had turned up. Already in 1968 a meteorologist warned that henceforth reconstructing past climate would not be limited by theory so much as by "the difficulty of establishing the history of paleoenvironment." Until data and models were developed together, he said, atmospheric scientists could only gaze upon the ice ages with "a helpless feeling of wonderment." (15)

To meet the need, a group of oceanographers persuaded the U.S. government to fund a large-scale project to analyze ooze extracted from the sea bottom at numerous locations. The results, combined with terrestrial data from fossil pollen and other evidence, gave a world map of temperatures at the peak of the last ice age. As soon as this CLIMAP project began publishing its results in 1976, modelers began trying to make a representation for comparison. The first attempts showed only a very rough agreement, although good enough to reproduce essential features such as the important role played by the reflection of sunlight from ice. (16)

At first the modelers simply worked to reproduce the ice age climate over land by using the CLIMAP figures for sea surface temperatures. But when they tried to push on and use models to calculate the sea surface temperatures, they ran into trouble. The CLIMAP team had reported that in the middle of the last ice age, tropical seas had been only slightly cooler than at present, a difference of barely 1°C. That raised doubts about whether the climate was as sensitive to external forces (like greenhouse gases) as the modelers thought. Moreover, while the tropical seas had stayed warm during the last ice age, the air at high elevations had certainly been far colder. That was evident in lower altitudes of former snowlines detected by geologists on the mountains of New Guinea and Hawaii. No matter how much the GCMs were fiddled, they could not be persuaded to show such a large difference of temperature with altitude. A few modelers contended that the tropical sea temperatures must have varied more than CLIMAP said. But they were up against an old and strongly held scientific conviction that the lush equatorial jungles had changed little over millions of years, testifying to a stable climate. (This was an echo of traditional ideas that the entire planet's climate was fundamentally stable, with ice ages no more than regional perturbations at high latitudes and elevations.) (17)

On the other hand, by 1988 modelers had passed a less severe test. Some 8,000 years ago the world had gone through a warm period--presumably like the climate that the greenhouse effect was pushing us toward. One modeling group managed to compute a fairly good reproduction of the temperature, winds, and moisture in that period. (18)

Meanwhile all the main models had been developed to a point where they could reliably reproduce the enormously different climates of summer and winter. That was a main reason why a review panel of experts concluded in 1985 that "theoretical understanding provides a firm basis" for predictions of several degrees of warming in the next century. (19) So why did the models fail to match the relatively mild sea-surface temperatures along with cold mountains reported for the tropics in the previous ice age? Experts could only say that the discrepancies "constitute an enigma." (20)

A more obvious and annoying problem was the way models failed to tell how global warming would affect a particular region. Policy-makers and the public were less interested in the planet as a whole than in how much warmer their own particular locality would get, and whether to expect wetter or drier conditions. Already in 1979, the Charney panel's report had singled out the absence of local climate predictions as a weakness. At that time the modelers who tackled climate change had only tried to make predictions averaged over entire zones of latitude. They might calculate a geographically realistic model through a seasonal cycle, but nobody had the computer power to drive one through centuries. In the mid 1970s, when Manabe and Wetherald had introduced a highly simplified geography that divided the globe into land and ocean segments without mountains, they had found, not surprisingly, that the model climate's response to a raised CO2 level was "far from uniform geographically." (21)

During the 1980s, modelers got enough computer power to introduce much more realistic geography into their climate change calculations. They began to grind out maps in which our planet's continents could be recognized, showing climate region by region in a world with doubled CO2. However, for many important regions the maps printed out by different groups turned out to be incompatible. Where one model predicted more rainfall in the greenhouse future, another might predict less. That was hardly surprising, for a region's climate depended on particulars like the runoff of water from its type of soil, or the way a forest grew darker as snow melted. Modelers were far from pinning down such details precisely. A simulation of the present climate was considered excellent if its average temperature for a given region was off by only a few degrees and its rainfall was not too high or too low by more than 50% or so. On the positive side, the GCMs mostly did agree fairly well on global average predictions. But the large differences in regional predictions emboldened skeptics who cast doubt on the models' fundamental validity. (22)

A variety of other criticisms were voiced. The most prominent came from Sherwood Idso. In 1986 he calculated that for the known increase of CO2 since the start of the century, models should predict something like 3°C of warming, far more than had actually been observed. Idso insisted that something must be badly wrong with the models' sensitivity, that is, their response to changes in conditions. (23) Other scientists gave little heed to the claim. It was only an extension of a long and sometimes bitter controversy in which they had debated Idso's arguments and rejected them as shamefully oversimplified.

Setting Idso's criticisms aside, there undeniably remained points where the models stood on shaky foundations. Researchers who studied clouds, the transfer of radiation through the atmosphere and other physical features warned that more work was needed before the fundamental physics of GCMs would be entirely sound. For some features, no calculation could be trusted until more observations were made. And even when the physics was well understood, it was no simple task to represent it properly in the computations. "The challenges to be overcome through the use of mathematical models are daunting," a modeler remarked, "requiring the efforts of dedicated teams working a decade or more on individual aspects of the climate system." (24) As Manabe regretfully explained, so much physics was involved in every raindrop that it would never be possible to compute absolutely everything. "And even if you have a perfect model which mimics the climate system, you don't know it, and you have no way of proving it." (25)

Indeed philosophers of science explained to anyone who would listen that a computer model, like any other embodiment of a set of scientific hypotheses, could never be "proved" in the absolute sense one could prove a mathematical theorem. What models could do was help people sort through countless ideas and possibilities, offering evidence on which were most plausible. Eventually the models, along with other evidence and other lines of reasoning, might converge on an explanation of climate which--if necessarily imperfect, like all human knowledge--could be highly reliable. (26)

Through the 1980s and beyond, however, different models persisted in coming up with noticeably different numbers for climate in one place or another. These divergences could be significant for anyone trying to plan for change, at least when one looked closely at particular phenomena such as the projected changes in soil moisture for a given region. Worse, some groups suspected that even apparently correct results were sometimes generated for the wrong reasons. Above all, they recognized that their modeling of cloud formation was still scarcely justified by the little that was known about cloud physics. Even the actual cloudiness of various regions of the world had been measured in only a sketchy fashion. (Until satellite measurements became available later in the decade, the best set of data only gave averages by zones for the Northern Hemisphere. Modelers mirrored the set to represent clouds in the Southern Hemisphere, with the seasons reversed--although of course the distribution of land, sea, and ice is very different in the two halves of the planet.) Having come this far, many modelers felt a need to step back from the global calculations. Reliable progress would require more work on fundamental elements, to improve the sub-models that represented clouds, snow, vegetation, and so forth. (27) Modelers settled into a long grind of piecemeal improvements.
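The hemisphere-mirroring stopgap amounts to a simple index manipulation. The cloud fractions below are invented placeholders; only the logic of reusing a Northern Hemisphere latitude band for the matching Southern band, with the seasonal cycle shifted by six months, is the point.

```python
# nh_cloud[month][zone]: zonal-mean cloud fraction for Northern Hemisphere
# latitude zones; zone 0 is the equatorial band, the last zone the polar
# band. The values are invented placeholders, not observations.
nh_cloud = [[0.45 + 0.02 * month, 0.60, 0.70 - 0.01 * month]
            for month in range(12)]

def sh_cloud(month, zone):
    """Southern Hemisphere estimate: same latitude band as the north,
    seasons shifted by six months (southern January plays the role of July)."""
    return nh_cloud[(month + 6) % 12][zone]
```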

After 1988

"There has been little change over the last 20 years or so in the approaches of the various modeling groups," an observer remarked in 1989. He thought this was partly due to a tendency "to fixate on specific aspects of the total problem," and partly to limited resources. "The modeling groups that are looking at the climate change process," he noted, "are relatively small in size compared to the large task." (28) The limitations not only in resources but in computer power, global data, and plain scientific understanding kept the groups far from their goal of precisely reproducing all the features of climate. Yet under any circumstances it would be impossible to compute the current climate perfectly, given the amount of sheer randomness in weather systems. Modelers nevertheless felt they now had a basic grasp of the main forces and variations in the atmosphere. Their interest was shifting from representing the current climate ever more precisely to studies of long-term climate change.

The research front accordingly moved from atmospheric models to coupled ocean-atmosphere models, and from calculating stable systems to representing the "transient response" to changes in conditions. Running models under different conditions, sometimes through simulated centuries, the teams drew rough sketches, with rising confidence, of how climate could be altered by various influences--and especially by changes in greenhouse gases. They were now reasonably sure that they knew enough to issue clear warnings of future global warming to the world's governments. (29)

As GCMs incorporated ever more complexities, modelers needed to work ever more closely with one another and with people in outside specialties. Communities of collaboration among experts had been rapidly expanding throughout geophysics and the other sciences, but perhaps nowhere so obviously as in climate modeling. The clearest case centered around NCAR. It lived up to its name of a "National Center" (in fact an international center) by developing what was explicitly a "Community Climate Model." The first version used pieces drawn from the work of an Australian group, and the European Centre for Medium-Range Weather Forecasts, and several others. In 1983 NCAR published all its computer source codes along with a "Users' Guide" so that outside groups could run the model on their own machines. The various outside experiments and modifications in return informed the NCAR group. Subsequent versions of the Community Climate Model, published in 1987, 1992, and so on, incorporated many basic changes and additional features--for example, the Manabe group's scheme for handling the way rainfall was absorbed, evaporated, or ran off in rivers. [The version released in 2004 was called the Third Community Climate System Model, CCSM3, reflecting the ever increasing complexity.] NCAR had an exceptionally strong institutional commitment to building a model that could be run on a variety of computer platforms, but in other ways their work was not unusual. By now most models used contributions from so many different sources that they were all in a sense "community" models. (30)

The effort was no longer dominated by American groups. At the Hadley Centre for Climate Prediction and Research in the United Kingdom and the Max Planck Institute for Meteorology in Germany, in particular, groups were starting to produce pathbreaking model runs. By the mid 1990s, some modelers in the United States feared they were falling behind. One reason was that the U.S. government forbade them from buying foreign supercomputers, a technology where Japan had seized the lead. National rivalries are normal where groups compete to be first with the best results, but competition did not obstruct the collaborative flow of ideas.

An important example of massive collaboration was a 1989 study involving groups in the United States, Canada, England, France, Germany, China, and Japan. Taking 14 models of varying complexity, the groups fed each the same external forces (using a change in sea surface temperature as a surrogate for climate change), and compared the results. The simulated climates agreed well for clear skies. But "when cloud feedback was included, compatibility vanished." The models varied by as much as a factor of three in their sensitivity to the external forces, disagreeing in particular on how far a given increase of CO2 would raise the temperature. (31) A few respected meteorologists concluded that the modelers' representation of clouds was altogether useless.

Three years later, another comparison of GCMs constructed by groups in eight different nations found that in some respects they all erred in the same direction. Most noticeably, they all got the present tropics a bit too cold. It seemed that "all models suffer from a common deficiency in some aspect of their formulation," some hidden failure to understand or perhaps even to include some mechanisms. (32) On top of this came evidence that the world's clouds would probably change as human activity added dust, chemical haze, and other aerosols to the atmosphere. "From a climate modeling perspective these results are discouraging," one expert remarked. Up to this point clouds had been treated simply in terms of moisture, and now aerosols were adding "an additional degree of complication." (33)

Most experts nevertheless felt the GCMs were on the right track. In the multi-model comparisons, all the results were at least in rough overall agreement with reality. A test that compared four of the best GCMs found them all pretty close to the observed temperature and precipitation for much of the Earth's land surface. (34) Such studies were helped greatly by a new capability to set their results against a uniform body of world-wide data. Specially designed satellite instruments were at last monitoring incoming and outgoing radiation, cloud cover, and other essential parameters. It was now evident, in particular, where clouds brought warming and where they made for cooling. Overall, it turned out that clouds tended to cool the planet--strongly enough so that small changes in cloudiness would have a serious feedback on climate. (35)

There was also progress in building aerosols into climate models. When Mount Pinatubo erupted in the Philippines in June 1991, sharply increasing the amount of sulfuric acid haze in the stratosphere world-wide, Hansen's group declared that "this volcano will provide an acid test for global climate models." Running their model with the new data, they predicted a noticeable cooling for the next couple of years. (36) By 1995 their predictions for different levels of the atmosphere were seen to be on the mark. "The correlations between the predictions and the independent analyses [of temperatures]," a reviewer observed, "are highly significant and very striking." The ability of modelers to reproduce Pinatubo's effects was a particularly strong reason for confidence that the GCMs were sound. (37)

Incorporating aerosols into GCMs improved the agreement with observations, helping to answer a major criticism. Typical GCMs had a climate sensitivity that predicted about 3°C of warming for a doubling of CO2. However, as Idso and others pointed out, the actual rise in temperature over the century had not kept pace with the rise of the gas. An answer came from models that put in the increase of aerosols. The aerosols' cooling effect, it became clear, had tended to offset the greenhouse warming. By now computer power was so great that modeling groups could go beyond steady-state pictures and follow changes through time. In 1995, models at three centers (the Lawrence Livermore National Laboratory in California, the Hadley Centre, and the Max Planck Institute) all reproduced the overall trend of 20th-century temperature changes and even the observed geographical pattern. Still more convincing, the correspondences between models and data had increased toward the end of the century as the greenhouse gas levels rose. (38)

This GCM work powerfully influenced the Intergovernmental Panel on Climate Change, appointed by the world's governments. In its reports the IPCC noted that the pattern of geographical and vertical distribution of atmospheric heating that the models computed for greenhouse warming was different from the pattern that other influences alone would produce. The similarity between the computed greenhouse effect's "signature" and the actual record of recent decades backed up the panel's official conclusion--a human influence on climate had probably been detected. (39)

Reproducing the present climate, however, did not guarantee that GCMs could reliably predict a quite different climate. The first hurdle was to reproduce the peculiar rise-fall-rise of temperature since the late 19th century. Now models could do that, by putting in the history of aerosol and solar variations along with greenhouse gases. There were new data and theories arguing that it was not fudging to put in solar variations, which really could influence climate. The models also had gotten good at the tricky task of mimicking the normal excursions of climate. On some fraction of runs, irregular patterns like the historical rise-fall-rise happened just by chance.

Confidence rose further in the late 1990s when the modelers' failure to match the CLIMAP data on ice-age temperatures was resolved. The breakthrough came when a team under Lonnie Thompson of the Polar Research Center at Ohio State University struggled onto a high-altitude glacier in the tropical Andes. They managed to drill out a core that recorded atmospheric conditions back into the last ice age. The results, they announced, "challenge the current view of tropical climate history..." (40) It was not the computer models that had been unreliable--it was the oceanographers' complex manipulation of their data as they sought numbers for tropical sea-surface temperatures. A variety of other new types of climate measures agreed that tropical ice age waters had turned significantly colder, by perhaps 3C or more. That was roughly what the GCMs had calculated.

Debate continued, as some defended the original CLIMAP estimates with other types of data. Moreover, the primitive ice-age GCMs required special adjustments and were not fully comparable with the ocean-coupled simulations of the present climate. But there was no longer a flat contradiction with the modelers, who could now feel more secure in the way their models responded to things like the reflection of sunlight from ice and snow. The discovery that the tropical oceans had felt the most recent ice age put the last nail in the coffin of the traditional view of a planet where some regions, at least, maintained a stable climate. (41)

Another persistent problem was the poor quality of models that included the oceans. Modelers had long understood that reliable climate prediction would require coupling an atmospheric GCM to a full-scale ocean model. Such coupled models were beginning to dominate attention. They all tended to drift over time into unrealistic patterns. In particular, models seemed flatly unable to keep the thermohaline circulation going. The only solution was to tune the models to match real-world conditions by adjusting various parameters. The simplest method, used for instance by Suki Manabe in his influential global warming computations, was to fiddle with the flux of heat at the interface between ocean and atmosphere. As the model began to drift away from reality, it was telling him (as he explained), "Oh, Suki, I need this much heat here." And he would put heat into the ocean or take it away as needed to keep the results stable. Modelers would likewise force transfers of water and so forth, formally violating the laws of physics to compensate for their models' deficiencies.

The workers who used this technique argued that it was fair play for finding the effects of greenhouse gases, so long as they imposed the same numbers when they ran their model for different gas levels. Some other modelers roundly criticized flux adjustments as "fudge factors" that could bring whatever results the modelers sought. A few scientists who were skeptical about global warming brought the criticism into public view, arguing that GCMs were too faulty to prove that action must be taken on greenhouse gases. If a model was tuned to match the present climate, why believe it could tell us anything at all about a changed situation? The argument was too technical to attract much public attention. The modelers themselves, reluctant to give ammunition to critics of their enterprise, mostly carried on the debate privately with their colleagues.
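The logic behind the modelers' defense of flux adjustments can be sketched in a toy calculation. This is purely schematic (the function and every number in it are invented for illustration, not drawn from any actual GCM): the adjustment is diagnosed once against observations and then imposed unchanged at every gas level, so the *difference* between a control run and a doubled-CO2 run is untouched by the fudge term.

```python
# Schematic illustration of a flux adjustment (invented numbers, not a real GCM).
# A toy "coupled model" computes ocean-atmosphere heat flux with a systematic
# bias; the adjustment is diagnosed once from the control climate and then
# applied identically in both the control and the doubled-CO2 experiment.

def model_flux(sst, co2_factor=1.0):
    """Toy parameterization of surface heat flux (W/m^2) with a built-in bias."""
    return 40.0 - 2.0 * (sst - 15.0) + 4.0 * (co2_factor - 1.0)

observed_flux = 50.0   # hypothetical observed flux at one grid point
control_sst = 15.0     # hypothetical observed sea-surface temperature (C)

# Diagnose the adjustment as the model's mismatch with data in the control run.
flux_adjustment = observed_flux - model_flux(control_sst)

def adjusted_flux(sst, co2_factor=1.0):
    # The same correction is imposed regardless of CO2 level, so the response
    # to changing CO2 is unaffected by the adjustment itself.
    return model_flux(sst, co2_factor) + flux_adjustment

print(adjusted_flux(control_sst))       # now matches the observations: 50.0
print(adjusted_flux(control_sst, 2.0)
      - adjusted_flux(control_sst))     # CO2 response is unchanged: 4.0
```

The critics' counter-argument is visible in the same sketch: nothing guarantees that the bias being papered over stays constant when the climate itself changes.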

Around 1998, different groups published consistent simulations of the ice age climate based on the full armament of coupled ocean-atmosphere models. This was plainly a landmark, showing that the models were not so elaborately adjusted that they could work only for a climate resembling the present one. The work called for a variety of ingenious methods, along with brute force--one group ran its model on a supercomputer for over a year. (42) Better still, around the same time a couple of computer groups simulating the present climate managed to do away altogether with flux adjustments while running their models through centuries. Their results had reasonable seasonal cycles and so forth, not severely different from the results of the earlier flux-adjusted models. Evidently the tuning had not been a fatal cheat. (43)

Another positive note was the plausible representation of middle-scale phenomena such as the El Niño-Southern Oscillation. This irregular cycle of wind patterns and water movement in the tropical Pacific Ocean became a target for modelers once it was found to affect weather powerfully around the globe. Such mid-sized models, constructed by groups nearly independent of the GCM researchers, offered an opportunity to work out and test solutions to tricky problems like the interaction between winds and waves. By the late 1990s, specially designed regional models showed some success in predicting El Niño cycles.

Meanwhile other groups confronted the problem of the North Atlantic thermohaline circulation, spurred by evidence from ice and ocean-bed cores of drastic shifts during glacial periods. By the turn of the century modelers had produced convincing simulations of these past changes. (44) Manabe's group looked to see if something like that could happen in the future. Their preliminary work in the 1980s had aimed at steady-state models, which were a necessary first step, but unable by their very nature to see changes in the oceans. Now the group had enough computer power to follow the system as it evolved, plugging in a steady increase of atmospheric CO2 level. They found that sometime in the next few centuries, global warming could seriously weaken the ocean circulation. (45)

Progress in handling the oceans underpinned striking successes in simulating a wide variety of changes. Modelers had now pretty well reproduced not only simple geographical and seasonal averages, from winter to summer and back, but also the spectrum of random regional and annual fluctuations in the averages--indeed it was now a test of a good model that a series of runs showed a variability similar to the real weather. Modelers had followed the climate through time, matching the 20th-century temperature record. Exploring unusual conditions, modelers had reproduced the effects of a major volcanic eruption, and even the ice ages. All this raised confidence that climate models could not be too far wrong in their disturbing predictions of future transformations. Plugging in a standard 1% per year rise in greenhouse gases and calculating through the next century, an ever larger number of modeling groups with ever more sophisticated models all found a significant temperature rise. (46)
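The "standard 1% per year" scenario was a convenient benchmark because compound growth at that rate doubles the CO2-equivalent concentration in roughly seventy years, i.e. within the simulated century. A quick check:

```python
import math

# Compounded 1% per year growth in CO2-equivalent concentration.
# The doubling time T satisfies 1.01**T == 2, so T = ln 2 / ln 1.01.
doubling_time = math.log(2) / math.log(1.01)
print(round(doubling_time, 1))   # about 69.7 years
```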

Yet the models were far from proven beyond question. The most noticeable defect was that when it came to representing the present climate, models that coupled atmosphere to oceans were notably inferior to plain atmosphere-only GCMs. That was no wonder, since arbitrary assumptions remained. For example, oceanographers had not solved the mystery of how heat is transported up or down from layer to layer of sea water. The modelers relied on primitive average parameterizations, which new observations cast into doubt.

The deficiencies were not severe enough to prevent several groups from reproducing all the chief features of the atmosphere-ocean interaction. In particular, in 2001 two groups using coupled models matched the rise of temperature that had been detected in the upper layers of the world's oceans. They got a good match only by putting in the rise of greenhouse gases. Yet if modelers now understood how the climate system could change and even how it had changed, they were far from saying precisely how it would change in future. Only models that incorporated a much more realistic ocean would be able to predict, beyond the general global warming, the evolution of climate region by region over the coming decades. [By 2005, computer modelers had advanced far enough to declare that the temperature measurements gave an unequivocal "signature" of the greenhouse effect. The pattern of warming in different ocean basins neatly matched what models predicted would arise, after some delay, from the planetary energy imbalance caused by humanity's emissions into the atmosphere. Nothing else could produce such a warming pattern.] (47)

Looking farther afield, the future climate system could not be determined very accurately until ocean-atmosphere GCMs were linked interactively with models for changes in vegetation. Dark forests and bright deserts not only responded to climate, but influenced it. Since the early 1990s the more advanced numerical models, for weather prediction as well as climate, had incorporated descriptions of such things as the way plants take up water through their roots and evaporate it into the atmosphere. Changes in the chemistry of the atmosphere also had to be incorporated, for these influenced cloud formation and more. All these complex interactions were tough to model.

Meanwhile, vegetation and atmospheric chemistry needed to be linked with still more problematic models--ones that projected future human social and economic life. Over longer time scales, modelers would also need to consider changes in ocean chemistry, ice sheets, ecosystems, and so forth. When people talked now of a "GCM" they no longer meant a "General Circulation Model," built from the traditional equations for weather. "GCM" now stood for "Global Climate Model" or even "Global Coupled Model," incorporating many things besides the circulation of the atmosphere.

As for modeling the atmosphere itself, experts still had plenty of work to do. The range of modelers' predictions of global warming for a doubling of CO2 remained broad, anywhere between roughly 1.5 and 4.5C. The ineradicable uncertainty was caused largely by ignorance of what would happen to clouds as the world warmed. Much was still unknown about how aerosols helped to form clouds, what kinds of clouds would form, and how the various kinds of clouds would interact with radiation. That problem came to the fore in 1995, when a controversy was triggered by studies suggesting that clouds absorbed much more radiation than modelers had thought. Through the preceding decade, modelers had adjusted their calculations to remove certain anomalies in the data, on the assumption that the data were unreliable. Now careful measurement programs indicated that the anomalies could not be dismissed so easily. As one participant in the controversy warned, "both theory and observation of the absorption of solar radiation in clouds are still fraught with uncertainties." (48)
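Why a modest uncertainty in clouds translated into so broad a range of predictions can be seen from the textbook feedback-factor relation: a no-feedback doubling response dT0 is amplified to dT = dT0 / (1 - f), so small shifts in the net feedback f stretch the answer dramatically. The numbers below are illustrative only (a commonly quoted no-feedback response near 1.2C, with invented feedback values), not taken from any particular GCM:

```python
# Textbook feedback amplification: dT = dT0 / (1 - f).
# dT0 and the sample feedback factors are illustrative assumptions.

dT0 = 1.2  # approximate no-feedback warming for doubled CO2, in C

for f in (0.2, 0.5, 0.73):  # sample net feedback factors
    print(f, round(dT0 / (1 - f), 1))
# feedback factors in this range span roughly 1.5C to 4.4C of warming
```

The spread from 1.5C to about 4.4C in this toy calculation mirrors the 1.5-4.5C range that the models stubbornly refused to narrow.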

As the 21st century began, experts continued to think of new subtleties in the physics of clouds which might significantly affect the models' predictions. (49) Struggling with the swarm of technical controversies, experts could not even say whether cloud feedbacks would tend to hold back global warming, or hasten it. The uncertainties remained the main reason different GCMs gave different predictions for future warming. (50) It was also disturbing that modelers had trouble calculating a correct picture of the temperature structure of the atmosphere. While global warming was observed at the Earth's surface, satellite measurements showed essentially no warming at middle levels of the atmosphere, which was not what the models had predicted. At a minimum this suggested that the models needed to be revised to better account for volcanic eruptions, ozone, and other factors that had effects at particular altitudes. [It was a relief when a new analysis of the satellite measurements in 2004 showed that in fact the mid-level atmosphere had been warming up in the way the models predicted.]

Critics were quick to point out the flaws in models as publicly as possible. It was not just that there were questionable assumptions in the equations. There were also undeniable problems in the basic physical data that the models relied on, and uncertainties in the way the data were manipulated to fit things together. The models still needed special adjustments to get plausible ice ages. And when modelers tried to simulate the climate of the Cretaceous epoch--a super-greenhouse period a hundred million years ago that had been far warmer than the present, with a CO2 level several times higher--the results were far from the climate pattern geologists reported.

But for a climate more or less like the present, the factors that modelers failed to understand seemed to have gotten incorporated somehow into the parameters. For the models did produce reasonable weather patterns for such different conditions as day and night, summer and winter, the effects of volcanic eruptions and so forth. At worst, the models were somehow all getting right results for wrong reasons—flaws which would only show up after greenhouse gases pushed the climate beyond any conditions that the models were designed to reproduce. If there were such deep-set flaws, that did not mean, as some critics implied, that there was no need to worry about global warming. As other experts pointed out, if the models were faulty, the future climate changes could be worse than they predicted, not better.

The skeptics could not reasonably dismiss computer modeling in general. That would throw away much of the past few decades’ work in many fields of science, and even key business practices. The challenge, then, was to produce a simulation that did not show global warming. It should have been easy to try, for by the 1990s, millions of people owned personal computers more powerful than anything commanded by the climate modelers of the 1970s. But no matter how people fiddled with climate models, whether simple one- or two-dimensional models or full-scale GCMs, if they could reproduce something resembling the present climate, and then added some greenhouse gases, the model always displayed a severe risk of future global warming. [My personal computer is running a real climate model in its idle minutes. You can join the many helping this important experiment: visit ]

The modelers had reached a point where they could confidently declare what was reasonably likely to happen. They did not claim they would ever be able to say what would certainly happen. Most obviously, different GCMs stubbornly continued to give different predictions for particular regions. Some things looked quite certain, like higher temperatures in the Arctic (hardly a prediction now, for such warming was becoming blatantly visible in the weather data). But for many of the Earth's populated places, the models could not reliably tell the local governments whether to brace themselves for more droughts, more floods, or neither or both.

For overall global warming, however, the ocean-atmosphere GCMs at various centers were converging on similar predictions. The decades of work by teams of specialists, backed up by immense improvements in computers and data, had gradually built up confidence. It was largely thanks to their work that, as the editor of Science magazine announced in 2001, a "consensus as strong as the one that has developed around this topic is rare in the history of science." (51) Nearly every expert now agreed that the old predictions were solid--raising the CO2 level was all but certain to warm the globe. Doubling the level would most likely raise the average temperature around 3C, give or take a degree or two. The consequences of such a warming were also predictable. Sea levels would certainly rise. And the weather would certainly change, probably toward an intensified cycle of storms, floods, and droughts. (52) The greatest uncertainty now was no longer in how to calculate the effects of the greenhouse gases and aerosols that humanity poured into the atmosphere. The greatest unknown for the coming century was how much of this pollution we would decide to emit.
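The "around 3C per doubling" figure reflects the fact that CO2's radiative forcing grows with the logarithm of its concentration, so each doubling adds roughly the same forcing. A back-of-the-envelope sketch using the widely cited logarithmic approximation of Myhre et al. (1998), with an illustrative mid-range sensitivity parameter (the 0.8 value is an assumption for this example, not a figure from the text):

```python
import math

# Logarithmic approximation for CO2 radiative forcing (Myhre et al. 1998):
#   dF = 5.35 * ln(C / C0)  W/m^2.
# Multiplying by an illustrative mid-range sensitivity parameter of
# roughly 0.8 C per (W/m^2) recovers the ~3C-per-doubling figure.

def forcing(c_ratio):
    """Radiative forcing (W/m^2) for a given CO2 concentration ratio."""
    return 5.35 * math.log(c_ratio)

sensitivity = 0.8                            # C per (W/m^2), assumed
print(round(forcing(2.0), 2))                # ~3.71 W/m^2 for doubled CO2
print(round(sensitivity * forcing(2.0), 1))  # ~3.0 C
```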

Impacts: The consequences of such a warming were predictable. A variety of different computer models agreed with one another, and also agreed with studies of eras in the distant geological past that had been warmer than the present climate. Indeed many of the predicted changes already seemed to be underway. Here are the likely consequences of warming by a few degrees Celsius--that is, what scientists expect if humanity manages to restrain its emissions within the next few decades, so that greenhouse gases do not rise beyond twice the level maintained for the past few million years:

* Most places will continue to get warmer, especially at night and in winter. Heat waves will probably continue to get worse, killing vulnerable people.

* Sea levels will continue to rise for centuries. (The last time the planet had been so warm, the level had been roughly 5 meters higher, submerging coastlines where many millions of people now live.) Although the rise is gradual, storm surges will cause emergencies.

* Weather patterns will keep changing, probably toward an intensified water cycle with stronger storms, floods and droughts. Generally speaking, regions already dry are expected to get drier, and wet regions wetter. In flood-prone regions, whether wet or dry, stronger storms are liable to bring worse flooding. Ice fields and winter snowpack will shrink, jeopardizing water supply systems in some regions. There is evidence that all these things have already begun to happen.

* Agricultural systems and ecosystems will be stressed, although some will temporarily benefit. Uncounted valuable species are at risk of extinction, especially in the Arctic, mountain areas, and coral reefs. Tropical diseases will probably spread to warmed regions. Increased CO2 levels will also affect biological systems independent of climate change. Some crops will be fertilized, as will some invasive weeds. The oceans are becoming more acidic, which endangers coral and much other important marine life. (108)


1. Hansen et al. (1981); for details of the model, see Hansen et al. (1983). I heard "march with both feet in the air" from physicist Jim Faller, my thesis adviser.

2. Doubling: e.g., Manabe and Stouffer (1980); additional landmarks: Washington and Meehl (1984); Hansen et al. (1984); Wilson and Mitchell (1987). All three used a "slab" ocean 50m or so deep to store heat seasonally, and all got 3-5C warming for doubled CO2.

3. National Academy of Sciences (1979), p. 2.

4. Manabe et al. (1979), p. 394.

5. Manabe, interview by P. Edwards, March 14, 1998. The time steps were explained in a communication to me by Manabe, 2001. The short paper is Manabe and Bryan (1969); details are in Manabe (1969); Bryan (1969a).

6. Bryan (1969a), p. 822.

7. Manabe et al. (1975); Bryan et al. (1975); all this is reviewed in Manabe (1997).

8. Manabe et al. (1979).

9. Washington et al. (1980), quote p. 1887.

10. Hoffert et al. (1980); Schlesinger et al. (1985); Harvey and Schneider (1985); "yet to be realized warming calls into question a policy of 'wait and see'," Hansen et al. (1985); ocean delay also figured in Hansen et al. (1981); see discussion in Hansen et al. (2000a), pp. 139-40.

11. Hansen et al. (1988).

12. Bryan and Spelman (1985); Manabe and Stouffer (1988).

13. Broecker (1987a), p. 123.

14. Schlesinger and Mitchell (1987), p. 795.

15. Mitchell (1968), p. iii.

16. Gates (1976a); Gates (1976b); another attempt (citing the motivation as seeking an understanding of ice ages, not checking model validity): Manabe and Hahn (1977).

17. The pioneering indicator of variable tropical seas was coral studies by Fairbanks, starting with Fairbanks and Matthews (1978); snowlines: e.g., Webster and Streten (1978); Porter (1979); for more bibliography, see Broecker (1995), pp. 276-77; inability of models to fit: noted e.g., in Hansen et al. (1984), p. 145 who blame it on bad CLIMAP data; see discussion in Rind and Peteet (1985); Manabe did feel that ice age models came close enough overall to give "some additional confidence" that the prediction of future global warming "may not be too far from reality." Manabe and Broccoli (1985), p. 2650. There were also disagreements about the extent of continental ice sheets and sea ice.

18. COHMAP (1988); also quite successful was Kutzbach and Guetter (1984).

19. MacCracken and Luther (1985b), p. xxiv.

20. "enigma:" Broecker and Denton (1989), p. 2468.

21. Manabe and Wetherald (1980), p. 99.

22. MacCracken and Luther (1985b), see pp. 266-67; Mitchell et al. (1987); Grotch (1988).

23. Idso (1986); Idso (1987).

24. E.g., "discouraging... deficiencies" are noted and improvements suggested by Ramanathan et al. (1983), see p. 606; one review of complexities and data deficiencies is Kondratyev (1988), pp. 52-62, see p. 60; "challenges": Mahlman (1998), p. 84.

25. Manabe, interview by Weart, Dec. 1989.

26. Oreskes et al. (1994); Norton and Suppe (2001).

27. Schlesinger and Mitchell (1987); McGuffie and Henderson-Sellers (1997), p. 55, and my thanks to Dr. McGuffie for personal communications.

28. Dickinson (1989), p. 101-02.

29. The 1990 Intergovernmental Panel on Climate Change report drew especially on the Goddard Institute model, Hansen et al. (1988).

30. A brief history is in Kiehl et al. (1996), pp. 1-2, available at; see also Anthes (1986), p. 194.

31. Cess et al. (1989); Cess et al. (1990) (signed by 32 authors).

32. Boer et al. (1992), quote p. 12,774.

33. Albrecht (1989), p. 1230.

34. Kalkstein (1991); as cited in Rosenzweig and Hillel (1998).

35. Purdom and Menzel (1996), pp. 124-25; cloudiness and radiation budget: Ramanathan et al. (1989b); see also Ramanathan et al. (1989a).

36. Hansen et al. (1992), p. 218. The paper was submitted in Oct. 1991.

37. Carson (1999), p. 10; ex. of later work: Soden et al. (2002).

38. Mitchell et al. (1995); similarity increasing with time: Santer et al. (1996).

39. The 1990 report drew especially on the Goddard Institute model, viz., Hansen et al. (1988); the Hadley model with its correction for aerosols was particularly influential in the 1995 report according to Kerr (1995a); Carson (1999); "The probability is very low that these correspondences could occur by chance as a result of natural internal variability only." IPCC (1996), p. 22, see ch. 8; on problems of detecting regional variations, see Schneider (1994). The "signature" or "fingerprint" method was pioneered by Klaus Hasselmann's group at the Max Planck Institute, e.g., Hasselmann (1993).

40. Thompson et al. (1995), quote p. 50; see Krajick (2002).

41. A similar issue was a mismatch between GCMs and geological reconstructions of tropical ocean temperatures during warm periods in the distant past, which was likewise resolved (at least in part) in favor of the models, see Pearson et al. (2001); the sensitivity of tropical climate was adumbrated in 1985 by a Peruvian ice core that showed shifts in the past thousand years, Thompson et al. (1985); new data: especially Mg in forams, Hastings et al. (1998); see Bard (1999); Lee and Slowey (1999); for the debate, Bradley (1999), pp. 223-26; see also discussion in IPCC (2001), pp. 495-96.

42. These results helped convince me personally that there was unfortunately little chance that global warming was a mirage. "Landmark": Rahmstorf (2002), p. 209, with refs.

43. Manabe, interview by Paul Edwards, March 15, 1998, AIP. Carson (1999), pp. 13-17 (for Hadley Centre model of J.M. Gregory and J.F.B. Mitchell); Kerr (1997a) (for NCAR model of W.M. Washington and G.A. Meehl); Shackley et al. (1999).

44. E.g., Ganopolski and Rahmstorf (2001).

45. Manabe and Stouffer (1993).

46. Ice ages without flux adjustments, e.g., Khodri et al. (2001).

47. Levitus et al. (2001); Barnett et al. (2001) (with no flux adjustments); Hansen et al. (2005) found that "Earth is now absorbing 0.85 +/- 0.15 Watts per square meter more energy from the Sun than it is emitting to space," an imbalance bound to produce severe effects.

48. "fraught:" Li et al. (1995); for background and further references on "anomalous absorption," see Ramanathan and Vogelman (1997); IPCC (2001), pp. 432-33.

49. E.g., Lindzen et al. (2001).

50. IPCC (2001), pp. 427-31.

51. Kennedy (2001). Presumably he meant topics of comparable complexity, not simple and universally accepted theories such as relativity or evolution.

52. For an overall review I have used Grassl (2000); whether the hydrological cycle will intensify is still debated, see Ohmura and Wild (2002).