Global Warming Contrarians Part 6: Global Cooling in the Mid-20th Century

Here, at last, is the much-delayed sixth instalment in my ongoing series of posts on the (mostly bad) arguments that contrarians make against global warming. (A list of earlier parts in the series can be found here.)

In this post, I want to clear out of the way one more factoid that contrarians hold up as evidence against greenhouse warming:

Claim: In the mid-20th century, CO2 levels rose but temperatures dropped.

Fact: Obviously, there are other influences on global climate besides CO2. Since the 1970s, CO2 has clearly emerged as the dominant influence, and temperatures have risen by about 0.17°C per decade. However, earlier in the 20th century, when CO2 was rising more slowly, there were large variations in the temperature trend, most notably an accelerated warming up to the 1940s and a slight cooling from the 1940s to the 1960s.

Radiative forcing is a measure of how much energy the planet is gaining or losing due to a particular influence on climate. To fully explain the observed temperature changes, we have to take into account all forcings, not just CO2. Other important forcings include solar activity (natural), other greenhouse gases (in this case anthropogenic), and aerosols (a mixture of natural and anthropogenic). Aerosols were probably the most important factor in mid-20th-century climate change.
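To make this concrete, here is a minimal sketch of how individual forcings can be summed into a net forcing and converted into a temperature response. The logarithmic CO2 expression is the commonly cited simplified formula; the sensitivity value and the aerosol and solar numbers below are purely illustrative assumptions, not measured values.

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified CO2 radiative forcing in W/m^2: 5.35 * ln(C/C0)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

def equilibrium_warming(net_forcing, sensitivity=0.8):
    """Equilibrium temperature response in deg C, assuming an
    illustrative sensitivity of 0.8 deg C per W/m^2."""
    return sensitivity * net_forcing

# Purely illustrative mid-century numbers:
ghg = co2_forcing(315)   # CO2 around 315 ppm (late 1950s)
aerosols = -0.5          # assumed aerosol cooling, W/m^2
solar = 0.1              # assumed small solar contribution, W/m^2

net = ghg + aerosols + solar
print(round(equilibrium_warming(net), 2))  # a few tenths of a degree
```

The point is not the particular numbers but the bookkeeping: a negative aerosol term can largely offset a positive greenhouse term, which is essentially what the mid-century pause looks like in the forcing record.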

Aerosols are tiny particles suspended in the atmosphere; the most relevant here are sulphate aerosols, which form from sulphur dioxide (SO2) and reflect sunlight, cooling the Earth. Unlike CO2, aerosols are removed from the troposphere within days to weeks, though they can persist in the stratosphere for a year or two (the troposphere and stratosphere are the lowest and second-lowest layers of the atmosphere, respectively). Large volcanic eruptions naturally inject large amounts of SO2 into the stratosphere, causing a temporary cooling.

Industrial activity has not only increased the amount of greenhouse gases in the atmosphere, but also the amount of aerosols, somewhat counteracting the greenhouse effect. The resulting increase in aerosol forcing appears to have been the main reason for the pause in the warming trend in the mid-20th century. (Because anthropogenic aerosols generally do not get into the stratosphere, they tend to remain at the latitude where they were emitted. This explains the fact that while the Northern Hemisphere saw three decades of dramatic cooling, the Southern Hemisphere experienced only a slight slowdown of warming.) However, the increasing influence of greenhouse gases has now won out over the more slowly increasing influence of aerosols.

Also vying for climatic dominance in the last century was the Sun. Solar activity rose steadily during the early 20th century as it recovered from the 19th-century Dalton Minimum, adding fuel to the otherwise slow warming caused by CO2. But solar forcing flattened out around the 1950s, probably also contributing to the warming pause. Since then, solar forcing has remained relatively constant apart from its usual 11-year cycle.

When all known influences (both natural and anthropogenic) are factored in to give the net forcing, it turns out that the result quite accurately matches global temperature changes over the last century.

Net forcing compared to global surface temperature during 1900-2003. (Source: Skeptical Science, created from NASA GISS data)

The early-20th-century warming is only partly due to greenhouse gases, and partly due to solar activity and a relative lack of volcanic activity. The mid-20th-century cooling is explained by allowing for the effect of aerosols emitted by both humans and volcanoes. Since then, large volcanic eruptions have continued to cause temporary dips in global temperature, but without reversing the warming trend. Only the anomalously high temperatures of the early 1940s remain a mystery (though it has been suggested that these are an artifact of changing measurement methods during World War II).

So in conclusion, the temperature changes observed during the 20th century are explained by the interaction of multiple forcings, not just CO2. However, the latter has now emerged as the dominant forcing, and it’s set to continue growing as we keep adding CO2 to the atmosphere.

Update 19 July 2010: I recently came across an interesting post on Skeptical Science about mid-century cooling. It turns out that during that infamous mid-century period, although the global dimming effect of aerosols masked the global warming effect of greenhouse gases in the daytime, the minimum temperature during the night (when there is no sunlight to block) continued to rise. In the late 20th century, as greenhouse gases became a greater climatic force than aerosols, maximum (daytime) temperatures began rising at nearly the same rate as minimum (nighttime) temperatures. (However, we should not conclude from this that the subsequent warming was caused by global brightening. The dimming trend did not reverse until about 1990, when greenhouse warming was already well underway; and despite the recent brightening, there was still a net dimming trend from 1960 to 2010.)



    • rogerthesurf on 30 April 2010 at 15:12

    I bet you won’t publish this comment, however don’t worry, I will post it under your URL on
    However my readers will be eagerly waiting for your reply.
    Feel welcome to check it out.

    Anyway here is some interesting info re world co2 levels

    For some reason the IPCC and associated agencies and scientists in their graphs of atmospheric CO2 are using proxies til 1957.
    Proxies, as you are no doubt aware, are in this case the calculation of CO2 by examining ice cores or tree rings, and are obviously subject to an unknown margin of error.
    An example is “Global Temperature & CO2 Concentration Since 1880. Data from NOAA’s National Climate Data Center (NCDC) & Oak Ridge National Laboratory.” (Note the use of ice core proxy until 1957)

    Surprisingly, there appears to be authoritative data on world CO2 levels, scientifically recorded directly from as early as 1812.

    Here are the direct measurements of atmospheric CO2 taken since 1812.

    Notice a few “minor” differences?

    Here is a summary of the paper explaining the direct measurements.

    Note the stark difference in the measurements.

    Question: If these data have been available all along, and there can be no doubt that the accuracy has to be infinitely superior to that of proxies, why are they not used by the IPCC?

    Hope you find this interesting

    Cheers Roger

    1. There are all sorts of problems with those old measurements, which you can read about here.

      Firstly, those measurements were generally not made with global CO2 levels in mind. For example, one method was designed to measure the level of CO2 in exhaled air, which is on the order of parts per thousand rather than parts per million. The surrounding air was used to calibrate the instrument, and if it reported any value between 200 and 500 ppm then the instrument was considered accurate!

      Secondly, local CO2 measurements vary with elevation and wind speed during a single day, only getting close to the background level when winds are stronger. The bias is usually upward, so averaging such measurements gives unrealistically high numbers. This means the minimum values are likely to be closer to the truth, and indeed, the minimum values given by the historical measurements are mostly below the level of 310 ppm that we would expect to see from ice cores and Mauna Loa.
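      The effect of this upward bias is easy to demonstrate with a toy simulation (all numbers below are invented for illustration, not real measurements): the mean of contaminated local readings overshoots the true background, while the minimum stays close to it.

```python
import random

random.seed(42)
background = 310.0  # assumed true background CO2 level, ppm

# Each local sample is the background plus a contamination term
# that is almost always positive (combustion, respiration, soil)
# and only vanishes when strong winds mix the air thoroughly.
samples = [background + max(0.0, random.gauss(25, 30)) for _ in range(200)]

mean_reading = sum(samples) / len(samples)  # biased upward
min_reading = min(samples)                  # near the background

print(round(mean_reading, 1))
print(round(min_reading, 1))
```

      Averaging only helps when errors are symmetric; a one-sided contamination term shifts the mean but barely touches the minimum.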

      The inaccuracy of the old measurements was precisely the reason for monitoring CO2 from Mauna Loa in the first place.

      Finally, there’s no way that 100+ ppm of CO2 could have just disappeared from the atmosphere in less than a decade, as is supposed to have happened during the 1940s. A change on that kind of scale should take millennia (at least in the absence of human interference). Also, doesn’t it seem pretty odd that as soon as scientists started making accurate background measurements at Mauna Loa, CO2 levels stopped fluctuating wildly and changed by less than 3 ppm per year?

      All this strongly suggests that the ice core measurements really are fairly close to reality.


    • Paul Pierett on 30 April 2010 at 21:04

    Chemicals do not warm the earth. At best they retain temperature.

    The drop in temperatures was due to a rather flat sunspot cycle from 1964 to 1975. That cycle sat between the incline of three cycles that began in 1934 and a decline in cycle that ended in 2007.

    We are now in a solar minimum that will drop temperatures to unknown numbers. Severe winters are ahead, with short or no summers roughly above the latitude through St. Louis.

    Thought you should know.


    1. That seems pretty unlikely. Even if the Sun went into another Maunder Minimum the effect would still be completely overwhelmed by greenhouse gases.

    • rogerthesurf on 3 May 2010 at 13:03


    Thanks for your reply.

    I read through your links and the references thereon.

    The point I am making in my comment is

    1. This direct data exists.

    2. The data has a well-published margin of uncertainty/error.

    3. Why did the IPCC ignore this data without any explanation?

    There are also two serious assumptions in your comment.

    1. You are saying in your comment “there’s no way that 100+ ppm of CO2 could have just disappeared from the atmosphere in less than a decade”, which is scientifically a very dangerous thing to do, but admittedly it appears to be a common philosophy in present-day science. That is, you ignore data that does not fit your expectations or model.

    2. You are without question, accepting the accuracy of the ice cores.

    All data has its problems, including ice core data.

    I spent some time searching for any publication of the calculated error or uncertainty of the ice core records, especially with respect to CO2 (such as are published for the direct measurements in Beck’s papers).

    I found no such figure; if you come across such figures in a proper research paper, please let me know.

    The refusal/neglect to publish any study of the uncertainty of this data is deeply suspicious in itself. I personally think that this is incredibly unscientific and misleading. I can only conclude that there are absolutely no calibration studies carried out on the ice core data used by the IPCC, and therefore the accuracy is completely unknown.

    Ice-cores suffer from several problems. The major one is the chronological problem. It is difficult to calculate which years the trapped gases refer to.

    The above is serious enough in itself, but CO2 in ice core samples, especially being a water-soluble gas, suffers from diffusion and can be dispersed an unknown distance through vertical layers etc. And worst of all, the ice cores are subject to contamination from the very drilling, transport and storage process. See Steig 2008.

    I am aware of only one other study of these problems, which is quite old but no doubt still relevant today.

    Oeschger, Stauffer, Neftel, Schwander and Zumbrunn

    On page 3 and elsewhere the authors discuss how the released data were selected from the scatter of data provided by the ice cores. Note the scatter is as much as 170 ppmv, but the policy is to use the minimum values; near the conclusion, the reason given for this policy is that it gave the best fit to the currently known data.

    As I said, all data has its problems and all data, their methods of extraction and use should be well published and be available for discussion.

    My criticism is not intended to imply that any data is necessarily incorrect, but to point out, as I said above, that all data has its problems. The apparent failure of the IPCC to take into account earlier direct data, and its reliance entirely on ice core data (which in turn appears influenced by contemporary estimations of CO2 concentrations, obviously not of the direct kind), without some rigorous scientific explanation, is in my opinion pretty serious.



    1. Perhaps I should have said “there’s no *known* way that 100+ ppm of CO2 could have just disappeared from the atmosphere in less than a decade”. I am aware that the history of science contains many examples of ideas that were initially dismissed because they were thought to be impossible. (In fact, this happened with AGW — in the early 20th century climate scientists thought that CO2 was saturated, so our emissions couldn’t cause global warming, but we now know this is not true.) But still, I think the evidence has to be a bit more solid before we should accept that such a thing happened in the 1940s.

      Also, it’s not just one ice core that the IPCC relies on, but several different ones all in broad agreement.

    • rogerthesurf on 6 May 2010 at 09:56

    I’m sure all ice core samples face the same issues.
    Why they choose to use only the lower values instead of doing some sort of regression analysis on the total result is definitely a worry to me.

    I take it you do realise that had they used the median values instead of the lowest ones, they would have had similar results to Beck’s published results, which according to his paper are compiled from 137 yearly averages drawn from 175 different scientific papers.
    Jaworowski also roundly criticises the ice core results. It’s worth a read, but he does exhibit some bias, I think.

    Anyway I agree that these papers are disturbing and should make us generally a lot more critical about what we read.



    • Jim Shewan on 8 May 2010 at 07:05

    First visit to this site and I must say I’m impressed.
    Quite funny how “Roger the Smurf” started out with an absolutely devastating demolition of the accuracy of the CO2 record, one he knew would not be published because it completely destroyed conventional climate science, but has now shifted the goalposts and thinks that there must be some sort of unspecified error somewhere in the ice core record.

      • rogerthesurf on 8 May 2010 at 08:12


      I never tell anyone what to believe, I simply point to evidence and holes in logic etc.

      I suggest you read all the references carefully and come to your own unbiased decision.

      Just note that the ice core data does seem to cover the direct measurements as collated by Beck but for some reason the data has been selectively cherry picked to fit in with other assumptions.

      I still think that is very devastating. What do you think?



      1. Roger,

        All scientific measurements have uncertainties attached to them. If anything, the fact that scientists have identified the main uncertainties (and can quantify most of them) gives me more confidence in the measurements, not less. Steig mentions nothing that suggests an uncertainty of 100 ppm or more. But he does say that the timescale uncertainty in a 200-year ice core is only +/- 2 years! That sounds pretty good to me.

        Whatever uncertainties remain, I’m sure the science of ice cores has progressed considerably in the last 30 years, so the 1982 paper is probably outdated. Still, reducing uncertainty can only be a good thing, and I expect we will have even better measurements in the future.

        Your last link appears to come from the “Executive Intelligence Review”, and includes an ad for a magazine called “21st Century Science and Technology”. Both are published by a political group led by Lyndon LaRouche, who subscribes to a number of bizarre conspiracy theories. Among other things, he believes that science is controlled by a British priesthood; that the Black Plague was caused by Italian bankers, that Hitler was put in power by the British; that the Beatles and LSD were psychological weapons; that 9/11 was an inside job; and that the world is secretly controlled by an elite and evil group of anti-human aliens, who are attempting to censor and assassinate LaRouche. I could go on.

        So where should we get our scientific information from — scientific organizations and reports, or conspiracy theory magazines? Hmmm, tough choice, but I think I’ll go with the scientific sources.

    • Jim Shewan on 8 May 2010 at 15:22

    How anyone can take drivel like that seriously is beyond me. That hoary old denialist point about the temperature leading CO2 was brought up again. How is CO2 expected to lead temperature changes brought about by Milankovitch cycles and solar variation?
    Of course, if you look at historical CO2 measurements they are going to be all over the place. Back in Tyndall’s time (1860-70) it was known that if you were downwind of a paddock of sheep the reading would be higher, and a Sunday in the middle of a city produced lower readings than a weekday. Also, looking back at historical readings, we often don’t know the context in which they were taken. Isn’t it strange that all these anomalous readings happened before a comprehensive monitoring network was put in place, and now the increase is a steady one? I must say at least the authors of the paper had the courage to predict cooling temps are on the way. By 2020 it should be fairly obvious whether temps are still climbing or not.

    • rogerthesurf on 10 May 2010 at 09:24


    I think scientific papers should be judged on their content, logic and the strength of their references. Just because a dodgy website has used a paper should not detract from its validity one way or the other. As a matter of fact, I did not find it on that particular site myself.

    Anyway, to get to the real point, the biggest issue is that there appears to be no scientific calculation, estimation, or even opinion on the accuracy of ice core data of any age.
    (Please note that Beck does include a margin of error in his paper with respect to his collation of results.)
    In my opinion, especially seeing the scatter of results from ice core analysis in Oeschger, the absence of a margin of uncertainty in every publication involving ice core data is a serious omission.
    Steig, by the way, mentions 5 areas of uncertainty, and it is not clear whether they together form a cumulative uncertainty or have some other relationship with each other.
    Note in Jaworowski, p. 43, he claims that the data has been moved 83 years in order to become compatible with the Mauna Loa curve. He gives some references in support of this (Friedli et al. 1986, Neftel et al. 1985). Perhaps you should check these out as well; I think they are available from Nature for a small fee.
    Keep looking out for a calculation of uncertainty as well, because I think that is a key thing.



    1. I Googled Jaworowski’s article and found, as I expected, that it was published in the Executive Intelligence Review, in the March 16, 2007 issue. The PDF can be found here and appears to be identical to the one on your website. Not only that, but a similar article was published in 21st Century Science and Technology, in the Spring-Summer issue. Neither the EIR nor 21st Century is a scientific journal; they are magazines that exist to promote the views of the LaRouche movement. The point is that this article hasn’t been published in a peer-reviewed scientific journal.

      As for ice core uncertainties and adjustments, I’ve already said that the uncertainties are probably a lot smaller now than they were in 1982, and if you’re not going to bother following up Jaworowski’s references, then I’m not going to do it for you.

        • rogerthesurf on 14 May 2010 at 08:34


        Any paper should be judged on its references and logic.
        For instance, we had a scandal in my country where some academic published a paper “proving” the Holocaust never happened (and it was peer reviewed, as all PhD papers must be).
        I don’t think any of the holocaust survivors thought much of it though.

        “uncertainties are probably a lot smaller now than they were in 1982”

        Once again you are believing what you like without recourse to facts.

        We should be very sceptical about this until we see published margin of error calculations on ice core data.

        Oh, talking of peer reviews, check out this site which describes an audit they did of IPCC reporting references etc.

        Yes I know you will say the site is unreliable because it generally disagrees with your beliefs, but try and keep an open mind as you check out the methodology and check out the research before you form an opinion.



        1. Roger, in the early 1980s, paleoclimatologists were only just starting to look at CO2 in ice cores. In the first ice core records, they found a correlation between temperature and CO2 levels over the last several hundred thousand years. In more recent records they found, as you probably know, that temperature changes led CO2 changes by several centuries. (The reason for this is discussed here.) Surely this represents a narrowing in ice core uncertainty.
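          As a side note, the lead/lag between two records is typically estimated by sliding one series against the other and finding the offset with the highest correlation. Here is a toy sketch using entirely synthetic data with a made-up lag of 8 steps:

```python
import math

# Synthetic series: "co2" is a delayed copy of "temp",
# mimicking (with made-up numbers) CO2 lagging temperature.
n, true_lag = 400, 8
temp = [math.sin(0.05 * t) + 0.3 * math.sin(0.21 * t) for t in range(n)]
co2 = [temp[t - true_lag] if t >= true_lag else temp[0] for t in range(n)]

def correlation(a, b):
    """Pearson correlation of two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

# Shift CO2 back in time by each candidate lag and see which
# alignment correlates best with temperature.
best_lag = max(range(0, 30), key=lambda k: correlation(temp[:n - k], co2[k:]))
print(best_lag)  # prints 8
```

          The same sliding-correlation idea, applied to real ice core records, is how a lag of several centuries can be read off the data.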

          On the subject of peer review: I don’t know what kind of peer review they have in historical journals; in any case, peer review doesn’t mean that a paper is necessarily right, just that it is publishable. But that’s beside the point as your reference wasn’t peer-reviewed anyway.

          I already know about the IPCC “audit” and have read some of the results. I don’t know how good their rating system is, but I must say I was surprised. I certainly didn’t realise the IPCC relied so heavily on “grey literature”; perhaps there is a case for tighter procedures. However, let me add some perspective: these contrarians gave an A to nearly all of the chapters in WG1 (the main report, about the physical science). One blogger has calculated that the WG1 report gets its A because 93% of the 6,226 sources it cites are peer-reviewed.

    • rogerthesurf on 16 May 2010 at 09:00


    To have 431 references that are not peer reviewed in an IPCC chapter, considering their claims and the resources at their disposal, is pretty shocking by anyone’s standards. The people at No Consensus are being very generous, I think. I think the blogger you cited spent all of 10 minutes coming to his conclusion.

    As for the ice cores, I might change my opinion, IF someone actually publishes a paper showing proper calculation of the margin or error/uncertainty for their conclusions.



    • Jim Shewan on 16 May 2010 at 09:18

    Not shocking provided it is made clear where the sources come from. Various government reports from the IPCC members are not classed as peer reviewed but often represent the most up to date information which in a lot of cases is only easily available to govt agencies for publication.

    • rogerthesurf on 16 May 2010 at 12:16


    Yeah, and they live up to their claims right?

    “People can have confidence in the IPCC’s conclusions…Given that it is all on the basis of peer-reviewed literature.” – Rajendra Pachauri, IPCC chairman, June 2008

    “The IPCC doesn’t do any research itself. We only develop our assessments on the basis of peer-reviewed literature.” – Rajendra Pachauri, IPCC chairman, June 2007

    “This is based on peer-reviewed literature. That’s the manner in which the IPCC functions. We don’t pick up a newspaper article and, based on that, come up with our findings.” – Rajendra Pachauri, IPCC chairman, June 2008

    “As IPCC Chairman Rajendra K. Pachauri recently stated: ‘IPCC relies entirely on peer reviewed literature in carrying out its assessment…’” – US Environmental Protection Agency, December 2009

    “When asked if the discussion paper could be taken into consideration…[Pachauri] said, ‘IPCC studies only peer-review science. Let someone publish the data in a decent credible publication. I am sure IPCC would then accept it, otherwise we can just throw it into the dustbin.'” – Times of India, November 2009

    “This is the key document on climate change, and from now on you can forget any others you may have read or seen or heard about. This is the one that matters. It is the tightly distilled, peer-reviewed research of several thousand scientists” – Irish Independent, November 2007

    “Make no mistake about how central the IPCC is to the global warming debate. The IPCC’s reports are why ours and other governments…are calling for reductions in greenhouse gas emissions…[those] attacking the IPCC…have never researched nor published any climate science in peer-reviewed journals – and peer review is how science works.” – ABC News, Australia, November 2009

    “The IPCC bases its work on papers that have been published in the peer-reviewed scientific literature.” – The Economist, December 2009

    “The IPCC does not do scientific research itself, but builds its assessments on peer-reviewed and published scientific papers. ” – ABC News, Australia, February 2007

    “The IPCC relies on the peer-reviewed scientific literature for its conclusions, which must meet the rigorous requirements of the scientific method…” – Joe Romm, February 2008

    “The first phase [of the IPCC report] will be released in Paris next week…The report will draw on already published peer-reviewed science.” – CBC News, Canada, January 2007

    “Without a strong, peer-reviewed science base [provided by the IPCC]…the case for action on climate change would not be as unequivocal as it is today.” – Ban Ki-Moon, United Nations Secretary General, August 2008

    “[The IPCC report] used only peer reviewed published science…” – Associated Press science writer Seth Borenstein, February 2007. This story appeared in newspapers large and small in countries that included Russia, Canada, and the United States.

    “The [IPCC] report will draw on already published peer-review science.” – Associated Press science writer Seth Borenstein, January 2007. This story appeared in newspapers in countries that included South Africa and the United States.

    “The knowledge of climate change contained within peer-reviewed scientific publications is periodically assessed by the Intergovernmental Panel on Climate Change.” – Science, November 2008.

    “Journalists must follow basic principles for screening evidence – making sure, for example, that scientific research is properly peer reviewed. The Intergovernmental Panel on Climate Change (IPCC) is a particularly valuable source of information on climate change…” – The New Nation, Bangladesh, September 2009
