Old 12-04-2009, 12:13 AM   #244
PKFFW
Wizard
PKFFW ought to be getting tired of karma fortunes by now.
 
Posts: 3,791
Karma: 33500000
Join Date: Dec 2008
Device: BeBook, Sony PRS-T1, Kobo H2O
Quote:
Originally Posted by XNN
I agree with most of this statement, although I would argue that the climate modeling/climate observation sides of the equation are perhaps equally useful in climate science. The failure of models has, with targeted or improved observations, often led to new insight into physical climate processes, thus leading to better models, etc.

Climate models, or at the very least, the atmospheric general circulation modeling components of climate models, are quite thoroughly verified on a day-to-day basis... as weather prediction models. There has been a clear and documented improvement in forecast skill from numerical weather prediction models over time -- all jokes about weather prediction aside -- and these improvements feed into the climate models. There is a long way to go, but progress is being made.

Another thought. While fully coupled earth systems models are beyond the capability of individual researchers to run -- the complexity and computer resources are immense -- models of intermediate (and lesser) complexity are available to pretty much anyone. A skeptical climate scientist could, for example, code up and insert their own set of cloud feedback processes into such a model, run a suite of simulations, generate results, and write them up.

The point here is that I don't see a raft of simulations hitting peer review that contradict results from the current "consensus" of climate modelers, despite the fact that such results would be quite publishable if they seemed reasonable. Having been involved in the review process (as a reviewer) for a fair number of papers (say 10-20 per year) I haven't personally come across any quashing of well reasoned scientific experiments and simulations in climatology. (The UEA email messages on this point are a bit disturbing to me.)
Totally agree that modelling has improved over the years and can be (probably is) very good and accurate. I'm sure it can aid our understanding and further our knowledge too, if used correctly.

You say that the failure of models has, with targeted and improved observations, led to new insight into physical climate processes. I have a couple of questions.
1: I assume by "fail" you mean it comes up with a result that seems totally wrong or is complete gibberish? Or is there some other type of failure, like the program freezing? (Honest questions: I thought a model simply told you what was supposed to happen, so if the model says XYZ will happen, how can that be a "fail" unless we already know what will happen, and in that case why do we need the model?)
2: These targeted and improved observations, are they of the physical processes of the climate, or of the model? I mean, when the model fails, do you just go over the model and tweak it, adjusting values and such, until it doesn't fail any more? (And then you have to ask whether the model was correct and only "failed" because it didn't do what we wanted, or whether it was actually broken; see question 1 above.) Or do you go out and observe the climate some more, and try to work out what is going on in the climate that isn't accounted for in the model and caused it to fail?
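For what it's worth, here's my rough understanding of what a "failure" usually means in model verification, sketched as a toy calculation (every number here is invented purely for illustration, not from any real model): the model runs fine as a program, but its output scores poorly against observations.

```python
import math

# Hypothetical toy sketch (made-up numbers): a model "failure" usually means
# poor skill against observations, not a program crash.
days = range(365)
observations = [15 + 10 * math.sin(2 * math.pi * d / 365) for d in days]  # toy observed annual cycle
model_output = [15 + 8 * math.sin(2 * math.pi * d / 365) for d in days]   # toy model with a biased amplitude

# Root-mean-square error of the model against the observations
rmse = math.sqrt(sum((m - o) ** 2 for m, o in zip(model_output, observations)) / 365)

# Trivial baseline: just predicting the observed mean every day
mean_obs = sum(observations) / 365
baseline_rmse = math.sqrt(sum((mean_obs - o) ** 2 for o in observations) / 365)

# Skill score: 1 is perfect; 0 or below means the model is no better
# than the trivial baseline, i.e. it has "failed"
skill = 1 - rmse / baseline_rmse
print(f"RMSE = {rmse:.2f}, skill = {skill:.2f}")
```

So a "failed" model can still produce plausible-looking output; the failure only shows up when you compare it against something.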

I guess my point goes more towards the fact that modelling is, by its very nature, basically an extrapolation of our combined understanding and assumptions, not empirical data in itself. If that understanding and those assumptions are faulty in any way to begin with, then that will influence the results of the modelling.
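To make that concrete with a deliberately oversimplified, made-up example (every number and formula below is invented for illustration): a model built on a faulty structural assumption can fit its calibration data well, and the flaw only shows up when you extrapolate.

```python
# Hypothetical sketch (all numbers invented): the "real" process here is
# slightly nonlinear...
def truth(x):
    return 0.05 * x ** 2 + x

xs = [i * 0.2 for i in range(51)]   # calibration range: 0 to 10
ys = [truth(x) for x in xs]

# ...but the model assumes it is linear, and fits a line by least squares.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

# Within the calibration range, the wrong model looks fine...
max_fit_error = max(abs(slope * x + intercept - truth(x)) for x in xs)

# ...but extrapolated well outside it, the faulty assumption dominates.
prediction = slope * 30 + intercept
print(f"max fit error on past data: {max_fit_error:.2f}")
print(f"prediction at x=30: {prediction:.1f}, actual: {truth(30):.1f}")
```

The fit looks good over the data we checked it against, so nobody questions the linear assumption, and the error only appears far outside the calibrated range, which is exactly where climate projections live.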

As my brother's experiments show, if we don't know the modelling program is fundamentally flawed because of one or more of our assumptions, then we won't even question the outcomes of the models, will we? (Still waiting for links from him; he should be awake in another couple of hours.)

Just something to consider when claiming that the science is "settled" is all.

Cheers,
PKFFW