Ars Technica wrote:

"The authors simulated the behavior of Thwaites using a number of different melting rates. These ranged from a low that approximated the behavior typical in the early 90s, to a high rate of melt that is similar to what was observed in recent years. Every single one of these situations saw the Thwaites retreat into the deep basin within the next 1,000 years. In the higher melt scenarios—the ones most reflective of current conditions—this typically took only a few centuries."

I read the article and saw how they constructed their models. For starters, they bias the data by selecting low/high melt rates from a very narrow range (the early '90s to "recent years"). Then, within that range, they cherry-pick the rates they like and build the model around that variable, a "variable" they then hold constant for the whole run to create their future projections. Finally, they go to press with the results that fit their original assumptions. The only problem is that nothing in nature stays constant. That, and the fact that every model carries the bias of its original assumptions, so all models are flawed out of the starting gate, in more ways than one.
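To make the objection concrete, here's a toy sketch (my own illustration, not the paper's actual model, and the rates are made-up numbers): once you pick a melt rate and freeze it for the whole run, the projected timescale is a foregone conclusion of that single choice.

```python
# Toy projection: march a hypothetical ice front back through a fixed
# buffer using a melt rate that never varies -- the assumption at issue.
def years_to_retreat(melt_rate_m_per_yr, buffer_m=20_000.0, max_years=1000):
    """Count years until the front retreats through buffer_m of ice."""
    retreated = 0.0
    for year in range(1, max_years + 1):
        retreated += melt_rate_m_per_yr  # constant rate, every single year
        if retreated >= buffer_m:
            return year
    return None  # no retreat within the simulated window

low = years_to_retreat(25.0)   # "early-90s"-style rate (invented number)
high = years_to_retreat(80.0)  # "recent-years"-style rate (invented number)
```

Swap the constant and you swap the headline: the low rate takes centuries longer than the high one, and nothing inside the loop can tell you which constant nature will actually use.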

I have constructed and worked with a number of models over the years, using the best available software, and I know a few things about the accuracy of projected results. In several cases I had the opportunity to monitor the actual results and reconcile them back to the original model. None of them were within a prediction range acceptable to bean counters, which is why most modelers get to work for lots of different companies.

On average, simple models can produce an accuracy of plus or minus 15% over a short range. More complex models can turn into nothing more than witchcraft very quickly, because the interaction of a higher number of variables can compound error exponentially. On top of that, I don't give a s**t who you are: you don't know what the box is doing with the iterations between input and spit-out, because it's too complex to check.
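A back-of-the-envelope way to see the compounding (my numbers, purely illustrative): if each iteration carries even a 1% multiplicative error, the worst case grows geometrically with the number of steps.

```python
# Worst-case drift of a small per-step multiplicative error over n steps.
# A 1% error per iteration is tiny; compounded over 100 iterations it is not.
def worst_case_drift(steps, per_step_err=0.01):
    return (1.0 + per_step_err) ** steps

# worst_case_drift(100) is about 2.70: a 1% step error can balloon past 170%.
```

That's with one error source behaving politely. Add interacting variables and the bound only gets worse.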

Predicting a distribution of gold grades in a 2,500' x 2,500' x 500' cube with a million data points should be a hell of a lot easier than predicting complex weather over a range of 10, 50, or 100 years.

It isn't.
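For reference, the simplest flavor of what a grade model does is inverse-distance weighting, a crude stand-in for the kriging typically used in practice (the function and sample data here are my own made-up illustration, not from any real deposit):

```python
# Minimal inverse-distance-weighting interpolation: estimate the grade at a
# point from nearby drill-hole samples, weighting closer samples more heavily.
def idw_grade(samples, x, y, z, power=2.0):
    """samples: iterable of (sx, sy, sz, grade) tuples."""
    num = den = 0.0
    for sx, sy, sz, grade in samples:
        d2 = (sx - x) ** 2 + (sy - y) ** 2 + (sz - z) ** 2
        if d2 == 0.0:
            return grade  # exactly on a sample point
        w = d2 ** (-power / 2.0)  # weight falls off as 1 / distance^power
        num += w * grade
        den += w
    return num / den

# Midway between two equidistant samples, the estimate is their average.
samples = [(0, 0, 0, 1.0), (10, 0, 0, 3.0)]
estimate = idw_grade(samples, 5, 0, 0)  # 2.0
```

Even this static, data-rich interpolation carries estimation error you only discover at the mill. A climate model has all of that plus time, feedbacks, and forcing you have to guess at.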

If you don't believe me, ask your pal Gunnar how easy it is. If he's had any experience with modeling, I'm sure he isn't betting his life on any of them.

All of these predictions of future Armageddon are completely bogus, because it's impossible to build complex models that will predict the future any better than a carney with a crystal ball.