Tag Archives: literature

Delingpole interview on the success of polar bear conservation & failed survival models

Tomorrow I will be giving a public lecture in Paris on polar bear conservation success and the spectacular failure of the polar bear survival models used to scare children senseless.

However, while I was in London a few days ago I spoke with James Delingpole, author and columnist at Breitbart, who has recently taken to producing podcast and video interviews.

[Image: Chukchi Sea polar bear, early August 2018. Photo: A. Khan, NSIDC]

Yesterday, he posted a column summarizing our discussion, with a link to the entire podcast: “WATCH: Canadian Professor Lost Her Job for Telling the Truth About ‘Endangered’ Polar Bears.” Read it here.


Harvey et al. attack article mum on real selection process for polar bear papers used in their analysis

The Harvey et al. BioScience article that attacks this blog and others that link to it (a veritable tantrum paper that took 14 people to write) included a sciency-looking analysis of peer-reviewed articles said to have been retrieved from the Web of Science database using the search terms "polar bear" and "sea ice."

[Temper-tantrum graphic]

“Consensus science pounds the floor and chews the carpet in angry frustration.” [mpainter, 25 December 2017]

Other critics have pointed out that the Harvey paper used 92 such references:

“Of the 92 papers included in the study, 6 are labeled ‘controversial.’ Of the remaining 86, 60 are authored or co-authored by Stirling or Amstrup, or Derocher. That is, close to 70% (69.76%) of the so-called ‘majority-view’ papers are from just three people, 2 of whom wrote the attack paper themselves.” [Shub Niggurath, crossposted at Climate Scepticism, 14 December 2017]

The bias toward the co-authors' own papers in the set used to represent the "expert consensus" on polar bear biology is only one problem with this attempt to make the Harvey paper look like science: the short list of papers actually analyzed is a far cry from the original number returned by Web of Science for the search terms the authors say they used in the supplementary information.

How that large original number (almost 500) was whittled down to fewer than 100 is not explained by the authors, which leads me to conclude that the "methodology" for paper selection was likely defined after the fact. The stated method of paper selection sounds simple and reasonable, but apparently not one of the Harvey et al. co-authors checked whether it was plausible (or cared if it was not).
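For readers who want to check this kind of tally for themselves, here is a minimal sketch in Python. It is emphatically not the Harvey et al. procedure (which is undocumented); it simply counts how many records in a Web of Science CSV export list Stirling, Amstrup or Derocher among the authors, which is how a figure like the roughly 70% quoted above could be verified. The file name ("wos_export.csv") and the "Authors" column header are assumptions about the export format.

```python
# Hypothetical sketch only: NOT the (undocumented) Harvey et al. selection procedure.
# Counts how many records in a Web of Science CSV export list Stirling, Amstrup
# or Derocher among the authors. File name and column header are assumptions.
import csv

TARGET_AUTHORS = ("stirling", "amstrup", "derocher")

total = 0
with_target_author = 0

with open("wos_export.csv", newline="", encoding="utf-8") as f:
    for record in csv.DictReader(f):
        total += 1
        authors = record.get("Authors", "").lower()
        if any(name in authors for name in TARGET_AUTHORS):
            with_target_author += 1

if total:
    share = 100 * with_target_author / total
    print(f"{with_target_author} of {total} records ({share:.1f}%) "
          f"list Stirling, Amstrup and/or Derocher as an author")
else:
    print("No records found in wos_export.csv")
```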
