The US Government has asked two scientific journals to censor data on bird flu. Nature and Science were asked by the US National Science Advisory Board for Biosecurity to publish redacted versions of studies by two research groups that suggest the H5N1 avian flu could spread quickly among humans. The laboratory-made version of bird flu covered in the data could easily jump between ferrets — a sign a mutated form of the virus could spread among humans. The journals are objecting to the request, saying it would restrict access to information that might advance the cause of public health.
Read more about censorship and science in “Dark Matter”, the latest issue of Index on Censorship magazine. You can also read the entire issue for free (until 22 December) on our Facebook page.
Open access now
Studies to test the safety and efficacy of drugs and medical devices are too often never made public, putting lives at risk. Deborah Cohen, Head of Investigations at the British Medical Journal, reports
Transparency is at the heart of medical science. Every day decisions are made about when to stop and start treatment and how best to invest large sums of money in ways to protect the public from disease. All these rely on knowing as much as possible about the benefits compared to the risks of action or inaction.
No medical treatment is perfect or suitable for everyone — that’s why balancing risks and benefits is crucial. But healthcare is big business; it’s where science meets big money and not all research evidence makes it into the public domain — specifically into medical journals where doctors and academics glean their information.
Medical history is replete with examples of the benefits of a treatment being overhyped and potentially serious side-effects being buried, leading to poor decisions. This wastes public money and can cost lives.
Take the case of the drug lorcainide, used to regulate the heartbeat during a heart attack. In the early 80s, researchers in Nottingham carried out a study of the drug in 95 people using a method known as a randomised controlled trial. They noticed that nine out of the 48 people taking the drug died, compared to only one out of 47 who got a sugar pill, or placebo, instead.
At the time, the researchers thought that the high number of deaths in those given lorcainide might have been due to chance rather than the drug itself. For commercial reasons, the drug was not developed any further and the results of the trial were never published. However other, similar, heart drugs did make it onto the market and were widely used. But they too had serious safety problems and many were withdrawn.
According to Sir Iain Chalmers, a long-standing champion of transparency in medical research, the lorcainide trial might have been an early warning of trouble ahead for these other heart drugs. At the peak of their use in the late 80s, these medicines are estimated to have caused between 20,000 and 70,000 premature deaths every year in the US alone.
This is a particularly stark example of what might happen when critical evidence remains unavailable to doctors and researchers. Even when individual drugs do make it onto the market and have overcome the regulatory hurdles, information about their risks and benefits might well be hard to come by.
In western countries, legislation dictates that companies have to provide regulators with a thorough scientific dossier on all trials conducted on a drug so the data can be scrutinised before the drug is allowed onto the market. They are then required to do follow-up studies looking at any adverse reactions that might not have been picked up in the pre-market research. They must inform the authorities about what they find.
Many companies, however, have been reprimanded — mainly in the US courts — for hiding troubling side-effects of drugs, including antidepressants such as Seroxat (known as Paxil in the US; generic name paroxetine) and painkillers such as Vioxx (rofecoxib).
But it’s not always the companies which are unforthcoming about safety concerns; the regulators have dragged their feet too. Last year, the diabetes drug Avandia (rosiglitazone) was suspended from the market in Europe and severely restricted in the US because of an increased risk of heart problems. But this was long after both the manufacturer, GlaxoSmithKline (GSK), and the US regulator had reason to suspect an increase in serious side-effects.
Rather than the regulators — whose remit is to protect the public — it was the actions of the then New York attorney general, Eliot Spitzer, in a 2004 court case over GSK’s Seroxat, that led to the side-effects of Avandia coming to public attention. As part of a settlement with the state over its hiding of data on heightened suicide risk in teenagers who took the drug, GSK agreed to post results from its recent clinical studies on a website. And this included studies of the drug Avandia, many of which had been unpublished until then.
Three years later, Dr Steven Nissen, chairman of cardiovascular medicine at the high-profile Cleveland Clinic in the US, decided to analyse all the studies of Avandia on the website. Using a research method called meta-analysis, he pooled all the results together to see what they said overall. He found that people with diabetes who took the drug had a 43 per cent higher risk of heart problems than those with diabetes who did not take it.
The following years entailed investigations into GSK’s conduct by the US Senate; intense deliberations by national drug regulators; questions about how we regulate medicine; and now pending class actions. But what really broke the case open was enforced transparency.
‘It’s important to realise what an important role publicly available trial results data played in the rosiglitazone story’, said Jerry Avorn, professor of medicine at Harvard Medical School.
During an investigation in collaboration with BBC’s Panorama in September 2010, the British Medical Journal looked into the different drug regulators’ attitudes towards transparency. In the US, the Food and Drug Administration’s (FDA) advisory committee discussions are held in public in front of the national press. Most of the relevant scientific documents are made available on a website in advance. Before the deliberations start, each panellist is required to declare any conflicts of interest in line with US legislation to increase transparency.
But gaining an overall perspective of discussions within the European and UK regulators was far trickier. The BMJ attempted to speak to people who had sat on panels for them both, but they were bound by confidentiality clauses. Nor would Europe’s regulator release the names of the members of the scientific advisory group discussing the drug under the Freedom of Information Act (FOIA).
Doctors and the public in the UK had not been told that the national regulator had voted unanimously to take Avandia off the market several months before the European agency came to the same decision. If the European vote had gone the other way, who knows if the views of the UK’s panel would ever have been revealed.
Some say that open discussions and more transparency do not necessarily lead to better decisions. But documents obtained from the European regulators under the FOIA showed that advisers had concerns about Avandia’s side-effects from the outset. And knowing about these could have lent support to other academics who were ‘intimidated’ by the company, according to a 2007 report by the US Senate Finance Committee.
In 1999, when the drug was first licensed, Dr John Buse, a professor of medicine at the University of North Carolina who specialises in diabetes, told attendees of academic meetings that he was concerned that while Avandia lowered blood sugar, it also caused an increased risk of heart problems.
Concerned about the effect his comments would have on a drug that had been touted for blockbuster status, executives at GSK (then SmithKline Beecham) devised “what appears to be an orchestrated plan to stifle his opinion”, the Senate Finance Committee report stated, on the basis of internal company documents it had seen.
The report goes on to state that GSK executives labelled Buse a “renegade” and silenced his concerns about Avandia by complaining to his superiors and threatening a lawsuit. GSK prepared and required Buse to sign a letter claiming that he was no longer worried about cardiovascular risks associated with Avandia. Then, after he signed the letter, GSK officials began referring to it as Buse’s “retraction letter” to curry favour with a financial consulting company that was evaluating GSK’s products for investors. GSK has denied all allegations in the report, describing them as “absolutely false”.
Years later, Buse wrote a private email to a colleague detailing the incident with GSK: “I was certainly intimidated by them. … It makes me embarrassed to have caved in several years ago.”
Meanwhile, over on the other side of the Atlantic, EU drug agencies were drawing similar conclusions that the drug increased the risk of heart problems during their premarket discussions. In March 2000, Buse sent a letter to the FDA, saying Avandia might raise patients’ risk of heart attacks, and he criticised the company’s marketing, saying it employed “blatant selective manipulation of data” to overstate the drug’s benefits and understate its risks. Doctors may not have prescribed the drug if they had known from the outset there were issues around its safety.
But data transparency doesn’t just mean exposing harm done; it can also help to establish how well something works — and that reported benefits aren’t just hype. Major international decisions on how best to tackle impending health crises are made on the basis of how well a medical intervention works as reported in journals — for example, the UK government’s decision to stockpile the influenza drug Tamiflu.
Back in 2009, during the swine flu pandemic, the internationally respected Cochrane Collaboration, a network of independent academics, was commissioned by the NHS to look at the evidence about the benefits and risks of using Tamiflu — a drug the UK had spent around £500m on to treat all those infected in the outbreak.
The academics, led by Christopher Del Mar at Bond University in Australia, scoured the medical literature to find all the different relevant studies of the drug to pool together all the results to see what they said. They were also aware that there had been reports of suicides in Japan — the biggest consumers of Tamiflu — and they wanted to find out more.
But when they went about surveying the medical literature, not all of the trials they knew to exist on the effects of the drug in healthy people had appeared in the medical press. To fairly reflect the evidence, they needed to know exactly what all trials said. But they couldn’t access all the data they needed — the majority of trials were unpublished. This included the biggest, and therefore arguably the most important, trial conducted.
The UK government at the time had based its decision to stockpile Tamiflu in such large quantities on one particular piece of research published in 2003. This paper showed the dramatic benefits of giving Tamiflu to healthy people who got the flu and not just those who were at particular risk of getting sick. It claimed that the drug reduced the number of people taken to hospital with the flu by a half and reduced serious complications by around the same amount. Little wonder that health officials, concerned about the strain on the NHS, stockpiled the red and yellow pills in such vast quantities.
But this piece of research was funded by the drug’s manufacturer, Roche. It relied upon eight unpublished studies, each given code names, and used the company’s own statisticians to draw conclusions about the data. The two independent researchers named on the paper — who are supposed to be accountable for the content of the research — could not produce the unpublished studies when the Cochrane Collaboration asked them.
Medical research relies heavily on the ability to replicate the findings of another piece of research. This helps to show that a finding wasn’t fraudulent or simply due to chance.
But the Cochrane Collaboration couldn’t replicate the 2003 findings. Its calculations based on the publicly available papers were at odds with the claims made and it needed to see the unpublished studies, so it turned to the company.
Despite asking Roche repeatedly for the full complement of research documents showing that Tamiflu would stop so many healthy people from going into hospital, the whole set was never forthcoming. What the company did provide was limited in detail and not what the Cochrane Collaboration needed. Roche did nothing illegal — it is its commercial information. But its commercial information has huge repercussions for public health spending — both in terms of the direct costs of the drug and its distribution, and in terms of what economists call opportunity costs. Half a billion spent on Tamiflu is half a billion not spent on some other wonder drug.
Del Mar and his team were left to wonder if these bold claims really did stack up — and if the unpublished trials really were the best of the lot, why were they unpublished?
What should have been a straightforward exercise to confirm the evidence base for current policy and practice became instead a complex investigation involving the Cochrane Collaboration, the BMJ and Channel 4 News. Not only did this unmask the extent of unpublished data, it found that the person who actually wrote some of the journal papers was never credited — known in the trade as ghostwriting.
This is not the benign undertaking it is in celebrity autobiographies. Commercial medical writing firms team up with drug companies to draft a series of academic papers aimed at medical journals to promote a carefully crafted message. In the case of Tamiflu, it was that the drug helps to reduce serious complications.
The lead author named on the biggest trial — which was unpublished — said that he couldn’t remember ever having participated in the trial when the BMJ/Channel 4 News asked him. And the investigation revealed that documents submitted to Nice (the National Institute for Health and Clinical Excellence) show different investigator names appended to the key Tamiflu trials at different points — nowhere is it totally clear who took overall responsibility for all of the studies.
In a later twist, an investigation the BMJ conducted with the Bureau of Investigative Journalism revealed that experts who had been paid to promote Tamiflu were also authors of influential World Health Organisation (WHO) guidance on the treatment and prevention of pandemic flu. Nowhere were their conflicts of interest made public, despite the WHO having a specific policy to exclude those with such major competing interests from crafting guidelines. And when the scientific evidence pointed to a serious global outbreak of swine flu in early 2009, the WHO pulled together an international expert panel called the Emergency Committee. Keeping up the trend of opacity that had been a recurrent feature of pandemic planning, the committee executed its decisions — which the former health secretary, Alan Johnson, said would lead to “costly and risky” repercussions — behind closed doors in Geneva. An internal WHO investigation conducted by Harvey Fineberg, president of the US Institute of Medicine, criticised the lack of transparency and timely disclosure of conflicts of interest in May last year.
After an inauspicious start — with experts from within the US regulatory agency saying the benefits of healthy people taking the drug were marginal at the outset — Tamiflu sales sky-rocketed. This, coupled with a mild strain of flu and an abject lack of transparency, allowed conspiracy theories to ferment that alleged the WHO was in league with big pharma and had fostered fears of a pandemic in order to boost sales of drugs. And with blogosphere rumours abounding, not only has the WHO’s reputation taken a hit, scepticism might well accompany future warnings of serious flu outbreaks.
Yet again the role of the regulators comes into the spotlight. Roche said that it had supplied all the required data to US and EU regulatory authorities. Only after five months of chasing drug regulators with FOI requests, asking for the full study reports of trials that Roche submitted for its market approval, did the Cochrane Collaboration get some of what it asked for.
“Open access should be the default setting for drug trials once the drug is registered. The public pay for the drug, the public should have access to the facts, not sanitised versions of them”, one of the Cochrane collaborators, Dr Tom Jefferson, said. He believes that drug regulators should make data accessible once a drug comes onto the market. Others suggest that the regulators should also publish the data of drugs that have failed to make it onto the market. That way the situation that happened with lorcainide would be avoided.
This, too, might be helpful for those charged with making decisions about which drugs health services should use, such as Nice. Writing in the BMJ last year, researchers from the official German drug assessment body charged with synthesising evidence on the antidepressant Edronax (generic: reboxetine) reported they had encountered serious obstacles when they tried to get unpublished clinical trial information from the drug company that held the data.
Once they were able to integrate the astounding 74 per cent of patient data that had previously been unpublished, their conclusion was damning: Edronax (reboxetine) is “overall an ineffective and potentially harmful antidepressant”. This conclusion starkly contradicted the findings of other recent studies that pooled the data published in reputable journals.
But the amounts of data submitted to regulators can be voluminous — another reason why overstretched and underfunded drug authorities could benefit from the safeguard of publicly available data that academics could analyse. The Cochrane Collaboration is now in possession of over 24,000 pages to peruse and distil. But this kind of volume doesn’t deter researchers; they are actively asking for it.
In June this year, Medtronic, a medical technology company, drew widespread criticism in the US for its alleged failure in published research papers to mention the side-effects of a spinal treatment it manufactures. Capitalising on the company’s dip in public opinion, Harlan Krumholz, professor of medicine and public health at Yale University, approached Medtronic to take part in a transparency programme for industry that he had set up. He wanted access to all data it had on file — published and unpublished — to commission two independent reviews of it to see what it really said about safety.
“Industry’s reputation has really dropped substantially. People are concerned. They’ve lost confidence and trust in these companies,” Krumholz said, adding: “Marketing has sometimes gotten the best of the companies and there have been some episodes that have tarnished their reputation. So they are in great need to show to the public that they are really interested in the societal good and want to contribute in ways that are meaningful.”
The company obliged and described its move as “unprecedented in the medical industry”. Needless to say, not all companies are keen on having their data analysed by independent researchers. When Krumholz first approached manufacturers asking them to allow the scientific community to vet their data when safety concerns had emerged, he was rebuffed at every turn. Nevertheless, he hopes this will change and transparency will become expected rather than simply celebrated. He hopes his scheme will make it impossible for other companies — particularly when questions are being raised about the safety of their products — to simply say that they are not going to share all the information they have that may be relevant.
But there is a broader ethical aspect to selective publication. People often participate in clinical trials because they want to help grow scientific knowledge. And the very nature of many trials means there is a level of uncertainty about what a drug or device may do — about any potential benefits, and about the risks.
According to Chalmers, those who don’t publish all the studies are betraying the trust of those who have volunteered themselves to medical science. “If a patient takes part in a clinical trial — which is essentially an experiment — they are doing their service to humanity and putting themselves at the disposal of science. Unless patients are explicitly told that the results won’t be published if the trial does not show what the researchers or the company want before they start the trial, there is a dereliction of duty on behalf of the researchers.”
Chalmers is uncompromising on what the fate of doctors who are complicit in the burying of bad results should be — they should face discipline that might include the loss of their right to practise medicine or conduct research. His mood reflects a growing concern about the moral duty of medical scientists to publish their results. Journal editors have railed against what they consider a distortion of the medical literature.
But for many years there has been comparative silence from organisations representing people conducting medical research. In the UK, the charge for transparency has been led by the Faculty of Pharmaceutical Medicine in London. Over a decade ago, it said: “Pharmaceutical physicians have a particular ethical responsibility to ensure that the evidence on which doctors should make their prescribing decisions is freely available.”
In June this year, the Royal Statistical Society followed suit and released a statement saying it is “committed to transparency in scientific and social research”. It said it is “crucially important that the results of scientific research should be made publicly available and disseminated as widely as is practical in a timely fashion after completion of the scientific investigation provided that there is no conflict with any legislation on confidentiality of data”.
Chalmers is critical of organisations that represent people conducting medical research — such as the Academy of Medical Sciences and the Royal College of Physicians — which refuse to sign up to a bill of transparency.
Attempts have been made to limit a researcher’s ability to hide trials that they may not want to come to light. Registers of trials sprang up. In 2005, the International Committee of Medical Journal Editors said its journals would only publish trials that were fully registered before they started — which should make trials that went missing much easier to spot. Then, in 2007, the US implemented legislation to ensure that all trial protocols are listed on a public searchable website called clinicaltrials.gov. Companies are supposed to update the information with changes or highlight when and where their research has been published. But the BMJ has found instances where the information on the website is out of date. And, unless someone goes through the database systematically to identify what studies have surfaced publicly, it’s hard to pin down exactly what impact the register has had on publication bias.
But once again, Europe trails behind in terms of transparency. The names of the trials being conducted in the EU appear on the EudraCT database. But crucial details of the study design and where it’s taking place are not on the website.
If data transparency is an issue for drugs, the opacity surrounding medical device governance is in a different league. Medical devices cover a wide range of products, from adhesive bandages and syringes to heavy-duty implantables, such as hip prostheses, pacemakers and stents.
Representatives of the drug industry marvel at how devices get away with a comparative lack of government and public oversight both in the US and the EU. Debates about the perceived flaws in the US system have been hammered out in public — the media weighing in on what they considered to be a failure of their regulators to protect the public adequately. Front page coverage of hip replacements failing and heart devices misfiring has forced discussions about inadequacies in their system into the US Congress.
But this has not happened to the same extent in Europe. One senior US official asked me why the European media has not scrutinised device regulation in the way that the American press had. In the States, Europe has been held up as an example of how bad things can actually get — with patients on this side of the Atlantic having been described as “guinea pigs”.
A joint BMJ/Channel 4 Dispatches investigation in May this year didn’t do much to quell concerns. The EU system of approval by agreement between manufacturer and a commercial regulatory body operates under conditions of almost total commercial secrecy and is overseen in a hands-off manner by national regulatory authorities. Manufacturers submit data to a private body, which assesses it to see if the device is fit for market; the device is then allowed to display a CE mark. It is the same process that non-medical products such as mobile phones and toys go through.
As Nick Freemantle, professor of epidemiology at UCL, said: “The current European regulatory framework — CE marking — might provide sufficient safeguards for electric toasters and kettles, but it is not adequate for treatments that can affect symptoms, health related quality of life, serious morbidity and mortality.”
Representatives of device manufacturers say that the European light touch regulation approach is fine — that there is no evidence it is any worse than America’s. But, as the medical adage goes, absence of evidence is not evidence of absence.
There is no way of knowing what percentage of serious medical devices are faulty, poorly designed or have had to be recalled, because the European authorities have no centrally maintained register listing the devices on the market. In short, they do not know exactly what patients have had put into them in the first place.
Nor do they know on what evidence market entry was based. No European governmental regulator has it — scientific data sits with the manufacturers and the private companies that “approve” the device. As the head of device regulation in the US, Dr Jeffrey Shuren, said: “For the public in the EU, there is no transparency. The approval [requirements] are just what deal is cut between the device company and the private [organisation].”
Even data about devices that have been pulled from the market is virtually impossible to come by. When the BMJ — together with two doctors from Oxford University — contacted 192 manufacturers of withdrawn medical devices requesting evidence of the clinical data used to approve their devices, they denied us access, claiming that “clinical data is proprietary information”, that it was “company confidential information” and that they could discuss only “publicly available information” — of which there is very little.
Likewise, when we asked the relevant commercial regulatory bodies for the scientific rationale for approval of various devices that had been recalled, the results were stark. This information was classed as confidential because they were working as a client on behalf of the manufacturers — not the people who have them implanted in their bodies.
Even the Freedom of Information Act is of little help in obtaining information on any adverse events. BMJ/Channel 4 Dispatches attempts to use the act to obtain adverse incident reports for specific implantables from the UK national regulator were thwarted because the act is overridden by medical device legislation. Article 15 of the EU Medical Devices Directive states: “Member States shall ensure that all the parties involved in the application of this Directive are bound to observe confidentiality with regard to all information obtained in carrying out their tasks.”
Even the Association of British Healthcare Industries, a trade organisation of device manufacturers, agrees that the lack of transparency leads to misunderstanding and mistrust. “Today it is very hard for anyone, even manufacturers and authorities, let alone citizens, to find out what products are approved to be on the market. We would like to see enhanced transparency and information to patients, citizens and all EU government authorities.”
So what does this mean? It means that doctors and patients are left to trust the companies to provide them with information about the benefits and harms of using their products. But with little scrutiny, oversight and transparency, there are no guarantees of this being a fair reflection of what their data — where they have it — actually says.
But there is a movement for change. As Krumholz says: “I think one day people will look back and say now wait a minute. Half of the data were beyond public view and yet people were making decisions every day about these products? How did you let that happen? And I’m not sure how we let it happen.
“But I hope we’ll enter an era where that will be over, and in fact there will be a great sharing of data, that we’ll be able to have a public dialogue that’s truly informed by the totality of evidence, and that we’ll be able to make choices that are based on all of that evidence, knowing that there are no perfect drugs. That’s always going to be a trade-off. But we ought to be informed by all the evidence when we’re making these decisions.”
A fresh round of climate science emails was hacked and released to the public last week. With the debate over secrecy in science back in the headlines, science writer Fred Pearce makes the argument for open access
Steve McIntyre is a pernickety Canadian. A retired mining geologist, trained mathematician and amateur climatologist, he has for the past eight years locked horns with the Climatic Research Unit (CRU) at the University of East Anglia, trying to gain access to their data on the history of global temperatures.
He is not (repeat: not) paid by, beholden to or in regular contact with fossil fuel companies or lobby groups trying to undermine climate change science. He is not even a climate sceptic. For years, McIntyre has been asking for CRU’s “crown jewels”, raw data assembled from weather stations round the world that it says proves how much the world has warmed in the past 160 years.
He does not believe this conclusion is a big lie. But he does want to see for himself. And in particular to look at how the data had been “manipulated” — a perfectly honourable process in which, for instance, some weather stations are made to count for more than others because they represent large areas with few weather stations, while others are discounted because their rural locations have been invaded by growing cities.
It’s not what everybody wants to do on a Saturday night, but surely he is exactly the kind of citizen investigator the 2000 Freedom of Information Act was intended to help.
Of course, his persistence has not made him a friend of CRU’s director Phil Jones. The crown jewels are his life’s work. For years Jones, with the backing of his university’s Freedom of Information (FOI) officers, held out against releasing the data to McIntyre. Jones said it was commercially valuable. He said it was his intellectual property. He said revealing it would damage international relations. CRU has been congenitally hostile to FOI requests from McIntyre and others. At the end of 2009, 105 FOI requests had been submitted to UEA for CRU data, of which only ten had been acceded to in full.
The battle between the two men for the crown jewels was the backdrop — and very possibly the motive — for the still-mysterious hacking of CRU’s emails and their publication online at the end of 2009. Much of the world’s science community sided with Jones in the resulting “climategate” saga, condemning what they regarded as politically and commercially motivated attacks on their research.
But others took McIntyre’s side, seeing him as a data libertarian. And last June, following a new request for the data from Jonathan Jones, an Oxford physicist and “climate agnostic”, the Information Commissioner, Christopher Graham, finally ruled that the crown jewels should be handed over. And they were, a month later. The sky did not fall in.
If CRU had been more open with its data from the start, a great deal of time and angst on the part of its scientists — and a great deal of public uttering of paranoid nonsense from climate deniers — would have been avoided. And if, in the months before the hack, Jones and his colleagues had not spent ever more time bitching about McIntyre and scheming to keep their data and working methods secret, then the emails would have contained little of outside interest.
Graham’s decision unlocks some four million temperature readings taken at 4,000 weather stations over the past 160 years. But as the physicist Jonathan Jones put it, “the most significant features of this decision are the precedents that have been set”. It could open the door to thousands of other British researchers being required to share their data with the public. Good.
Under the 2000 Freedom of Information Act, universities, like other public institutions, must share their data unless there are good reasons not to. It is now clear that the good reasons have to be just that — not excuses. Graham, who is the final arbiter of FOI requests, was scathing in his ruling that CRU’s claims that sharing the data would harm international relations were “highly speculative”. And on commercial considerations, he noted acidly: “it is not clear how UEA might have planned to commercially exploit the information”.
But should all publicly funded data be free, and all publicly funded researchers required to hand it over? In an age when data distribution is so easy, it is hard to make the case that sharing data is simply too onerous. After the military, scientific researchers were the first people to use the internet, precisely so they could share large data sets among themselves. So why not let us all join in? But what about emails, research notes and the data from failed experiments? Some believe requests for such material would both damage research and overwhelm researchers. And some think your access should depend on who you are.
At the same time as the climategate FOI requests were building up at CRU, the giant tobacco company Philip Morris began — initially anonymously — asking for data from Scottish researchers who had interviewed thousands of teenage smokers on what they thought about tobacco marketing. Not only was this expensive research — paid for by a cancer charity — it was also, as the head of the Stirling University research unit Gerard Hastings put it, “the sort of research that would get a tobacco company into trouble if it did it itself”.
The researchers have held out — and went public with their disgust in the Independent in September. But here’s the bottom line. FOI legislation is “applicant blind”, as Maurice Frankel of the UK Campaign for Freedom of Information puts it. It does not matter if the thoughts of smoking teenagers are of interest to Philip Morris or the National Heart Foundation or someone who wants to stop their child from starting to smoke. They are, and should be, all the same. Otherwise Friends of the Earth would never get pollution data.
In this case, researchers may be able to argue that disclosing the information could jeopardise future planned research, for instance by drying up funds from cancer charities. But Graham’s tough line with CRU suggests that argument is not guaranteed to succeed.
One reason scientists have such a problem with FOI is that virtually none of them realised that it would apply to them. Certainly, the science community failed to consider the consequences or lobby for the drafting of laws that might make sense for them. Only now is the Royal Society trying to catch up by forming a working group to discuss openness in science.
It is also true that there is little consistency among scientists about what the rules on data sharing and confidentiality should be. Some peer-reviewed journals have tough rules requiring access to the data underpinning research papers, but others do not, and the same inconsistency extends to academic institutions. But there is a growing move towards openness that should surely be welcomed. Cameron Neylon, a biophysicist at the Rutherford Appleton Laboratory in Oxfordshire, writing in New Scientist in September, said the aim should be for “anyone, anywhere to contribute to science”. You can hear the shudders in the labs across the land. But to those who fear an avalanche of ill-informed nonsense arising from data sharing, he said: “If you care about the place of science in society or are worried about the quality of information on the web, then openness offers massive potential to engage people more deeply, educate them about how science works and increase the store of quality information on the web.”
In the months after climategate there was much discussion in the science community about the need for greater openness. But outside the open access movement, it has faltered. The message in the labs is that the inquiries into the affair absolved the scientists of any wrongdoing.
That is not quite true. The inquiries decided, rightly, that there was no grand conspiracy, although they felt they were not in a position to judge the finer points about the conduct of the science. The main inquiry, under Sir Muir Russell, seemed particularly confused about FOI. It noted damagingly that CRU had shown a “consistent pattern of failing to display the proper degree of openness”. But on the detail it showed a sometimes breathtaking lack of attention. It concluded that “there was no attempt to delete information with respect to an [FOI] request already made”, when the emails published online revealed quite clearly that one round-robin requesting the deletion of an email correspondence was sent two days after an FOI request for precisely that information.
Much of science has “closed ranks” behind the idea that those demanding access to their data are troublemakers. Nobel laureate and Royal Society president Sir Paul Nurse says “some researchers … are getting lots of requests for, among other things, all drafts of scientific papers prior to their publication in journals, with annotations, explaining why changes were made between successive versions. If it is true, it will consume a huge amount of time. And it’s intimidating.” Maybe, but the current law allows vexatious requests to be rejected. So that is a straw man.
In any event, the whole point of research is that it should be open to maximum scrutiny. And the scientific priesthood can no longer claim that scrutiny should only be among their specialist fellows.
And there is sometimes a fine line between the crackpot and the sublime. Earth science guru Jim Lovelock — a doyen to many modern climate researchers — left institutional academia in frustration at the ostracism his ideas met. The Independent began its report on the 2011 winner of the Nobel prize for chemistry, Daniel Shechtman, thus: “An Israeli scientist who was once asked to resign his research post because his discovery of a new class of solid material was too unbelievable has won this year’s Nobel Prize in Chemistry – for that same discovery.”
The charge of sloppiness in the way science often portrays its findings to the wider public is also a warning against allowing too much self-policing. In June, the Intergovernmental Panel on Climate Change issued what it said was a summary of the findings of a detailed study of renewable energy. It headlined the claim that 77 per cent of the world’s energy needs could be met from green power by 2050. In fact, the “77 per cent” figure was the most optimistic of hundreds of academic studies reviewed in the report itself. Moreover, that particular study was conducted by one of the report’s own lead authors, who was also a Greenpeace campaigner. Curious. But most damagingly of all, this highly relevant information only emerged a month after the press release and subsequent media coverage, when the IPCC got round to publishing the report itself. This was shameful spin.
The fuss over climategate showed that the world is increasingly unwilling to accept the message that “we are scientists; trust us”. Other people want to join the scientific conversation. Good scientists, interested in finding truth, should want to encourage them, not put up the shutters. The wider world instinctively knows to distrust those in all walks of life who reject openness. As McIntyre put it recently, “probably no single issue damages the reputation of the climate science community more than the refusal to show the data that supports their work”. There should, for the good of science as well as public discourse, be a presumption in favour of open access.
McIntyre, meanwhile, is still hunting. He believes CRU researchers using tree rings to unpick temperatures in past eras may have been cherry-picking their Siberian logs to help sustain the argument that recent decades are warmer than anything in the past 2,000 years. He cannot be sure, because they are still refusing to hand over their full data sets. CRU’s justifications have a familiar ring. Disclosure could do “financial harm” to the university by reducing its “ability to attract research funding”. Really?
If McIntyre eventually gets the data, could it undermine the case that man is warming the world? Certainly not; the evidence for that stands independent of past natural variability. Could it change our ideas about past natural climate change? Conceivably, yes. Is it a scandal that McIntyre cannot get to see the data to review CRU’s work and do his own science? I believe it is.
Fred Pearce is author of The Climate Files: The Battle for the Truth about Global Warming (Guardian Books)