AstraZeneca have published a press release stating that the primary analysis of the Phase III trial of the Oxford-AstraZeneca vaccine in the US has confirmed vaccine efficacy consistent with the pre-specified interim analysis announced on Monday 22 March 2021.
The press release from 25 March updates the information in the press release from 22 March, and confirms the findings of the previous one.1 2 It thus provides more good news about vaccines in general and the Oxford/AstraZeneca vaccine in particular, and is consistent with previous studies of the vaccine.
The press release describes a large phase III trial conducted in the US, Peru and Chile, with 32,449 participants of all ages and of different ethnicities, two-thirds of whom were given the active vaccine, and the other third a placebo vaccine (no doubt the full peer-reviewed papers will tell us what the placebo contained). It gives us information from an interim analysis of data from these trials.
The new press release tells us that there were 190 cases of symptomatic disease in participants, and that the vaccine showed the following characteristics:
76% vaccine efficacy against symptomatic COVID-19
100% efficacy against severe or critical disease and hospitalisation
85% efficacy against symptomatic COVID-19 in participants aged 65 years and over
I won’t repeat the comments I made about the earlier press release – they remain valid.3 The difference is tiny, and to be expected with the number of cases analysed. We have not seen the confidence intervals, but they will almost certainly overlap, with this difference not being statistically significant.
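To see why such a small shift in the headline figure is unremarkable, here is a minimal sketch of how a vaccine-efficacy point estimate and an approximate 95% confidence interval are derived from a case split. The case numbers below are hypothetical, chosen only to be roughly consistent with ~190 cases, 2:1 randomisation of 32,449 participants, and a ~76% headline figure; they are not the trial’s actual data.

```python
import math

# Hypothetical case split, NOT the trial's actual data: chosen only so the
# headline efficacy lands near 76% under 2:1 randomisation of 32,449 people.
cases_vax, n_vax = 62, 21633        # vaccinated arm (assumed split)
cases_plc, n_plc = 128, 10816       # placebo arm (assumed split)

rr = (cases_vax / n_vax) / (cases_plc / n_plc)   # relative risk
ve = 1 - rr                                      # vaccine efficacy

# Approximate 95% CI for the relative risk, computed on the log scale.
se = math.sqrt(1/cases_vax - 1/n_vax + 1/cases_plc - 1/n_plc)
lo = 1 - math.exp(math.log(rr) + 1.96 * se)      # lower efficacy bound
hi = 1 - math.exp(math.log(rr) - 1.96 * se)      # upper efficacy bound

print(f"VE = {ve:.1%}, 95% CI {lo:.1%} to {hi:.1%}")
```

With numbers of this order, the interval spans well over ten percentage points: wide enough to contain both the earlier and the revised headline values comfortably, which is exactly why the two press-release figures cannot be meaningfully distinguished.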
It is important, however, to comment on the way the news was announced, the way science is (or should be) done, and the NIH/NIAID/DSMB intervention.4
Science is to a large extent about reducing bias: ensuring that what you are measuring is due to the thing you are investigating, and not due to chance or, worse, to something about the way you collect data. For example, ‘blinding’ is used to ensure that the system for assessing whether participants become ‘cases’ (and their degree of severity) is not biased by knowledge of whether they were vaccinated. This may not matter for clear-cut cases; but where the decision is marginal it would be all too easy for an assessor to be swayed by a belief that a vaccinated participant was less likely to be a case, or vice versa; and this would bias the findings in favour of the vaccine.
Similar issues apply to the period in which you collect data. Let’s assume that the period of the trial was such that they had about four cases a day, on average. The play of chance means that on some days there would be six or seven cases, and on other days there would be none. And, similarly, on some days all the cases would be in the vaccinated group, on other days they would all be in the unvaccinated group.
If you can pick any specific period for your data collection you could easily decide to move the period to ensure that you minimise the number of cases in the vaccinated group – imagine for example, that on 20 Feb there was a small excess of cases in the vaccinated group. You might be tempted to run the data collection period to the 19th, instead, so that these cases, which make the vaccine look less efficacious, are excluded… Or, if you wanted to criticise the vaccine’s efficacy, you might decide to say that if the trial had been intended to finish on the 19th, it should have been continued for an extra day, to make the vaccine look less efficacious.
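A tiny simulation makes this concrete. Every number below (60 trial days, roughly four cases a day, a true relative risk of 0.24 under 2:1 randomisation) is an illustrative assumption, not trial data; the sketch only shows that the ‘apparent’ efficacy drifts with the choice of cut-off date purely by chance, even when nothing improper is going on.

```python
import random

random.seed(1)  # fixed seed so the illustration is reproducible

TRUE_RR = 0.24                               # assumed relative risk (~76% efficacy)
P_VAX = 2 * TRUE_RR / (2 * TRUE_RR + 1)      # chance a case falls in the 2:1 vaccine arm

# Simulate 60 trial days with about four symptomatic cases per day on average.
daily = []
for _ in range(60):
    n_cases = sum(random.random() < 0.5 for _ in range(8))     # mean 4, varies by chance
    n_vax = sum(random.random() < P_VAX for _ in range(n_cases))
    daily.append((n_vax, n_cases - n_vax))

def apparent_ve(days):
    """Simple efficacy estimate from the case split under 2:1 allocation."""
    v = sum(d[0] for d in days)
    p = sum(d[1] for d in days)
    return 1 - v / (2 * p)

# How the 'headline' efficacy would look if the data-collection window
# ended on any of the last ten days: the estimate moves with the cut-off.
estimates = [apparent_ve(daily[:cut]) for cut in range(51, 61)]
print(f"range over cut-off choices: {min(estimates):.1%} to {max(estimates):.1%}")
```

Picking whichever cut-off flatters (or damns) the vaccine would be cherry-picking; that is precisely why the window must be fixed before the data are seen.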
To prevent such bias, scientists are likely to state BEFORE THEY START COLLECTING THE DATA the dates over which the data will be collected; and this will be clearly stated in the papers they publish.
The problem with science-by-press-release is that we cannot see such details. We do not know if AstraZeneca, quite properly, set its data collection period in advance, and by chance the data for a few days afterwards showed a slightly greater number of cases in the vaccinated group, and this is what the DSMB was saying; or whether AstraZeneca might have censored the data collection period. (My suspicion is that the former is correct.)
Of course, if this were known, it would probably be mentioned in the discussion in a scientific paper; but press releases are generally kept simple and uncomplicated.
Either way, by implying that the data were incorrect, and by implication that the vaccine might be very much less efficacious than the press release said (whereas the revised data are unlikely to be significantly different), in my opinion the DSMB and NIH statements could harm confidence in this vaccine in particular and, more dangerously still, in vaccination in general.
Press releases are not a good form of scientific communication. It seems that the stock market requires them, perhaps understandably, to prevent insider trading when results affect share prices. This is not good, ever, and while in a pandemic there is an obvious need for rapid dissemination of results, it would be much better for public health to await regulatory assessment of the full results, properly set out in relation to a pre-specified protocol. No vaccine will be available until after regulatory approval, and knowing interim results does not in itself bring public benefit.
As noted on multiple occasions, values of vaccine efficacy do not have the precision ascribed to them when ‘headline’ numbers are used. Care must be taken in using them to compare vaccines which have not been compared in a randomised trial. The DSMB were aware the most recent data had less efficacy than the first press release said but, given the details of the relatively minor differences described in the latest press release, it seems an unnecessary action to have raised concerns in public. Results fluctuate as data accumulate, and that is why there is a pre-specified protocol to set out what will be done for a regulatory submission and for publication. What counts will be the FDA assessment, and that will be done based on scrutiny of the full data and not press releases. With completion of the adjudication of events it is possible, even likely, given the public statement from the DSMB, that the ‘headline’ value could move slightly further downwards, but such variation is not unexpected.
There seems to be a breakdown in relations between the DSMB and the company which is probably due to a variety of factors and is sad. This vaccine is so important for global health and the disputes do not promote global health.
DSMBs have primary responsibilities towards the patients in the trial they are monitoring and also, in my view, towards the general public. There may be hidden reasons for their actions, but so far their announcement will have done serious damage to public confidence in this, and possibly all, vaccines globally, and it is unclear what benefit to anyone it will have had. It would be better if this dispute could be laid to rest and the US and other regulators left to make their objective assessments of complete data without the glare of publicity. It does not make regulators’ tasks easier.