Tricky data

‘Sputnik V’ developers respond to Western critics, but the debate might overlook the vaccine’s biggest problem
The developers of “Sputnik V,” Russia’s first vaccine against the coronavirus, have finally responded to a note of concern signed by dozens of scientists who highlighted statistical anomalies in the phase I/II data researchers at the Gamaleya Center published earlier this month. The vaccine’s developers shared their answers in the same authoritative peer-reviewed medical journal where they published their trial data: The Lancet. In the new text, “Safety and Efficacy of the Russian COVID-19 Vaccine: More Information Needed,” Denis Logunov (the deputy research director who leads the group responsible for Sputnik V) and his co-authors explain that the “repeated patterns” in the data flagged by Western colleagues are either the result of coincidences in a very small number of volunteers or in fact not repetitions at all. Meduza compared both sides’ arguments and asked an independent expert to comment on the dispute.
All this, in a nutshell: The Russian researchers have been accused of possibly falsifying data in the article about their coronavirus vaccine, where study participants’ results for important metrics were suspiciously similar. The developers say these results are entirely plausible because the methods used to conduct the measurements yield only rough values, not exact numbers. This problem would not have arisen, they say, if the sample size had been larger. The researchers do not explain, however, why they limited their trials to so few people.
The road to Sputnik V
Any vaccine is tested in two stages: preclinical trials in animals and clinical trials in humans. The Gamaleya Center has yet to share the results of its animal tests; the article published in The Lancet contains data exclusively from human trials.
Clinical trials are typically divided into three separate phases: the drug’s safety is first assessed in healthy volunteers (phase I); its ability to provoke an immune response is then tested against a placebo or the current standard of care (phase II); and finally a long-term efficacy study in key populations tests the vaccine’s protective capacity in the real world, usually by comparing the share of infections in the vaccinated group against the share of cases in a corresponding group that received a placebo (phase III).
The number of volunteers participating in each phase of testing gradually rises from dozens in phase I, to hundreds in phase II, and finally thousands of people in phase III. When determining the exact number of volunteers in any study, researchers base their statistical calculations on “effect-size estimations.” When dealing with clear, unambiguous effects (for example, when testing DNA therapies for genetic diseases), a small group of patients may be enough to demonstrate a drug’s effectiveness. When working with vaccines and trying to reduce the infectiousness or severity of a disease among vaccinated people, however, the number of volunteers in clinical trials should be significant.
Sputnik V has so far completed only its initial phase of testing, which its developers call a combined phase I/II. Researchers have not yet subjected the vaccine to randomized, double-blind, placebo-controlled trials, however, which means Sputnik V’s testing is still at what is traditionally understood as phase I. The drug’s phase III trials kicked off just a few weeks ago and won’t wrap up until late 2020.
Seventy-six volunteers participated in the research published so far about Sputnik V. These patients were divided into two groups that received two formulations of the vaccine: a frozen one and a “lyophilized” (freeze-dried) one (technically, these are two different drugs, so they need to be tested separately). Each group of 38 people was then subdivided into three smaller groups: nine patients received a vaccine component based on human adenovirus type 26, another nine received a component based on human adenovirus type 5, and the remaining 20 volunteers received both injections, three weeks apart.
After the first injections on June 18, scientists monitored patients for side effects and measured several parameters of the immune response, focusing primarily on the levels of antibodies against the SARS-CoV-2 S protein in patients’ blood and the percentage of activated lymphocytes (a kind of white blood cell and one of the body’s main types of immune cells) of two different types. One hundred percent of volunteers developed antibodies to the coronavirus, and the level of neutralizing antibodies (those that reliably prevent the virus from entering cells) was comparable to that found in a group of patients in Moscow who had previously recovered from the disease.
In their article published in The Lancet, Sputnik V’s researchers presented these measurements in graphs individually for each volunteer. It was the decision to use figures — rather than raw numerical data — that concerned a group of scientists and doctors in the West, led by Temple University Biochemistry Professor Enrico Bucci.
So what’s wrong with the Sputnik V research?
Enrico Bucci and dozens of other specialists say the data inferred from the figures in the Gamaleya Center’s article show signs of manipulation, deliberate or not. The scientists highlight the unlikely “coincidence of data points” showing antibody levels and shares of activated lymphocytes on particular days of the trials among volunteers from different groups. “[O]n the ground of simple probabilistic evaluations,” Bucci and his colleagues wrote, “the fact of observing so many data points preserved among different experiments is highly unlikely.”
To address these concerns, Bucci and others urged Sputnik V’s developers to release their original numerical data for all experiments and original FACS (fluorescence-activated cell sorting) files in fcs (Flow Cytometry Standard) format to make it possible for other researchers to reproduce their analyses and findings directly.
In his note of concern, Bucci also requested more information about the Muscovites in the study who had previously recovered from COVID-19 (the “convalescent control patients”). Specifically, he asked (1) how they were matched to the different groups of volunteers enrolled in Sputnik V’s trials, and (2) how long after these patients first showed symptoms and last tested negative for the disease their plasma was collected.
How did Sputnik V’s developers respond?
In short, they say all the data shared in their article are accurate and reproduced in their original form, without any additional processing. Denis Logunov and his team at the Gamaleya Center attribute the coincidences flagged by Enrico Bucci and his colleagues to possible natural causes, pure coincidence, or inaccurate analysis by Bucci and others.
These “coincidences” break down into three different groups:
- Identical antibody levels in the same volunteers on different days after being vaccinated,
- Identical antibody levels in different volunteers in separate groups, and
- Identical activated-lymphocyte levels in different volunteers.
All these coincidences manifested within or between the subgroups of nine people; there were no duplicate results within the group of 20 people who received both formulations of Sputnik V. The Gamaleya Center’s researchers say the discreteness of the measurements used to record antibody levels and activated-lymphocyte levels (in other words, the fact that the data could only take certain values) made these patterns “not improbable” in small samples like Sputnik V’s first clinical trials.
When antibody levels are measured using an “enzyme-linked immunosorbent assay” (ELISA), samples are serially diluted, which limits the accuracy of the findings to a “ladder” of values. Bucci and his colleagues acknowledge that this could partially explain the coincidences, but not the overlapping activated-lymphocyte levels, which are measured using specialized equipment. Sputnik V’s developers maintain, however, that they reported activated-lymphocyte levels to the full precision their equipment allows: 0.1 percent. In other words, a value “ladder” limits these data, as well.
After clarifying the limitations of the methods used in their study, the Russian researchers argue that the coincidences observed in their figures are due either to pure coincidence or to changes in patients’ antibody levels too small to be detected. For example, in Figure 2A, the blue-highlighted overlap between different groups occurs along just nine points located at five positions on the value “ladder.” Bucci and his colleagues say this kind of duplicate data is “highly unlikely,” whereas Denis Logunov and his team say it is “not surprising.”
Sputnik V’s developers admit that the small number of volunteers recruited for their study represents a significant limitation to their research, but they do not explain why they decided to restrict their so-called “combined phase I/II trials” to such a small sample of patients. (Scientists working on other coronavirus vaccines have involved far more people in phase II tests.)
Do these responses hold water?
Sputnik V’s researchers presented none of the additional information requested by critics and merely commented further on the data they already published. According to Denis Logunov and his colleagues, there simply is no more information — the researchers already shared all their raw data in their original article. One exception might be the technical data from the equipment used to measure subjects’ activated-lymphocyte levels, which Logunov and his team did not address in their response. In immunology, however, such technical data isn’t typically published, and it’s unknown if the researchers saved it at all.
Meduza tried to contact Enrico Bucci to find out how satisfied he and his colleagues are with the Gamaleya Center’s feedback, but he did not respond to messages (though he has been happy to speak to Russian journalists in the past). We also reached out to 13 experts who signed Bucci’s letter. Konstantin Andreev, a postdoctoral researcher at Northwestern University in Illinois and the only scholar who answered, told Meduza that Logunov’s team is still ignoring multiple concerns, particularly questions about the control patients’ demographics and other characteristics.
Asked to estimate the statistical likelihood of the coincidences Enrico Bucci calls “highly unlikely” (the same duplicate data Denis Logunov says are “not surprising”), Konstantin Andreev puts the odds at 1.4 percent, in a hypothetical scenario where antibody levels randomly take one of eight possible values. “The fact that such a coincidence doesn’t surprise the authors perhaps says only that they’re hard to surprise,” Andreev joked.
“We can’t say if these data were deliberately processed or not. We simply note that the researchers did not respond to our concerns about suspicious overlapping data points in their figures and did not adequately describe the features of their control group. The source data [from the equipment used to measure lymphocyte activity in the original fluorescence-activated cell sorting format] could help resolve most of these issues, but the authors did not provide this information,” explained Andreev.
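Andreev’s 1.4-percent figure comes from a hypothetical model whose details aren’t spelled out in his comments. But the intuition both sides are arguing over — that a discrete measurement “ladder” makes some repeats likely in a small sample, even while an extended pattern of matches remains rare — can be sketched with a classic birthday-problem calculation. This is a simplified illustration under assumed conditions, not Andreev’s actual computation; only the eight-possible-values assumption comes from his comments.

```python
from fractions import Fraction

def collision_probability(n_draws: int, n_values: int) -> Fraction:
    """Exact probability that at least two of n_draws independent,
    uniformly random measurements land on the same one of n_values
    discrete "ladder" steps (the classic birthday problem)."""
    p_all_distinct = Fraction(1)
    for i in range(n_draws):
        p_all_distinct *= Fraction(n_values - i, n_values)
    return 1 - p_all_distinct

# With only eight possible values, even five independent measurements
# collide more often than not:
print(float(collision_probability(5, 8)))  # ≈ 0.795

# And nine measurements on an eight-step ladder MUST contain a repeat
# (pigeonhole principle), so some duplicates prove nothing by themselves:
print(float(collision_probability(9, 8)))  # 1.0
```

The disagreement, in other words, isn’t over whether any duplicates can occur — on a coarse ladder they are near-certain — but over how improbable the particular extended pattern of matches across groups is, which is exactly what Bucci’s group wanted the raw data to settle.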
So whose argument is better?
In comments to Meduza, epidemiologist Anton Barchuk stressed that the duplicate data flagged in the Sputnik V research isn’t the vaccine’s biggest problem.
“All the talk about falsified data distracts from this study’s real problems. I think few people doubt that the vaccine is immunogenic, but such a small sample in the study means there’s a chance that they missed side effects. We also need to understand that the result of this work isn’t evidence of effectiveness but merely a pass to the final phase of vaccine research that still needs to be carried out. And here it will be difficult to operate with a small sample when other manufacturers are planning [to test with] tens of thousands of participants. From any point of view, publishing falsified phase I/II data would be a senseless exercise. If it had been necessary to show formally in the domestic market that the vaccine works, it would have been enough to publish in some unknown journal inside Russia. Publishing phase I/II data in The Lancet, as I explained already, doesn’t make the vaccine effective, and everyone around the world understands this.
“I think it’s worth supporting our colleagues for not going the first route and instead publishing in a major journal, despite the limitations [of their data]. You can’t even say that [Russian] clinical science has a bad reputation, because really it has no reputation at all. There are some biologists and bioinformaticians who publish here and there, but in clinical and population studies it’s been a failure, so I can only welcome one of our few publications.”
Translation by Kevin Rothrock