Wednesday, April 22, 2020

The numbers out of New York City are way better than I thought possible

A friend in New York, Paul Brill, sent me some very encouraging charts. These show deaths and hospital admissions related to COVID-19 in New York City, and the drop-off in daily counts since the peak is much steeper than I imagined possible. The people of New York are doing a great job of keeping up their social distancing (I guess), and even those who aren't doing so well at it don't seem to have significantly slowed the positive trend. If we could get the whole country to achieve this steep a decline (through social distancing, wearing masks when you are in public, and so forth), we could really shut down this virus for a while.

Deaths peaked on April 7th, then declined slowly until the 11th, but have been dropping fast since then. This is great news.


New admissions to the hospital ran high with no clear trend from March 30th to April 8th, but since then, new COVID-19 admissions have been declining at a fairly steady rate. The drop from April 17th to Monday was especially encouraging. I had no idea new admissions could fall off at such a rate.


The rate of positive identifications of persons with COVID-19 isn't as accurate a measure of reality as hospitalizations and deaths, but even so, it shows a similar trend, and the trend has been very good since April 14th. Look at what the people of New York accomplished by staying home and maintaining social distance. We can all do this.


Harvey J. Stein has some even better charts that give us a similar picture of the overall trend:




Tuesday, April 21, 2020

Blood test studies suggesting much higher rates of SARS-CoV-2 infection are flawed, but we need more blood antibody tests, so keep at it.

Three recent studies suggest many more people have been exposed to SARS-CoV-2 than mainstream medical science has estimated. 

A Stanford group sampled blood from 3,330 persons in Santa Clara County and found that 50 of them had antibodies to SARS-CoV-2. There are three problems with their research. The biggest, possibly fatal, flaw is that they recruited their sample through targeted Facebook advertisements, so persons suspecting or knowing that the study had something to do with COVID-19 may have enrolled in the hope of getting their blood tested to find out whether they had been exposed to the virus. On the other hand, the Stanford researchers under-sampled persons who don't use Facebook or who lacked the time or ability to drive to a testing site, which would probably lead to an under-count of the poor and of minorities. It's hard to say by how much, though: the higher rates of infection in poor and minority communities may have come later in the outbreak, with the initial infections being more common among people socially connected to recent travelers from China or Europe (wealthier persons, like the sample in the Stanford study). Who can say? To their credit, the Stanford team pointed out these sampling problems themselves, cautioning in the paper that “…other biases, such as bias favoring individuals in good health capable of attending our testing sites, or bias favoring those with prior COVID-like illnesses seeking antibody confirmation are also possible.”

A second problem is that their conclusions include a mathematical error in estimating the lower bound of the confidence interval for the actual rate in the population (even assuming their sample was representative and not biased by eager volunteers who had reason to suspect they had been infected). Combining the Stanford group's tests on known negatives with the data from the test manufacturer, they had a total of 2 false positives in 489 known negatives, so they should have scaled the uncertainty due to false positives by 1/sqrt(489) [about 0.045], but they mistakenly scaled the lower-bound error estimate by 1/sqrt(3,330) [about 0.017]. When you adjust correctly, their lower-bound confidence estimate for the prevalence in Santa Clara County should be about a third lower than what they reported.
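To make the scaling point concrete, here is a minimal sketch; the 2-in-489 pooled false-positive count and the 3,330 sample size are the figures discussed above, and treating the false-positive rate as a simple binomial proportion is my simplification:

```python
import math

n_controls = 489          # pooled known negatives (Stanford controls + manufacturer data)
n_sample = 3330           # study participants in Santa Clara County
fp_rate = 2 / n_controls  # observed false-positive rate, about 0.41%

# The uncertainty in the false-positive rate is governed by how many known
# negatives were tested, not by how many people were in the survey sample:
se_correct = math.sqrt(fp_rate * (1 - fp_rate) / n_controls)  # scales as 1/sqrt(489)
se_wrong = math.sqrt(fp_rate * (1 - fp_rate) / n_sample)      # scales as 1/sqrt(3330)

print(f"correct SE:  {se_correct:.4f}")  # ~0.0029
print(f"mistaken SE: {se_wrong:.4f}")    # ~0.0011, roughly 2.6x too small
```

The mistaken version understates the uncertainty by a factor of sqrt(3330/489), which is how the reported lower bound ends up too high.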

A third problem is with the test they used and whether it produces too many false positives. The test they used has a reported false-positive rate of 1 in 245 (for every 245 persons who don't have the antibodies, you'll get one false “hit” saying they do). But another study examining a different test from the same manufacturer found a much lower specificity, and even stopped testing one of that manufacturer's kits after getting two false positives in 15 known negatives. It was a different test, but from the same manufacturer, so one wonders how good the quality control is and whether all the batches of tests from that company are uniformly at the 1-in-245 rate of false positives.

Another study, led by Hendrik Streeck, checked for antibodies in a German town of about 12,500 inhabitants. That study found antibodies in 70 of 500 persons tested (14%). There are a few problems with this study. First, the test Streeck's team used was evaluated by a Danish research group headed by Ria Lassauniere, which found that the test has a lower specificity than Streeck's group claimed; according to the data from Lassauniere's group, the actual count in Streeck's sample could have been 58 true positives instead of 70. Streeck's and Lassauniere's groups have conflicting results on the specificity (false positive) question, but even so, there are still a lot of people in Heinsberg with the virus.
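Here is a minimal sketch of the kind of false-positive adjustment involved; the 70-of-500 count is from the study as described above, and the specificity I plug in is simply the value that reproduces the 58-positive figure, so treat it as illustrative:

```python
def adjusted_positives(observed_pos, n_tested, specificity):
    """Subtract the expected false positives implied by an imperfect specificity."""
    expected_false_pos = n_tested * (1 - specificity)
    return observed_pos - expected_false_pos

# Streeck's raw result: 70 positives in 500 tests (14%).
# If the test's true specificity were ~97.6% rather than ~100%,
# we'd expect about 12 false positives among the 500 tested:
print(adjusted_positives(70, 500, 0.976))  # ~58 true positives (~11.6%)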

Another problem is that Streeck's team sampled whole households, and since the virus spreads readily within a household (if any one member gets it, they are likely to pass it to the others), household members are at greater risk than randomly chosen individuals. Sampling by household therefore tends to overstate the infection rate you would find in individuals sampled at random.

Finally, the region sampled by Streeck's team had held a big event (like the New Orleans Mardi Gras) that brought much of the community together while the virus was spreading, so that region probably has a much higher rate of infection than Germany as a whole.

Another study was conducted by Massachusetts General Hospital pathologists in Chelsea. They specifically chose Chelsea because the hospital was seeing so many cases from that area. They approached people on the street and asked for a blood sample, but did not give people their results, so they eliminated the bias the Stanford team probably had from attracting people who wanted to be tested. But they were approaching people out on the street at a time when people who want to avoid the virus, and have the ability to do so, are not out on the street much, so the fact that their sample came from people out on the street may bias their results. And they picked their sampling location because they expected to find very high rates of infection in Chelsea, so you can't really use that sample to estimate what is going on in Massachusetts or the United States in general. The Chelsea sample was small (200 persons), and the 63 positive results could be too high because the test they used may have a relatively low specificity (the manufacturer says 90%, which would suggest a lot of false positives, but Massachusetts General says their own testing showed a specificity of 99.5%, so very few false positives); a quick sketch of that gap follows below. The main point is, people walking along the street when we are supposed to be sheltering at home aren't a random sample, and Chelsea probably is a place with a high infection rate, so the sample isn't much good for estimating the infection rate of the United States.
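The gap between those two specificity claims matters a lot for a sample this small. A minimal sketch, using only the numbers from the paragraph above:

```python
n_tested, observed_pos = 200, 63  # Chelsea sample, per the paragraph above

for label, specificity in [("manufacturer", 0.90), ("Mass General", 0.995)]:
    expected_false_pos = n_tested * (1 - specificity)
    print(f"{label}: ~{expected_false_pos:.0f} expected false positives "
          f"among the {observed_pos} observed")
# manufacturer's 90%: ~20 of the 63 could be false; Mass General's 99.5%: ~1
```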

My earliest estimates about COVID-19 in mid-March took an infection fatality rate of 0.8% as plausible, and I did my modeling using that estimate. Death rates based on antibodies found in blood are still not well understood, because these three studies are too early and preliminary, and too flawed in their sampling and testing specificity, to give us an accurate understanding of how many people have been infected (and therefore what the death rate is). Many estimates now suggest infections lead to death in 1% to 2% of cases. These studies suggest (even despite their flaws) that the death rate may be closer to 1% than 2%, and given the high number of persons who are not showing symptoms, that makes sense to me.

There are two facts that really contradict these studies in terms of how lethal SARS-CoV-2 infection is. First of all, we have some cases where almost entire populations have been tested (like cruise ships, or hospital staff at hospitals in Wuhan), so we know roughly how many were infected, and from those cases we can see death rates in that 1% to 2% range, not down around 0.2% or even 0.5%. Secondly, COVID-19 has been around in Europe and North America for about ten weeks and has already killed far more people than the seasonal flu ever does in such a short time. Over the course of a year, the seasonal flu virus probably infects between 10% and 35% of Americans, and kills tens of thousands of us. Even these very flawed studies suggest that SARS-CoV-2 has infected fewer persons than the seasonal flu does, yet we are seeing death rates far, far higher than the flu's. For example, the seasonal flu never kills 1,500 to 2,000 persons per day in the USA, and COVID-19 has been doing that for the past couple of weeks despite the fact that most Americans have been socially isolating for over a month. These facts alone tell us that estimates of death rates around 0.1% (similar to the flu) are not plausible. We can also look at societies with mass testing (like Korea, Taiwan, etc.) and see that death rates are well above the common flu level of 0.1%. And COVID-19 seems to do a lot more damage in a lot more ways than the flu. People with mild cases of COVID-19 may be suffering permanent lung damage, liver damage, kidney damage, neurological problems, and other things we don't typically see with the flu.
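As a back-of-envelope check: the 1,500-2,000-per-day figure is from above, and the flu range below is just a ballpark for the “tens of thousands” per year mentioned in the text:

```python
covid_deaths_per_day = (1500, 2000)     # recent US daily COVID-19 deaths, per above
flu_deaths_per_year = (20_000, 60_000)  # ballpark for a typical US flu season

# If the current daily rate were somehow sustained for a full year:
annualized = tuple(d * 365 for d in covid_deaths_per_day)
print(annualized)  # (547500, 730000) -- roughly 10-35x a typical flu season
```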


There are other issues with the Stanford study. Eran Bendavid, Andrew Bogan, and Jay Bhattacharya are among the co-authors of that study, and I think they have somewhat damaged or even ruined their reputations among scientists (or anyone who understands issues of sensitivity and specificity in testing, issues of getting representative samples, or the ethics of revealing conflicts of interest). In other words, scientists who look at these papers and what has been going on around them are going to be dismissive of the people involved, who sometimes seem motivated more by an ideological agenda than by a desire for an accurate understanding of what is going on.

Back in March (on the 24th) I recall reading an idiotic editorial in the Wall Street Journal (which is to be expected; editorials in the WSJ typically show little evidence of sound thinking or awareness of facts). In that editorial, Eran Bendavid and Jay Bhattacharya (associated with Stanford, and members of the Stanford group that did the research in Santa Clara County) suggested that SARS-CoV-2 wasn't so deadly, because a study of 450 NBA athletes on March 19th showed that 10 of them had already been exposed to the virus. They went on to make the ridiculous suggestion that NBA players were roughly like the general American population. That was ridiculous because NBA players breathe heavily, sweat, stand close to each other, and bump into each other as part of their job, so their infection rates ought to be much higher than the general population's. Their editorial ignored this fact, which was essentially scientific malpractice (perhaps the editors at the WSJ trimmed their editorial and cut out the caveats).

Jay Bhattacharya said in an interview that “I think we have a very, very strong responsibility to be utterly honest as we can be about what we know and we don't know.” That is good, but what about the language in their draft paper and the writing about it? Is it honest about the degree of confidence we can have in their work? To their credit, the Stanford team admits in the pre-review paper that “If new estimates indicate test specificity to be less than 97.9%, our SARS-CoV-2 prevalence estimate would change from 2.8% to less than 1%, and the lower uncertainty bound of our estimate would include zero.” That's good science. Yet the article in Nature (!) by Smriti Mallapaty begins with this:

“Widespread antibody testing in a Californian county has revealed a much higher prevalence of coronavirus infection than official figures suggested. The findings also indicate that the virus is less deadly than current estimates of global case and death counts suggest.”

However, the Stanford group's tests did not reveal a much higher prevalence; they revealed a possibility (one very much diminished in likelihood if their blood test was not especially accurate or their sampling method was biased) that prevalence is higher than estimated. The study also suggests that the virus may be less deadly than global case and death counts suggest, but that is already a widespread assumption, since testing and case counts are heavily biased toward the most severe cases. That is, we are seeing about 7% of known cases (people entering hospitals and clinical care) ending in death, but no one really thinks the virus kills 7% of the people it infects. Everyone understands that most people who get COVID-19 never show up for clinical care, and some portion of persons infected with the virus (estimates still vary between 20% and 80%) remain asymptomatic for a long time (or possibly for the full course of their infection). These three flawed studies suggest that the actual percentage of infected persons who are pre-symptomatic or asymptomatic may be closer to the 80% estimate than the 20% estimate. Even so, our estimates of actual infection rates in the population still depend heavily on counts of sick persons, dying or dead persons, and positive tests from nasal swabs and so forth; we're not testing everyone, and we're not even counting all the people who die from COVID-19, so we still don't have a firm grasp on what is going on.
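A minimal sketch of the case-fatality versus infection-fatality arithmetic at work here; the 7% observed case fatality is from above, and the detection fractions are illustrative assumptions:

```python
observed_cfr = 0.07  # deaths / *confirmed* cases, roughly what clinical data show

# If only a fraction of all infections ever get confirmed by testing, the
# infection fatality rate (deaths / all infections) is proportionally lower:
for detected in (0.10, 0.20, 0.50):  # hypothetical shares of infections detected
    ifr = observed_cfr * detected
    print(f"{detected:.0%} of infections detected -> implied IFR ~{ifr:.1%}")
# 10% detected -> ~0.7%; 20% -> ~1.4%; 50% -> ~3.5%
```

So an IFR in the 1% to 2% range is consistent with a 7% case fatality rate whenever somewhere around a sixth to a third of infections are being detected.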

How sensitive and specific was the test used by the Stanford group? The authors say they tried it first on 30 negative controls, and then on 88 more, and it never produced a false positive with those controls. That's good: 100% specificity in 118 known negatives. The manufacturer of the test reported specificities of 99.5% and 99.2%, also good. However, the Stanford study's test kit was very similar to ones evaluated in another pre-review paper (not yet peer reviewed) by Ria Lassauniere and colleagues, which found specificities of 91%, 89%, 89%, and 74% on tests of 89 controls (persons who had never had COVID-19). One kit, the 2019-nCoV IgG/IgM Rapid Test Cassette (Hangzhou Alltest Biotech, Hangzhou, China; Cat # INCP-402), gave 2 false positives out of 15 persons known never to have been infected. The Alltest Biotech assay was such a poor performer that Lassauniere's group stopped testing it. The test used by Bhattacharya, Bendavid, and Bogan was from Alltest Biotech, but was a different kit, and it didn't give a single false positive on 118 control samples, so Alltest Biotech must have come out with a better version of their test than the one examined by Lassauniere's team. Still, I wonder how many of the 50 positives Bendavid, Bhattacharya, and their team found in their sample of 3,330 were false positives. We really need to see these antibody sero-prevalence tests standardized against hundreds or thousands of controls to get a better understanding of their true specificity. The manufacturer of the kit used by the Stanford group reported 2 false positives in 371 true negatives, a specificity of about 99.5%. Let's just say the idea that the test used by the Stanford team had a specificity under 97.9 percent is plausible, but the testing on a total of 489 known negatives that turned up only 2 false positives gives some confidence that their test is probably not handing us a high number of false positives. My conclusion: the sampling issue undermines their findings to a far greater degree than false positives from their tests do, but those are also a concern.
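To put a rough number on that, here's a minimal sketch of an exact binomial (Clopper-Pearson) bound on the false-positive rate, using the pooled 2-in-489 count from above; the choice of interval method is my own:

```python
from scipy.stats import beta

fp, n_neg = 2, 489  # pooled false positives among known-negative controls

# Exact (Clopper-Pearson) 95% upper bound on the false-positive rate:
fp_rate_upper = beta.ppf(0.975, fp + 1, n_neg - fp)

print(f"observed FP rate:        {fp / n_neg:.2%}")        # ~0.41%
print(f"95% upper bound:         {fp_rate_upper:.2%}")     # ~1.5%
print(f"specificity lower bound: {1 - fp_rate_upper:.2%}") # ~98.5%, above 97.9%
```

On these numbers, a specificity below the critical 97.9% threshold sits just outside the 95% interval, which matches the "plausible but not likely" reading above.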

What about sampling? If you are going to extrapolate from a sample of 3,330 persons to a conclusion about the population from which they were drawn, you must try to get a random sample, or at least a representative one. Bhattacharya had already shown a weak grasp of representative sampling in his WSJ editorial, in which he suggested NBA players are a reasonable sample for speculating about the populations of cities hosting NBA teams.

Well, the sampling in the Stanford study relied on targeted Facebook advertisements, and as I already quoted, the authors themselves admit that people may have wanted to be tested, which may have contaminated or biased their sample. Sure, of course it did. In early April it was difficult to get tested. If I had been having symptoms and couldn't get a nasal swab test, and someone offered to test my blood to see if I had antibodies, I would have been eager to be in that sample, and I would have told all my friends with similar symptoms about the experiment and urged them to get into the sample as well. If anything like that happened in Santa Clara County, the Stanford group's study is not likely to give us accurate information.

Another problem I have is with one of the Stanford team's co-authors, Andrew Bogan. He wrote an article in the Wall Street Journal discussing the Stanford group's study and never mentioned that he is a co-author of that study and part of the Stanford group (!). That is not ethical, and it makes the whole Stanford team look bad.

If you want to see someone else take down this study, check out the video by Chris Martenson.

Monday, April 20, 2020

The USA may have peaked in deaths per day from COVID-19 (sometime around April 16-19)

The USA was supposed to reach a peak in mid-to-late April in terms of the COVID-19 pandemic caused by the SARS-CoV-2 virus. I have not been paying much attention to the "new cases" or "total cases" numbers, since those numbers in the USA reflect how many people are being tested and what criteria are being applied to decide who will be tested, and the results may not reflect the actual spread of the virus very well. I have instead been paying attention to deaths attributed to COVID-19. That said, even those data are highly suspect; many persons die without being tested, and it is rare to waste a precious test on a cadaver. When epidemiologists compare deaths this year to deaths a year ago, any increase probably shows the influence of COVID-19. The non-pharmaceutical interventions (almost everyone staying home) reduce communicable diseases such as the seasonal flu, as well as deaths related to people being out and about (drunken homicides related to fights at bars, traffic deaths, etc.), so keep in mind that deaths-per-day related to COVID-19 are still rough approximations of the problem. I've found four sources for estimates of deaths per day: the World-o-Meter COVID-19 update seems pretty good, Johns Hopkins University is another good source, and Our World in Data and CNN also have some good maps (though their color schemes don't offer enough contrast) and data sources.

Here is a chart from Johns Hopkins showing deaths per day in the USA:

I downloaded data from NBC, put them in a spreadsheet, and smoothed the death counts with five-day averages (every day's death count was replaced with the average of the previous two days, that day, and the following two days), and came up with this:
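For anyone who wants to reproduce that smoothing, here's a minimal sketch; the daily counts below are placeholders, so substitute the series you download:

```python
import pandas as pd

# Placeholder daily death counts; substitute the series downloaded from NBC.
deaths = pd.Series([1200, 1500, 1900, 1400, 2100, 1800, 1600])

# Centered five-day moving average: each day becomes the mean of the two
# days before it, the day itself, and the two days after it.
smoothed = deaths.rolling(window=5, center=True).mean()

print(smoothed)  # the first and last two days are NaN (not enough neighbors)
```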
The data sources are different, so the charts look a little different, but both show a decline just now (April 19th and 20th seem to have had fewer deaths than the preceding several days). This may be a sign that we have actually peaked in deaths-per-day, or it could be something like the dip in deaths-per-day that the NBC data show for April 10th-12th.

Deaths seem to come (on average) about 18 days after people get infected, but the range is huge (something like 15-45 days from infection). Because the length of time between infection and death isn't normally distributed (it is very skewed: almost no one dies within 10 days of infection, most die 12-24 days after infection, but a substantial number still die 25-50 days after infection), we should expect the decline in deaths per day to be much more gradual than the steep increase we saw in March and April, up to the 16th of April.
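Here's a minimal sketch of why a skewed infection-to-death delay stretches out the decline; the gamma delay distribution and all the numbers are illustrative assumptions, not fitted values:

```python
import numpy as np
from scipy.stats import gamma

# Hypothetical daily infections: exponential rise, then a faster exponential fall.
days = np.arange(120)
infections = np.where(days < 40,
                      np.exp(0.15 * days),
                      np.exp(0.15 * 40) * np.exp(-0.3 * (days - 40)))

# Illustrative right-skewed infection-to-death delay (mean ~18 days, long tail),
# with a ~1% infection fatality rate assumed purely for scale.
delay = gamma.pdf(np.arange(60), a=4.5, scale=4.0)
deaths = 0.01 * np.convolve(infections, delay)[:len(days)]

print(days[np.argmax(infections)], days[np.argmax(deaths)])
# Deaths peak a couple of weeks after infections, and their decline is
# stretched out by the long right tail of the delay distribution.
```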

The stay-at-home orders, the way people have been obeying them, and the habit of wearing masks in public have seemingly helped us as a country stop the growth in deaths per day. It is important to wear masks in public because a quarter to three-quarters of persons infected by the virus are not showing symptoms: the virus is slow to express itself in symptoms, and many persons (currently plausible estimates from experts and researchers vary from 20% to 80%) never show symptoms at all, yet asymptomatic and pre-symptomatic persons can still spread the virus. Wearing a mask reduces the water droplets that float in the air around you as you breathe or speak, and thus reduces the virus count you put out into the air around you (if you are infected). Inhaling a billion viruses is likely to cause a worse problem than inhaling a thousand viruses, so please wear a mask to show your solidarity with everyone and your recognition of the fact that it is easy to be a carrier of SARS-CoV-2 without knowing it.