
Journal Club: Testing Testing Testing! We need more Testing!




J. Falatko, D.O.

“Variation in False-Negative Rate of Reverse Transcriptase Polymerase Chain Reaction-Based SARS-CoV-2 Tests by Time Since Exposure” (Kucirka et al., Annals of Internal Medicine, 2020).


“We need more testing!” is often held up as one of the pillars of controlling this pandemic. From the very beginning, the US has been criticized for poor execution of its testing strategy. If we just had abundant, fast, and accurate tests, this would all be over. I think it’s a bit more difficult than that.

If you have spent much time with me during the course of the pandemic, you would probably get the impression that I think testing is a bad idea. For the most part I have been against broad testing campaigns, testing asymptomatic individuals, and the idea that anyone who wants a test should be able to get one. I’m for smart testing strategies that optimize resources and give the test an opportunity to provide reliable results. Some evidence is starting to show up in favor of this approach, although you probably won’t hear about it on the news.

Test results are not simply positive or negative. A test, its result, and the patient form a complex decision tree. To many people, including some physicians (and “experts”), there is little thought beyond the reported result. However, a test actually carries four orders of information required for interpretation. First order: the result itself. Second order: the accuracy of the result, known as sensitivity and specificity. Third order: how much the result shifts the odds that the patient has the disease, known as the likelihood ratio. Fourth order: predictive value, or the difference between the pre-test and post-test probabilities.
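To make these four orders concrete, here is a minimal sketch in Python. The test characteristics and pre-test probability are illustrative assumptions, not values from the paper; the sketch only shows how sensitivity and specificity (second order) produce a likelihood ratio (third order), which converts a pre-test probability into a post-test probability (fourth order).

```python
# Minimal sketch of the "four orders" of test interpretation.
# All numbers are illustrative assumptions, not values from the paper.

def negative_likelihood_ratio(sensitivity: float, specificity: float) -> float:
    """LR-: how much a negative result shrinks the odds of disease."""
    return (1 - sensitivity) / specificity

def post_test_probability(pre_test: float, lr: float) -> float:
    """Apply a likelihood ratio to a pre-test probability via odds."""
    pre_odds = pre_test / (1 - pre_test)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

# Hypothetical test: 80% sensitive, 98% specific; 11% pre-test probability.
lr_neg = negative_likelihood_ratio(0.80, 0.98)   # second order -> third order
post = post_test_probability(0.11, lr_neg)       # third order -> fourth order
print(f"LR- = {lr_neg:.2f}; probability after a negative test = {post:.1%}")
```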

This could be one of my favorite COVID articles because it addresses the highest orders of test interpretation. The authors were unable to provide sensitivity and specificity data, and therefore likelihood ratio data, but they did discuss predictive value and pre- and post-test probabilities. This article focused on the predictive value of a negative test result. Stick with me here; I hope to simplify it if this is your first exposure to these terms.

Just so everyone is on the same page: in the case of a false negative, the patient actually has the disease even though they tested negative for it.

In the study, the authors combined data from 8 other studies in which exposed and symptomatic patients underwent serial testing. In 6 of the studies, positive cases were those that tested positive; the other two studies included presumed positive cases based on symptoms and imaging. The “presumed positive” patients had antibody testing after clearing the infection, and most were positive for either IgM or IgG antibodies.

The authors performed 3 separate sensitivity analyses on their data. Remember, a sensitivity analysis is used to assess whether an unknown variable significantly affected the results. They assumed the most likely day of symptom onset was day 5 after exposure, and that symptom onset corresponds with enough viral shedding to detect the disease. In case this assumption was inaccurate, they repeated their calculation with day 3 and again with day 7. The second sensitivity analysis assessed for a lower specificity of the test than reported. Specificity is the true-negative rate; at 100% specificity there are no false positives. If some of the “positive” cases were actually false positives, their subsequent negative tests would be miscounted as false negatives, inflating the results. A sensitivity analysis was conducted at 90% specificity for this reason. The final sensitivity analysis evaluated the effect of each individual study on the results, to ensure no single study held too much weight. This was done by eliminating the studies one by one and seeing if the results changed.
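That last leave-one-out analysis is simple enough to sketch. The per-study counts below are hypothetical, and the pooled estimate here is a plain sample-weighted average rather than the authors' actual Bayesian model; the point is only the mechanic of removing one study at a time and re-checking the estimate.

```python
# Hypothetical per-study counts on a single day: (false negatives, total samples).
studies = {"A": (10, 50), "B": (18, 60), "C": (7, 40), "D": (22, 80)}

def pooled_fnr(data: dict) -> float:
    """Sample-weighted false-negative rate across studies (a stand-in model)."""
    false_negs = sum(fn for fn, n in data.values())
    total = sum(n for _, n in data.values())
    return false_negs / total

print(f"all studies: {pooled_fnr(studies):.1%}")
for name in studies:  # leave each study out in turn
    rest = {k: v for k, v in studies.items() if k != name}
    print(f"without {name}: {pooled_fnr(rest):.1%}")
```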

This is what they found…

A total of 1330 samples were collected. On day 1 after exposure, the probability of a false-negative test was 100%. On day 4, the day prior to symptom onset, the probability of a false negative was 67%.

The best day to test was day 8 after exposure, 3 days after presumed symptom onset. The false-negative rate was 20%, with the narrowest confidence interval (12% to 30%). After day 8 the false-negative rate began to increase again, up to 67% on day 21 after exposure.

[Figure: probability of a false-negative RT-PCR result, by day since exposure]
Next, they calculated the effect a negative test would have on the post-test probability of infection. The post-test probability is the likelihood the disease is present after the result is known, built from the underlying base rate (the pre-test probability) and the test’s true-negative rate. A post-test probability of 100% means the patient most certainly has the disease; 0% means they certainly do not.

In the study they used an “attack rate,” defined as the likelihood of becoming infected after an exposure. Assuming an attack rate of 11% in the population, based on a large study of household contacts, they were able to build the graph below.

[Figure: post-test probability of infection after a negative test, by day since exposure, at an 11% attack rate]
The two highlighted days are day 3 and day 8. The blue line represents the 11% attack rate. A negative result on day 3 would only reduce the probability of infection from 11% to 10.9%, while a negative result on day 8 lowers it to 2.5%.

So, if someone is exposed, with “exposure” being the equivalent of living in the same house as someone with the virus, there is an 11% chance they were infected. If I test them on day 3 and their test is negative, there is still a 10.9% chance they are infected. If I test them on day 8 and it’s negative, there is a 2.5% chance they are infected. If your head is spinning, read that again a little slower.
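Those numbers fall out of a one-line Bayes calculation. This sketch assumes perfect specificity and plugs in approximate false-negative rates from the paper (about 98% on day 3 and 20% on day 8); it roughly reproduces the 10.9% and 2.5% figures, though the authors' full model will differ slightly.

```python
def prob_infected_given_negative(attack_rate: float, fnr: float,
                                 specificity: float = 1.0) -> float:
    """P(infected | negative test) by Bayes' rule."""
    false_neg = attack_rate * fnr               # infected but tests negative
    true_neg = (1 - attack_rate) * specificity  # uninfected and tests negative
    return false_neg / (false_neg + true_neg)

attack_rate = 0.11  # household-contact attack rate assumed in the study
for day, fnr in [(3, 0.98), (8, 0.20)]:  # approximate false-negative rates
    p = prob_infected_given_negative(attack_rate, fnr)
    print(f"day {day}: {p:.1%} chance of infection despite a negative test")
```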

Next, they tested different attack rates to see if the predictive value of the test changed, since their attack-rate assumption was based on a single study. They tested attack rates of 44%, 22%, and 5.5% (4x, 2x, and 0.5x the base case, respectively).

The post-test probability is on the Y axis, with the curves representing the pre-test probability (base rate), so you can visually see the change in predictive value.

[Figure 3: post-test probability after a negative test, by day since exposure, for attack rates from 5.5% to 44%]
To compare with the days above, the red line represents day 3 and the green line represents day 8. Even at a higher probability of infection, a negative test result on day 3 provides no valuable information. At the lowest probability of infection (5.5%), a negative test on any day does not change the probability of infection very much. Again, day 8 after exposure stands out as the most valuable day to test, since a negative result is most reliable on that day.
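The same simple calculation, swept across the attack rates the authors tested, reproduces the pattern in the figure (again using assumed false-negative rates of roughly 98% on day 3 and 20% on day 8):

```python
def post_test(attack_rate: float, fnr: float) -> float:
    """P(infected | negative test), assuming perfect specificity."""
    return attack_rate * fnr / (attack_rate * fnr + (1 - attack_rate))

for rate in [0.055, 0.11, 0.22, 0.44]:  # 0.5x, 1x, 2x, 4x the base attack rate
    day3, day8 = post_test(rate, 0.98), post_test(rate, 0.20)
    print(f"attack rate {rate:5.1%}: day 3 negative -> {day3:5.1%}, "
          f"day 8 negative -> {day8:5.1%}")
```

Notice that on day 3 the post-test probability barely moves from the attack rate itself, while on day 8 it falls substantially; that is exactly the gap between the red and green lines described above.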

The three sensitivity analyses they ran did not change the results much. When they eliminated individual studies, the results did not change. When they moved symptom onset to day 3 instead of day 5, they found a steeper early decrease in false-negative rates; you can imagine the low point on the curve shifting from day 8 to something like day 6 in this scenario. When they moved symptom onset to day 7, the opposite happened. When specificity was reduced to 90%, the results were very similar.

If you’ve made it this far, you’re probably looking for a succinct summary of all the jargon above. Two takeaways:

1) If you test negative before the day you would likely become symptomatic, the negative result provides essentially no information.

2) If you test negative after the day you should be symptomatic, say day 5 after exposure, your negative result is much more reliable.

As a practicing physician in this pandemic, I get requests from patients and employers all the time: a person was exposed, they have no symptoms, and they cannot return to work until they have a negative test. Most of these patients did not have close exposures. Their exposure was much less significant, likely lowering both their pre-test probability and the predictive value of the test. If they did have a close exposure, and it’s been less than 5 days since the exposure, the negative test is worthless. If their likelihood of infection is low, because they did not have a significant exposure, and they have no symptoms, it’s highly unlikely they have the virus. Good luck explaining this to someone.

The most common response I get goes something like this: “On the news it said anyone who wants a test can get a test.” That doesn’t mean your test result has any value. This is the importance of pre- and post-test probabilities and predictive value: how informative a test is depends on the likelihood you have the disease going in. As seen in figure 3, if the likelihood you have the disease is low and you have a negative test, there is no new information added; the test did not perform well. If your pre-test likelihood is high and you have a negative test after day 5, the post-test probability drops by roughly 80% in relative terms (here, from 11% to 2.5%). Still not zero, but much better than where you started.

The science makes sense. If you’ve been exposed and you are asymptomatic, that is likely because your body has not detected the virus yet. Most of the symptoms of viral infections come from your own immune system’s effort to eradicate them; there need to be enough viral particles circulating for your immune system to detect something foreign and fight it off. The same is true of the test: it relies on enough viral particles being present to be picked up by the swab and then replicated by the assay for detection. Because of this, timing and technique are important variables in testing, as is having a high or low suspicion for the disease. Your pre-test assumptions are the basis for your confidence in the test result.

Certainly, there were some flaws with the study. This was a meta-analysis of sorts, but not conducted per typical protocol. The underlying studies were not assessed for quality, and it is unknown whether other eligible studies exist. There was variability in the tests and testing methods across the studies, and two of the studies used presumed positive cases. When the authors systematically removed individual studies from their calculation, the underlying results did not change, suggesting no single study carried significant weight. Also, there was no assessment of whether the patients experienced repeat exposures.

This article was very interesting. I didn’t go into great detail about the validity of the study as I normally do, because I wanted to highlight some of the skills involved in test interpretation. I would welcome any thoughtful rebuttal and will post it on the site directly above this article. This paper focused only on false negatives. I can’t wait until we get a similar assessment of positive results; I’m sure that data will be very interesting.
