What we are looking for is the probability that a person has Corona, given a positive test from a test that is 90% accurate.
For brevity:
A = have Corona
B = have positive test
P(A) = 0.01 – the base rate of contagion in the population. By implication, P(not-A) = 0.99.
The test is 90% accurate, and let us further suppose that this holds for both the sensitivity and the specificity. Sensitivity is about the absence of false negatives – are you finding it at all? – the proportion of actual positives that are correctly identified as such. Specificity is about the absence of false positives – are you detecting something else? – the proportion of actual negatives that are correctly identified as such. Ideally both should be high, but in any real test one will typically have to be traded off against the other; both being 90% is unrealistic.
In short: P(B|A) = 0.9 and P(not-B|not-A) = 0.9.
From this we use the fact that, for a fixed condition, the probabilities of complementary outcomes add up to 1,
i.e. P(B|A) + P(not-B|A) = 1,
and get P(not-B|A) = 0.1 and P(B|not-A) = 0.1.
Bayes’ Theorem stipulates P(A|B) = P(B|A) * P(A) / P(B)
Where, by the law of total probability, P(B) = P(B,A) + P(B,not-A) = P(B|A) * P(A) + P(B|not-A) * P(not-A)
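To make the formula concrete, here is a minimal Python sketch of the computation; the function name posterior and its argument names are illustrative choices, not anything standard:

```python
def posterior(p_a, p_b_given_a, p_b_given_not_a):
    """P(A|B) by Bayes' Theorem, with P(B) expanded via total probability."""
    p_not_a = 1 - p_a  # complement of the prior
    # P(B) = P(B|A) * P(A) + P(B|not-A) * P(not-A)
    p_b = p_b_given_a * p_a + p_b_given_not_a * p_not_a
    # P(A|B) = P(B|A) * P(A) / P(B)
    return p_b_given_a * p_a / p_b
```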
Putting in the numbers from above:
P(A|B) = (0.9 * 0.01) / (0.9 * 0.01 + 0.1 * 0.99) = 0.009 / (0.009 + 0.099) = 0.009 / 0.108 ≈ 8.3%
In plain language: the probability of actually having Corona, given a positive test with 90% accuracy, is only about 8%. The true positives are simply swamped: out of 10,000 people, the 100 infected yield 90 true positives, while the 9,900 healthy yield 990 false positives.
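The sketch above reproduces the number:

```python
print(posterior(p_a=0.01, p_b_given_a=0.9, p_b_given_not_a=0.1))
# 0.0833... – about 8%: the chance of having Corona given a positive test
```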
Let’s recast: what is the chance of not having Corona given that the test says you don’t? In other words: how reliable is the all-clear?
For brevity: what is P(not-A|not-B)?
Bayes’ Theorem says P(A|B) = P(B|A) * P(A) / P(B)
which for this purpose becomes:
P(not-A|not-B) = P(not-B|not-A) * P(not-A) / P(not-B)
Where P(not-B) = P(not-B,not-A) + P(not-B,A) = P(not-B|not-A) * P(not-A) + P(not-B|A) * P(A)
All of these numbers we already have:
P(not-A|not-B) = 0.9 * 0.99 / (0.9 * 0.99 + 0.1 * 0.01) = 0.891 / 0.892 ≈ 99.9%
Which is rather impressive. The all-clear really is all-clear. No means no.
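The same sketch covers the all-clear if we swap roles: treat not-A as the event of interest and not-B as the evidence, so the prior becomes P(not-A) = 0.99:

```python
print(posterior(p_a=0.99, p_b_given_a=0.9, p_b_given_not_a=0.1))
# 0.9988... – about 99.9%: the chance of being healthy given a negative test
```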