Goodbody Clinic: "Lung Cancer Testing"

Time to talk about Goodbody Clinic. They offer various blood tests for cancers. Their “lung cancer testing” page, in particular, appeared in my Sponsored Links this morning. Let’s have a look, shall we?

Goodbody Clinic Lung Cancer Screening landing page

Goodbody Clinic Lung Cancer Testing; PDF

Lots of bold claims. Where have they come from? Well, they appear to be derived from a 2021 study, which evaluated the EarlyCDT blood test in Scotland.1 The promotional material for the EarlyCDT test claims that it can detect lung cancer years before clinical diagnosis. But it’s not quite so straightforward.

TL;DR Goodbody Clinic’s promotion of the EarlyCDT test is misleading because it overemphasises a 99.3% “all-clear” rate, without mentioning that the test misses many actual cancer cases due to its low sensitivity. This gives a false sense of security, especially for high-risk individuals.


99.3% NPV misleading without context

The 99.3% Negative Predictive Value (NPV) sounds reassuring, but it’s misleading, even for high-risk participants. NPV tells us how well the test rules out cancer when the result is negative, but even in high-risk groups, lung cancer is still relatively uncommon. Most people won’t have cancer, which inflates the NPV. What’s more concerning is the low sensitivity (32% for early-stage cancer1), meaning the test misses many actual cases. In high-risk populations, this low sensitivity is far more problematic than the NPV suggests, but the promotional material doesn’t address this.
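To see how a high NPV can coexist with poor sensitivity, here’s a minimal sketch in Python. The numbers are illustrative assumptions on my part, not figures from the study: a high-risk cohort with 2% prevalence, the 32% sensitivity mentioned above, and 90% specificity (roughly consistent with about 10% of participants testing positive).

```python
# Illustrative numbers (my assumptions, not study data): a high-risk
# cohort where 2% have lung cancer, tested with 32% sensitivity and
# 90% specificity.
prevalence = 0.02
sensitivity = 0.32
specificity = 0.90

cohort = 10_000
with_cancer = cohort * prevalence          # 200 people
without_cancer = cohort - with_cancer      # 9,800 people

true_pos = with_cancer * sensitivity       # ~64 cancers caught
false_neg = with_cancer - true_pos         # ~136 cancers missed
true_neg = without_cancer * specificity    # ~8,820 correct all-clears
false_pos = without_cancer - true_neg      # ~980 false alarms

npv = true_neg / (true_neg + false_neg)    # high, because most people
                                           # don't have cancer
ppv = true_pos / (true_pos + false_pos)    # low: most positives are
                                           # false alarms

print(f"NPV: {npv:.1%}, PPV: {ppv:.1%}")
print(f"Cancers missed: {false_neg:.0f} of {with_cancer:.0f}")
```

The NPV comes out around 98.5% here simply because most of the cohort is cancer-free, while over two-thirds of the actual cancers slip through. A reassuring-sounding NPV and a test that misses most cancers are entirely compatible.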

Obfuscating (?inflating) sensitivity

Baldwin et al. describe the problem far more eloquently than I do,2 but in short:

  • Only test-positive participants were given follow-up CT scans, while the 90% who tested negative weren’t scanned further. This creates a biased result by inflating the test’s sensitivity.

  • If all participants had received CT scans, the sensitivity would likely have dropped to around 10%, meaning many real cancer cases were missed simply because those participants weren’t followed up.

“Hold up” I hear you say, “10%? Really?”

I have a simple brain, so I find it easier to think about this in terms of apples and oranges:

Imagine you have a basket of 100 fruits, and suppose 30 of them are oranges (representing lung cancer cases), though you don’t know that yet. Your test flags 10 fruits as suspicious, you inspect only those, and you find 3 oranges. Over the following months, another 7 oranges among the unflagged fruit ripen and reveal themselves, so you end up knowing about 10 oranges in total.

Now, you advertise that your test found 30% of the oranges (3 out of 10). Sounds decent, right? But here’s the thing: you never properly examined the other 90 fruits. If you had inspected every fruit in the basket, you would have discovered all 30 oranges, and the true sensitivity of your test (the percentage of oranges found out of all the oranges) would be just 10% (3 out of 30).

In the EarlyCDT study, only test-positive people got follow-up CT scans, which is the equivalent of inspecting only the flagged fruit. If everyone had been scanned (i.e. every fruit inspected), more lung cancers would have been found among the test-negatives. The test misses far more real cases than the reported figure suggests, making its true sensitivity much lower than reported.

Only test-positive participants got follow-up CT scans, making it look like the test was more effective than it really was. If everyone had been scanned, the true sensitivity would have been much lower.
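The fruit analogy can be sketched as a toy simulation. This is my own model with made-up parameters, not the study’s design: assume the test’s true sensitivity is 10%, and compare the sensitivity you would report when only test-positives are verified (missed cancers only being counted if they happen to present clinically during follow-up) against the figure you would get by verifying everyone.

```python
import random

random.seed(42)

# Toy model (my assumptions, not study parameters):
N = 100_000
prevalence = 0.02          # 2% of the cohort has cancer
true_sensitivity = 0.10    # the test flags only 10% of real cancers
p_clinical = 0.30          # chance a missed cancer presents clinically
                           # during follow-up (and so gets counted as a
                           # known false negative)

tp = fn_known = fn_hidden = 0
for _ in range(N):
    if random.random() >= prevalence:
        continue                   # no cancer: irrelevant to sensitivity
    if random.random() < true_sensitivity:
        tp += 1                    # flagged, then verified by CT
    elif random.random() < p_clinical:
        fn_known += 1              # missed, but surfaced clinically
    else:
        fn_hidden += 1             # missed and never counted

reported = tp / (tp + fn_known)            # only visible misses counted
actual = tp / (tp + fn_known + fn_hidden)  # as if everyone were verified

print(f"Reported sensitivity: {reported:.0%}")
print(f"Actual sensitivity:   {actual:.0%}")
```

With these (arbitrary) parameters, the reported sensitivity comes out close to 30% even though the test only ever catches 10% of cancers: the uncounted misses flatter the numerator’s share of the denominator.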

This also explains why, in fairness, it wouldn’t make any sense to promote this test by quoting the Positive Predictive Value (PPV); all that would do is show that most positives are false alarms.

Misleading claim re late-stage cancer detection

The promotional material makes a big deal about reducing late-stage cancer diagnoses, but this claim is misleading for similar reasons.

Certainly, the study that the promotional material appears to rely on suggests a reduction in late-stage cancers, but this result is skewed by the same selective follow-up. Put another way: the only participants who had a chance of being diagnosed with early-stage cancer were the ones who tested positive and received additional CT scans. The test-negative participants could have had undetected cancers that would eventually progress to late stage, but since they weren’t screened further, those cases weren’t counted in the study.

By limiting CT scans to the test-positive group, the study artificially reduced the number of late-stage cancers diagnosed in the intervention group, making it look like the EarlyCDT test played a larger role in reducing late-stage diagnoses than it actually did. If all participants had been given CT scans, the reduction in late-stage cancers would likely have been less dramatic, and the true effectiveness of the test would have been clearer.
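A toy model makes the stage-shift artefact concrete. All numbers here are my own illustrative assumptions: cancers caught by CT are counted as early-stage, while unflagged cancers only enter the statistics if they present clinically (late) within the study window.

```python
# Toy model of the late-stage artefact (illustrative assumptions only;
# none of these numbers come from the study).
cancers = 1000             # cancers present in the screened arm
sensitivity = 0.10         # fraction the blood test flags

flagged = cancers * sensitivity       # ~100: get CT, diagnosed early
unflagged = cancers - flagged         # ~900: no further screening

p_present = 0.30                      # chance an unflagged cancer
                                      # presents clinically (late-stage)
                                      # within the study window
late_counted = unflagged * p_present  # ~270 late-stage diagnoses recorded
deferred = unflagged - late_counted   # ~630 progress unseen; assumed here
                                      # to be diagnosed late, but only
                                      # after the study ends

print(f"Late-stage cases recorded during the study: {late_counted:.0f}")
print(f"Late-stage cases deferred past the study window: {deferred:.0f}")
```

The study window records only a fraction of the late-stage cancers the test-negative group will eventually produce, so the intervention arm’s late-stage count looks artificially low.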

Of course, this doesn’t end up mentioned in the promotional material either.

False sense of security

90% of participants tested negative and were excluded from further screening, despite being a high-risk group. Without comprehensive follow-up, a negative result offers false reassurance: cancer can go undetected and progress, resulting in delayed diagnosis and treatment.

The solution, according to the promotional material, appears to be — wait for it — to repeat the test every three months!

Graphic states: How often should I take a test? We believe it's most effective to monitor your health regularly over time. So we recommend taking your test every three months. This is the amount of time it typically takes to see the impact of health and lifestyle changes.

Get tested every three months! (says we)
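As a rough illustration of why quarterly retesting doesn’t rescue a low-sensitivity test, here’s a sketch. My assumption: each repeat test is an independent draw at 10% sensitivity, which is generous, since a person’s autoantibody levels are likely correlated between their own repeat tests, so the real picture would be worse.

```python
# Probability that a cancer present at the outset is still undetected
# after k quarterly tests, assuming (generously) each test is an
# independent draw at 10% sensitivity.
sensitivity = 0.10

for k in range(1, 5):
    p_missed = (1 - sensitivity) ** k
    print(f"After {k} test(s): {p_missed:.0%} chance the cancer "
          f"is still undetected")
```

Even after a full year of quarterly tests, roughly two-thirds of cancers present at the start would still be undetected, and the cancer has had a year to progress in the meantime.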

EarlyCDT vs Standard Methods

The EarlyCDT test isn’t compared to standard screening tools like low-dose CT (LDCT) in the promotional material.

LDCT has been shown to be far more effective in detecting early-stage lung cancers in high-risk groups, yet the promotional material doesn’t mention this. Without comparing the test to LDCT, consumers are left with the impression that this blood test is a standalone solution, when in fact it is much less effective than LDCT when used in isolation.

Next steps

Goodbody Clinic isn’t the only company promoting this test, but they’re one of the few companies bidding for clicks in your search results, every time you search for a “cancer blood test”.

They’ve surely read the technical literature, and will know that the high NPV hides the low sensitivity, and that the selective follow-up in the study inflates the test’s apparent ability to detect early-stage cancers. But they rely on the consumer not understanding this.

Come on folks, do better.


  1. Sullivan FM, Mair FS, Anderson W, Armory P, Briggs A, Chew C, et al. Earlier diagnosis of lung cancer in a randomised trial of an autoantibody blood test followed by imaging. Eur Respir J. 2021 Jan;57(1). ↩︎ ↩︎

  2. Baldwin DR, Callister ME, Crosbie PA, O’Dowd EL, Rintoul RC, Robbins HA, et al. Biomarkers in lung cancer screening: the importance of study design. Eur Respir J. 2021 Jan;57(1):2004367. ↩︎