We hear a lot about genetic testing being better than ever before. But how can we be sure genetic tests are good enough? Are we able to assess how accurate a test is? How often the test misses things (false negatives) or finds things that aren’t really there (false positives)? And are we able to compare one test against another, so we can decide which test to have?
Sequence quality is crucial
To answer these questions we first have to decide what determines how good a genetic test is. One crucial factor is sequence quality. We discussed this in detail in a previous blog. TGMI recently presented the Quality Sequencing Minimum (QSM) to improve sequence quality. The QSM is a transparent standard which allows test providers to evaluate and communicate the quality of their tests. We are very pleased that the QSM paper is driving interest and renewed engagement in this important area.
Finding genetic variants is an essential requirement
Finding genetic variants is the most important responsibility of a genetic test. How well a test does this is called the analytical sensitivity. There is a simple way to determine analytical sensitivity. You see how well the test finds genetic variants that are already known to be real. The variants used in the assessment should cover all the different types of variants the test needs to find. And the variants should have been confirmed as real by a different method to the one being assessed. As well as giving the analytical sensitivity of your test, this performance review gives valuable information that helps you to improve your test.
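This kind of performance review boils down to a simple calculation: of the variants already confirmed to be real, what fraction did the test find? The sketch below illustrates this, assuming, purely for illustration, that variants are represented by simple identifiers; all the names and numbers are hypothetical, not taken from any real test.

```python
# A minimal sketch of measuring analytical sensitivity against a benchmark
# of confirmed variants. Variant names here are placeholders, not real data.

def analytical_sensitivity(confirmed, detected):
    """Fraction of confirmed true variants that the test actually found."""
    confirmed, detected = set(confirmed), set(detected)
    return len(confirmed & detected) / len(confirmed)

# Hypothetical benchmark: 8 confirmed variants spanning different types.
benchmark = ["snv_1", "snv_2", "indel_1", "indel_2",
             "exon_cnv_1", "exon_cnv_2", "splice_1", "splice_2"]

# Hypothetical test output: everything found except one exon CNV.
test_calls = [v for v in benchmark if v != "exon_cnv_2"]

sensitivity = analytical_sensitivity(benchmark, test_calls)
print(f"Analytical sensitivity: {sensitivity:.1%}")  # 7/8 = 87.5%
```

Looking at *which* benchmark variants were missed, as in this example, is also what gives the valuable information for improving the test.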
We need more variant data to benchmark genetic tests
We urgently need more variant data that we can use to assess the analytical sensitivity of genetic tests. And we need to increase the number and range of variants included in these benchmarking datasets.
Currently, most genetic testing laboratories perform small-scale evaluations using in-house data. These rarely cover the full range of variant types a test must detect. Many laboratories also rely on computer-simulated variants to assess the performance of their variant detection pipelines. These are useful. In particular, they allow the number and range of variants assessed to be much bigger. But they cannot completely replace real data from real people.
False positive results are a problem
A recent study tried to confirm 49 variants reported in direct-to-consumer tests. 40% turned out to be false positives. Most were in cancer predisposition genes. For eight people the direct-to-consumer test indicated that they had a pathogenic, cancer-predisposing BRCA mutation. But the repeat testing showed these were not really there. They were ‘false positives’.
Direct-to-consumer testing companies recommend, usually in very small print, that their results should be confirmed before any medical action is taken. How often does this happen? No one knows, but we do know that it doesn’t always happen, partly because not all health systems cover the cost of the repeat testing. So people have to pay for the confirmatory test themselves, and some cannot afford to. In turn, this makes it likely that some people are acting on genetic test results that are not correct.
False negative results are a problem
Accredited genetic testing laboratories should have checking processes in place that keep false positive results to a minimum. Accredited laboratories also must participate in performance evaluation schemes that are quite good at checking for false positives.
Evaluating how often false negatives occur is more difficult. But it is clear that they do occur. A recent study sent eight BRCA pathogenic variants to 20 laboratories. Only 13 were able to test for all the different variant types. Seven were not able to test for ‘exon CNVs’ (copy number variants affecting whole exons), which are tricky to detect but account for 10% of pathogenic BRCA variants. People, not unreasonably, usually assume all BRCA tests cover all the different types of pathogenic variants. So, if they get a negative test result from a lab that doesn’t test for exon CNVs, they may not realise the testing was incomplete. This is one type of false negative result.
Failing to detect a real genetic variant is the most common type of false negative. In the BRCA study, ten labs found all the pathogenic BRCA variants. But three labs missed variants. Four of the eight variants were not detected by all the laboratories. Overall the false negative rate was small. But 50% of cancer-predisposing variants were not detected by all the laboratories. Clearly we need to do better.
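The arithmetic behind these two figures is worth seeing side by side: a small overall false negative rate can coexist with half the variants being missed by at least one lab. The short sketch below illustrates this; the per-variant miss counts are invented to echo the study’s headline numbers, not taken from the study itself.

```python
# Hypothetical illustration of why a small overall false negative rate can
# coexist with 50% of variants being missed by at least one laboratory.
# The miss counts below are invented, loosely echoing the study's figures.

labs = 13      # labs able to test for all the variant types
variants = 8   # pathogenic BRCA variants circulated

# Invented: how many of the 13 labs failed to detect each variant.
misses_per_variant = [0, 0, 0, 0, 1, 1, 1, 1]

overall_fn_rate = sum(misses_per_variant) / (labs * variants)
variants_missed = sum(1 for m in misses_per_variant if m > 0)

print(f"Overall false negative rate: {overall_fn_rate:.1%}")  # 4 of 104 tests, 3.8%
print(f"Variants missed by at least one lab: {variants_missed}/{variants}")  # 4/8, i.e. 50%
```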
The ICR639 CPG NGS validation series
To do better we need to comprehensively evaluate performance using gold-standard benchmarking resources. To help achieve this for cancer predisposition gene testing, TGMI have made the ICR639 CPG NGS validation series available.
The ICR639 series includes data from 639 individuals with pathogenic variants in cancer predisposition genes. We have confirmed all the variants using two completely different methods. So we can be confident they are real, true positives. We have also taken care to have good representation of the most difficult types of variants to detect. You can find out more in the paper we published in Wellcome Open Research as part of the TGMI Gateway.
A benchmarking resource for BRCA testing
The BRCA genes, BRCA1 and BRCA2, are amongst the most widely tested genes in clinical practice today. Hundreds of different providers now offer BRCA testing across the globe. All do the testing in different ways. The differences can be minor or very substantial. Test providers will usually give you basic information about how they generate the DNA sequence. But you can rarely find out how they analyse that sequence to find variants. It is at the analysis stage that most errors occur.
In the ICR639 series we have included 502 pathogenic BRCA variants. We believe it could serve as a gold-standard benchmarking resource used by every laboratory conducting BRCA testing. This would give labs vital information about their false negative rates. It would also give users more information to help them compare BRCA testing providers.
We need to make similar benchmarking resources for other genes. More fundamentally, we need to change the culture of genetic testing so there is more transparency. We need to demand and deliver better information about how tests are being done, how their performance compares with other labs, and whether they meet the expected standard for that test. Most importantly, we need to communicate this information in ways that are understandable to clinicians and patients.