Revise ancestor generation algorithm to improve performance #1012
A major limitation of tsinfer 0.4 has been that ancestors with high frequency focal sites are excessively long. This reduces parallelism and wastes computation time in ancestor matching. We've explored a number of solutions (e.g. #911), but we've found a simple one that seems to work really well.
It's easiest to explain with an example:

Constructing an ancestor with focal site 5 and moving to the left, we start with a sample set of C-F. Since the focal site's derived allele count (AC) is 4, we can only use sites with an AC of 5 or higher as inference sites. In the current implementation, we would ignore sites 1-4 entirely, but we then miss an informative signal: since B also carries the derived allele at site 3, and is the only other non-carrier at site 2, it seems that C has recombined into a clade with B and should be excluded from the sample set.
In the new approach, we still only insert a mutation into the ancestral haplotype if it is older than the focal site, but we use sites of all frequencies for determining when to exclude samples from the sample set. However, we only count conflicts at sites where there are carriers outside of the sample set (i.e. the derived AC in the full population > AC in the sample set).
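One plausible sketch of this conflict-counting rule, for a site whose mutation is younger than the focal site (so the ancestral haplotype carries the ancestral state there). The function name and data layout are illustrative, not tsinfer's actual implementation:

```python
def conflicting_samples(genotypes, sample_set, derived=1):
    """Return samples in `sample_set` that conflict at this site.

    `genotypes` maps sample id -> allele (0 = ancestral, 1 = derived).
    At a site younger than the focal mutation, carriers of the derived
    allele inside the sample set disagree with the ancestral haplotype.
    We only count those disagreements when some carriers lie *outside*
    the sample set, i.e. the derived AC in the full population exceeds
    the AC within the sample set.
    """
    total_ac = sum(1 for g in genotypes.values() if g == derived)
    in_set_carriers = {s for s in sample_set if genotypes[s] == derived}
    if total_ac <= len(in_set_carriers):
        # All carriers are inside the sample set: no outside signal,
        # so this site contributes no conflicts.
        return set()
    # Carriers inside the set share a derived allele with samples
    # outside it; these are candidates for exclusion.
    return in_set_carriers
```

On site 3 from the example above (B and C carry the derived allele, sample set C-F), the carrier B outside the set means C's disagreement counts, flagging C for exclusion.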
Initial validation of the new approach looks promising. I ran a small out-of-Africa sim of 1mbp with 200 samples. Using the old validation code, I can match the inferred ancestors to simulated ones and compute `max(true_left - inferred_left, inferred_right - true_right)`, which I call the max overshoot. While the old algorithm increasingly overestimates the length of the ancestors as the focal frequency increases, the new algorithm does not. We don't seem to make many ancestors shorter than their true length either.

I've done some testing on larger simulations with similar results, but there is still a lot more validation to do, e.g. with mispolarised alleles and sequencing errors. I did run @hyanwong's code for analysing sample-to-root edges from #903, and in simulations we don't seem to see any change. However, there are a lot more sample-to-root edges in real data with the new method, which is fixed by using a small mismatch ratio for sample matching.
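For reference, the max overshoot metric can be computed per matched ancestor pair like this (the interval bounds are illustrative genome coordinates):

```python
def max_overshoot(true_left, true_right, inferred_left, inferred_right):
    """Worst-case overshoot of an inferred ancestor past the true one.

    Positive when the inferred interval extends beyond the true
    interval on either side; negative when it is strictly contained.
    """
    return max(true_left - inferred_left, inferred_right - true_right)
```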
The new approach does require more work from the CPU for ancestor generation, but it seems negligible in comparison to the gains in parallelism and speed of ancestor matching. Comparing the methods on chr20q of the 1000 Genomes Project, we see that fewer ancestor groups are needed and there is a 5X decrease in CPU time needed for matching. The decrease in wall time is more modest: it seems that we aren't using the 126 threads of this EPYC system as effectively in the new approach.
I've implemented the new algorithm in C and Python: the biggest change required is that the derived allele count of a site needs to be stored in the ancestor builder.
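A minimal sketch of that extra state, in Python (the class and attribute names are hypothetical, not tsinfer's real ancestor builder):

```python
import numpy as np


class AncestorBuilderSketch:
    """Illustrative only: shows the derived-AC bookkeeping, nothing more."""

    def __init__(self, genotype_matrix):
        # genotype_matrix: (num_sites, num_samples) array of 0/1 alleles.
        self.genotypes = np.asarray(genotype_matrix)
        # Derived allele count per site, precomputed once so the
        # exclusion rule can compare the in-sample-set count against
        # the full-population count at any frequency.
        self.derived_ac = self.genotypes.sum(axis=1)
```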