
Alternative gradient non-linearity distortion correction workflow / interface #894

Open
Lestropie opened this issue Oct 22, 2024 · 3 comments

Comments


Lestropie commented Oct 22, 2024

What would you like to see added in this software?

I have a suggestion that might improve both accuracy and execution speed of the gradient non-linearity distortion correction workflow.
It would however change how both users and developers interact with the workflow, and would require additional development effort, so I am open to others' opinions.
Relevant to #819, but posting separately here to keep that thread clean.
Explicitly pinging @bpinsard to see if there's interest in modifying that PR.
Applicable to nipreps/smriprep#355 and nipreps/fmriprep#1550 / #819.


The geometric distortions caused by gradient non-linearity are just a non-linear warp field that is static w.r.t. scanner space. While this is concisely parameterised by a vendor-specific gradient coefficients text file, it can be equivalently expressed as a 4D deformation field. Importantly, these distortions are agnostic to any specific image voxel grid; the field just needs to be evaluated on a destination image voxel grid in order for the correction to be applied.

The gradunwarp package is ultimately doing three separate steps IIUC:

  1. Solving for this continuous deformation field using a sparse lattice of sample points
  2. Resampling this field estimate onto the image voxel grid
  3. Performing a non-linear deformation of image data based on this field

Because the computations are a little expensive, step 1 is by default done on a much lower-resolution lattice (10mm spacing) than typical image acquisitions. This may cause inaccuracies in distortion correction relative to the equivalent correction as performed by the vendor software. Moreover, if a dataset has multiple input images that have come from the same scanner (whether within or across sessions), the calculations in step 1 are duplicated.
Further, step 3 appears to be comparatively slow due to its implementation (maybe just a lack of multi-threading).
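As a rough illustration of step 2 (not gradunwarp's actual code; array names, shapes and the shared-origin assumption are mine), the coarse field could be trilinearly resampled onto the image voxel grid along these lines:

```python
import numpy as np
from scipy.ndimage import map_coordinates


def resample_field(coarse_field, coarse_spacing, out_shape, out_spacing):
    """Trilinearly resample a coarse displacement lattice onto a finer grid.

    coarse_field: (X, Y, Z, 3) array of displacements (mm) on the coarse lattice.
    Assumes both grids share the same origin and axis orientation.
    """
    # Output voxel indices expressed in coarse-lattice index units
    idx = [np.arange(n) * out_spacing / coarse_spacing for n in out_shape]
    grid = np.meshgrid(*idx, indexing="ij")
    out = np.empty(tuple(out_shape) + (3,), dtype=np.float32)
    for c in range(3):  # interpolate each displacement component separately
        out[..., c] = map_coordinates(coarse_field[..., c], grid, order=1,
                                      mode="nearest")
    return out
```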

For my own current experimentation (which is specifically shoving gradunwarp into its own App for our local pipeline in the absence of #355), I came up with an alternative workflow:

  1. Generate an identity deformation field
    (Every voxel maps to its own centre)

  2. Run gradunwarp:

    1. Using a denser sampling lattice for accuracy
    2. Taking each of the three volumes of the identity deformation field as input
      (which is itself of a high spatial resolution, again for accuracy)
    3. Not performing Jacobian modulation

    This yields a single high-quality deformation field estimate.
    This process only needs to be run once per scanner model.

  3. This field can then very efficiently be:

    1. Applied in isolation to any input image using existing software tools
    2. Composed with any other transformation(s)

I'm currently using a [-300mm, 300mm] FoV (default), 300 sample points (likely overkill; default is 60), sampled on a 2mm voxel grid (matching the density of the sample points; image size ~100MB; maybe also overkill). That took me about 2 hours to compute (this could likely be cut by 67% by modifying gradunwarp). But once it's computed, applying the correction to an input T1w image takes 15 seconds using MRtrix3 (most of which is preloading the warp image). And this can be done for any image acquired with the same scanner model, regardless of whether it's the same participant, the same session, or even two different physical installations of the same scanner model.
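For step 1 above, a minimal sketch of generating the identity deformation field (every voxel storing the scanner-space coordinates of its own centre), assuming a 2mm grid over the default [-300mm, 300mm] FoV and an illustrative output filename, could look like:

```python
import numpy as np
import nibabel as nib

spacing = 2.0                       # mm; assumed grid spacing
fov_min, fov_max = -300.0, 300.0    # mm; default gradunwarp FoV per axis

coords = np.arange(fov_min, fov_max + spacing, spacing)

# 4D image: X x Y x Z x 3, where each voxel stores its own (x, y, z) in mm
field = np.stack(np.meshgrid(coords, coords, coords, indexing="ij"),
                 axis=-1).astype(np.float32)

# Affine placing voxel (0, 0, 0) at (fov_min, fov_min, fov_min) with 2mm spacing
affine = np.diag([spacing, spacing, spacing, 1.0])
affine[:3, 3] = fov_min

nib.save(nib.Nifti1Image(field, affine), "identity_warp.nii.gz")
```

Once gradunwarp has been applied to the three component volumes of such an image, the result is itself a deformation field that can be composed with other transformations or applied directly (e.g. MRtrix3's mrtransform accepts a 4D deformation field via its -warp option).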


My key question here therefore is: which of these implementations is best?

  1. The current proposal in #819 (ENH: add gradunwarp base workflow for f/smriprep):
    gradunwarp itself gets incorporated into the workflow.
    Any pipeline needs to provide the gradient coefficients file and indicate the scanner vendor so that gradunwarp can be executed.
    Additional options regarding distortion field estimation may also need to be exposed.
    The deformation field is computed multiple times: in the worst case once per image; at best, once per scanning session.

  2. An alternative workflow that instead takes as input a pre-estimated deformation field:
    Anyone can pre-estimate the field from the vendor-provided coefficients file using gradunwarp or any other software tool.
    Applying correction to input images, or composing with other transformations, is very fast.
    The storage size of the gradient non-linearity information is however much larger.

  3. A potential hybrid:
    The requisite input for the workflow could be either a gradient coefficients file or a pre-calculated deformation field.
    Doesn't solve the interface complexity problem of point 1.
    But would be better for expert users who can pre-calculate those fields and provide them as input to avoid redundant calculations.

If these alternatives would add too much unwanted complexity or effort, then it's all good. Just offering my logic here in case maintainers think it's worth considering.

Do you have any interest in helping implement the feature?

Yes

Additional information / screenshots

No response

@bpinsard
Contributor

Thanks for the very detailed description of the different workflow alternatives, and also for reviving this feature, which I have not had much time to work on.
While working on that feature, I was pointed to the recently source-released TORTOISE (https://github.com/QMICodeBase/TORTOISEV4/tree/main/src/tools/gradnonlin) as an alternative tool for gradient non-linearity correction / displacement field generation; we should check whether its control of the warp resolution is flexible.

The idea of taking a scanner-specific displacement field instead of coefficient files as input is a great one and might simplify some of the workflow. In addition:

  • for some manufacturers, the gradient-coil-specific harmonics file cannot be openly shared, but the resulting displacement field could perhaps be shared?
  • if these are shareable, we could build a datalad repo of these pre-computed warps.
  • we could then use BIDS tags to automatically map ManufacturersModelName (or DICOM tag 0x0021,0x1033 for Siemens) to the gradient coil model (e.g. Prisma/PrismaFit -> AS82); this would be useful for multi-site datasets.
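As a sketch only (the mapping entries and function name are illustrative, with the Prisma/PrismaFit -> AS82 pairing taken from the example above), such a lookup might be as simple as:

```python
from typing import Optional

# Illustrative mapping from BIDS ManufacturersModelName to gradient coil model;
# not an authoritative table.
MODEL_TO_COIL = {
    "Prisma": "AS82",
    "Prisma_fit": "AS82",
}


def coil_for_model(model_name: str) -> Optional[str]:
    """Return the gradient coil identifier for a scanner model, if known."""
    return MODEL_TO_COIL.get(model_name)
```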

@Lestropie
Author

I'm currently sceptical about the prospect of having a public repository of gradient non-linearity fields. I suspect that vendors would wish for the deformation fields not to be shared, for the same reason as they currently require that the coefficient files remain private. I am however starting this dialogue with my local contact; we'll see what happens.

@Lestropie
Author

It has been made clear to me by my local vendor contact that representing hardware gradient non-linearity information as a deformation field, rather than as the native coefficient text files, does not obviate the private nature of that information, and that anyone making such a representation publicly visible would almost certainly be in breach of usage agreements.

I must strongly assert that regardless of the outcome of this discussion, nobody should make an attempt to create a public database of scanner gradient non-linearities, no matter what form those data may take.

We can nevertheless discuss how one might make use of such data in those instances where they have been made privately available:

  1. Input specificity:
    Providing an individual file as input to a BIDS App specifically is somewhat error-prone. The BIDS Apps interface involves specifying an input dataset, which may contain multiple participants, and the default behaviour is to process all participants sequentially. There is no guarantee that all participants in a dataset will have been acquired using the same scanner hardware. Therefore, while an individual workflow can take a coefficients file as input, the process of constructing that workflow needs to contain an outer layer of logic that looks at ManufacturersModelName and matches it to the corresponding data within a user-specified input directory (see the sketch after this list).

  2. Input format:
    Providing this information as deformation fields rather than coefficient files could have made a big difference if that transition could have yielded a public dataset. In the absence of that, the argument is not as strong. It is true that pre-computing the deformation field per scanner allows for estimating a high-quality field once and then quickly applying it to many datasets, but it's not a paradigm shift. So I'd support proceeding with workflows that take a coefficient file as input, as is the conventional usage. Checking the input directory for a pre-computed deformation field before checking for a coefficients file could be proposed as a future augmentation.
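For the outer layer of logic mentioned under point 1, a hedged sketch (assuming a user-supplied directory of coefficient files named after ManufacturersModelName, which is purely an illustrative layout) might look like:

```python
import json
from pathlib import Path
from typing import Optional


def coeffs_for_image(sidecar_json: Path, coeffs_dir: Path) -> Optional[Path]:
    """Select the gradient coefficients file matching an image's scanner model.

    Reads ManufacturersModelName from the BIDS JSON sidecar and looks for
    <coeffs_dir>/<ManufacturersModelName>.grad (naming convention assumed).
    """
    model = json.loads(sidecar_json.read_text()).get("ManufacturersModelName")
    if model is None:
        return None
    candidate = coeffs_dir / f"{model}.grad"
    return candidate if candidate.exists() else None
```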
