Alternative gradient non-linearity distortion correction workflow / interface #894
Thanks for the very detailed description of the different workflow alternatives, and also for reviving this feature, which I had not had much time to work on. Taking a scanner-specific displacement field instead of coefficient files as input is a great idea and might simplify some of the workflows.
> I'm currently sceptical about the prospect of having a public repository of gradient non-linearity fields. I suspect that vendors would wish for the deformation fields to not be shared for the same reason as they currently require that the coefficient files remain private. I am however starting this dialogue with my local contact; we'll see what happens.
It has been made clear to me by my local vendor contact that representing hardware gradient non-linearity information as a deformation field, rather than as the native coefficient text files, does not obviate the private nature of that information, and anyone making such a representation publicly visible would almost certainly be in breach of usage agreements. I must strongly assert that, regardless of the outcome of this discussion, nobody should attempt to create a public database of scanner gradient non-linearities, no matter what form those data may take. We can nevertheless discuss how one might make use of such data in those instances where they have been made privately available:
What would you like to see added in this software?
I have a suggestion that might improve both accuracy and execution speed of the gradient non-linearity distortion correction workflow.
It would however change how both users and developers interact with the workflow, and would require additional development effort, so I am open to others' opinions.
Relevant to #819, but posting separately here to keep that thread clean.
Explicitly pinging @bpinsard to see if there's interest in modifying that PR.
Applicable to nipreps/smriprep#355 and nipreps/fmriprep#1550 / #819.
The geometric distortions caused by gradient non-linearity are just a non-linear warp field that is static w.r.t. scanner space. While this is concisely parameterised by a vendor-specific gradient coefficients text file, it can be equivalently expressed by a 4D deformation field. Importantly, these distortions are agnostic to any specific image voxel grid; they just need to be resampled onto a destination image voxel grid in order for the resampling to take place.
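As a sketch of that grid-agnosticism: a deformation field defined on its own scanner-space grid can be pulled onto any destination voxel grid with plain interpolation, with no reference to the acquisition that motivated it. The following numpy/scipy illustration is mine, not any package's API; the function name and conventions (field stored as an (x, y, z, 3) array, affines mapping voxel indices to scanner mm) are assumptions for the example.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def resample_field(field, field_affine, target_shape, target_affine):
    """Resample a 4D (x, y, z, 3) deformation field onto a target voxel grid.

    Both affines are 4x4 voxel-to-scanner-mm transforms.
    """
    # Scanner-space coordinates of every target voxel centre
    ii, jj, kk = np.meshgrid(*[np.arange(n) for n in target_shape], indexing="ij")
    vox = np.stack([ii, jj, kk, np.ones_like(ii)], axis=-1).astype(float)
    world = vox @ np.asarray(target_affine).T
    # Convert those scanner coordinates into field voxel coordinates
    fvox = (world @ np.linalg.inv(field_affine).T)[..., :3]
    coords = np.moveaxis(fvox, -1, 0)  # (3, X, Y, Z) layout for map_coordinates
    # Trilinearly interpolate each vector component of the field
    return np.stack(
        [map_coordinates(field[..., c], coords, order=1) for c in range(3)],
        axis=-1,
    )
```

The field itself never needs to know anything about the images it will eventually correct; only this cheap resampling step is per-image.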
The `gradunwarp` package is ultimately doing three separate steps, IIUC:

1. Evaluate the vendor's gradient coefficients to compute the displacement field on a lattice in scanner space.
2. Interpolate that field onto the voxel grid of the input image.
3. Resample the input image accordingly.

Because the computations in step 1 are a little expensive, it is by default performed on a much lower-resolution lattice (10 mm spacing) than typical image acquisitions. This may introduce inaccuracies in the distortion correction relative to the equivalent correction as performed by the vendor's software. Moreover, if a dataset has multiple input images that have come from the same scanner (whether within or across sessions), the calculations in step 1 are needlessly duplicated.

Further, step 3 appears to be comparatively slow due to its implementation (perhaps just a lack of multi-threading).
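The accuracy cost of the coarse 10 mm lattice in step 1 can be illustrated with a toy stand-in for the expensive per-point evaluation (the function and parameters below are purely illustrative, not `gradunwarp`'s actual code):

```python
import numpy as np
from scipy.ndimage import zoom

def toy_displacement(coords_mm):
    # Cheap stand-in for the expensive per-point field evaluation;
    # a smooth nonlinear function of position, as the real field is.
    return 0.001 * coords_mm ** 2

coarse_lattice = np.arange(-300.0, 301.0, 10.0)  # 61 points at 10 mm spacing
coarse_vals = toy_displacement(coarse_lattice)   # "step 1" on the coarse grid
fine_vals = zoom(coarse_vals, 5, order=1)        # "step 2": interpolate finer

# The interpolated values deviate from direct evaluation on the fine grid:
# that residual is the accuracy lost to the coarse lattice.
exact = toy_displacement(np.linspace(-300.0, 300.0, 305))
max_err = np.abs(fine_vals - exact).max()
```

The residual shrinks as the step-1 lattice is refined, which is exactly the accuracy/runtime trade-off being discussed.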
For my own current experimentation (which specifically involves shoving `gradunwarp` into its own App for our local pipeline in the absence of #355), I came up with an alternative workflow:

1. Generate an identity deformation field (every voxel maps to its own centre) on a high-resolution grid spanning scanner space.
2. Run `gradunwarp` on that field (which is itself of a high spatial resolution, again for accuracy).

This yields a single high-quality deformation field estimate. The process only needs to be run once per scanner model. The resulting field can then very efficiently be resampled, composed with other transformations, or applied to input images.
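Generating the identity deformation field could look like the following numpy sketch (the FoV and spacing defaults, and the voxel-to-mm affine convention, are illustrative assumptions rather than anything prescribed by `gradunwarp`):

```python
import numpy as np

def identity_field(fov=(-300.0, 300.0), spacing=2.0):
    """Build a deformation field in which every voxel stores its own
    scanner-space (mm) coordinate, plus the matching 4x4 affine."""
    n = int(round((fov[1] - fov[0]) / spacing)) + 1  # voxels per axis
    affine = np.diag([spacing, spacing, spacing, 1.0])
    affine[:3, 3] = fov[0]                           # grid corner at fov[0] mm
    ax = fov[0] + spacing * np.arange(n)
    xx, yy, zz = np.meshgrid(ax, ax, ax, indexing="ij")
    field = np.stack([xx, yy, zz], axis=-1).astype(np.float32)
    return field, affine
```

Warping this field through the gradient non-linearity transformation then yields, at every voxel, the corrected scanner-space position: i.e. the reusable deformation field itself.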
I'm currently using a [-300 mm, 300 mm] FoV (the default), 300 sample points (likely overkill; the default is 60), sampled on a 2 mm voxel grid (which matches the density of the sample points; image size ~100 MB; perhaps also overkill). That took me about 2 hours to compute (this could likely be cut by 67% by modifying `gradunwarp`). But once it is computed, applying the correction to an input T1w image takes 15 seconds using MRtrix3 (most of which is spent preloading the warp image). And this can be done for any image from the same scanner model, regardless of whether it is the same participant, the same session, or even the same scanner model in two different installation locations.

My key question here therefore is: which of these implementations is best?
1. The current proposal in "ENH: add gradunwarp base workflow for f/smriprep" (#819), in which `gradunwarp` itself is incorporated into the workflow:
   - Any pipeline needs to provide the gradient coefficients file and indicate the scanner vendor so that `gradunwarp` can be executed.
   - Additional options regarding distortion field estimation may also need to be exposed.
   - The deformation field is computed multiple times: in the worst case once per image; better would be once per scanning session.
2. An alternative workflow that instead takes a pre-estimated deformation field as input:
   - Anyone can pre-estimate the field from the vendor-provided coefficients file using `gradunwarp` or any other software tool.
   - Applying the correction to input images, or composing it with other transformations, is very fast.
   - The storage size of the gradient non-linearity information is, however, much larger.
3. A potential hybrid:
   - The requisite input for the workflow to be applicable could be either a gradient coefficients file or a pre-calculated deformation field.
   - This doesn't solve the interface complexity problem of option 1.
   - But it would be better for expert users, who could pre-calculate those fields and provide them as input to avoid redundant calculations.
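To illustrate why the pre-estimated-field approach makes per-image work so cheap: once the field exists, composing it with, say, a subject-specific rigid or affine transform reduces to plain array arithmetic, with no re-evaluation of the gradient coefficients. A minimal numpy sketch (function name and conventions are illustrative only):

```python
import numpy as np

def compose_with_affine(field, affine):
    """Push every (x, y, z) point of a (..., 3) deformation field through
    a 4x4 affine, producing the composed field."""
    homog = np.concatenate([field, np.ones(field.shape[:-1] + (1,))], axis=-1)
    return (homog @ np.asarray(affine).T)[..., :3]
```

Operations like this are trivially vectorised and multi-threadable, which is part of why the 15-second application figure above is dominated by I/O rather than computation.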
If these alternatives would add too much unwanted complexity or effort, then it's all good; I'm just offering my logic here in case the maintainers think it's worth considering.
Do you have any interest in helping implement the feature?
Yes
Additional information / screenshots
No response