Do we need axis type? #215
In principle this sounds like a good idea (reduce redundancy etc) but just a couple of immediate thoughts... How would you specify the "channel" axis with units? As for "If axes type can be any string, it's not clear what applications are expected to do with it": I'm not sure that getting rid of the type solves that problem. The spec says "The [unit] value SHOULD be one of the following strings, which are valid units according to UDUNITS-2". So while that does allow other units, e.g. "m/s^2", I wouldn't expect an application to know what to do with that.
For light microscopy, you could use SI units for energy or wavelength. But choosing the correct unit here is also a problem for the current version of the spec. I think this reveals that the implicit model of a "channel" axis doesn't fit into the same model used for "space" or "time" axes. The intended semantics of a "channel" axis should probably be stated in the spec, because I don't really understand how it is supposed to work.
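For concreteness, a channel axis annotated with a wavelength unit is expressible today; this snippet is illustrative, not taken from the spec's examples:

```json
{"name": "c", "type": "channel", "unit": "nanometer"}
```

Nothing here says whether the nanometers measure emission wavelength, excitation wavelength, or something else, which is exactly the ambiguity described above.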
If this is true, then we should add text to the spec that explains when it is useful to know the type of an axis. I'm not familiar with image processing tools that require this kind of information, so more detail here would be very helpful.
This is a good point. Because the spec does not require that the …
@d-v-b this exactly contradicts your earlier point that you can infer the type from the units: wavelength is measured in the same units as spatial extent. There are other examples, such as shear (commonly expressed in 1/s) and frequency (Hz, also 1/s).
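For instance, two axes can share a unit while meaning different things; the pairing of a channel axis with a length unit below is illustrative, not drawn from the spec:

```json
"axes": [
  {"name": "z", "type": "space", "unit": "nanometer"},
  {"name": "wavelength", "type": "channel", "unit": "nanometer"}
]
```

An application that only reads the unit cannot tell the spatial axis from the spectral one.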
@jni I am making two points here. The first point is that, specifically for spatial and temporal axes, the axis type and the units seem to be tightly coupled. Can you clarify whether you think the spec should allow an axis whose type and unit are inconsistent?

The second point is that "channel" doesn't fit into the same category as "time" and "space".
The spec tries to squeeze spatiotemporal dimensions and categorical dimensions into the same metadata, and I don't think it works. We would probably benefit from more formally distinguishing the two.
I agree that "channel" doesn't fit into the same category as "time" and "space", but what is the best way to deal with that? I can think of various examples of other axes that are like categories, but how should we distinguish them? Should the images from different categories be stored as separate arrays? (e.g. in a Plate each Well & Field is a separate image). Sometimes the distinction is hard to define. If we do allow "category" axes alongside other space and time axes in a N-D array (I think we should), then we need some way to know they are different:
so you need to know if the axis is categorical or not. I'm not sure if the main issue of this discussion is solving the …
☝️
I agree with the implication that this is an icky restriction.
Indeed. We could have …
Channel is one example. Stage position is another. I'm sure if we canvassed a whole bunch of microscopists (not just optical microscopy either), we could come up with more examples where we want a type to distinguish between axes that have the same units.
I don't know the best way to deal with this, but at a minimum if we go along with the idea that "channel" is actually a categorical axis, then we need a place to write down the categories, and the simplest thing looks approximately like the example @jni gave, where we include an array of values that give meaning to each element along the categorical axis. Your example of the FRAP experiment is useful. I think converting …
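A minimal sketch of "an array of values that give meaning to each element along the categorical axis"; the "values" key is hypothetical, not part of the current spec:

```json
{"name": "ch", "type": "channel", "values": ["GFP", "mCherry", "DAPI"]}
```

Each entry in "values" labels one index along the channel axis, so applications can interpret positions along that axis without relying on units.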
@jni I don't know what this emoji means here 😆 There are only two solutions to that problem that I can think of: either allow …
First, I'm confused about stage position here... wouldn't that be measured in a unit of length? I think I'm not getting it. As for distinguishing between axes, isn't that what the "name" field is for?
It means that whichever decision we take, it is an easy problem to solve.
Yes, but we want to treat it differently from axes of type "space".
You could encode it in the name, but I personally find that clunkier. I'd prefer to have the flexibility to name my axes whatever I want and encode what they are in the type. Do we really want to restrict "space" axes to "xyz"? In the context of transforms, I might want instead to call them "x0", "x1", "x2" and reserve xyz for the output space? 🤷 In short, "name" usually refers to an arbitrary ID, so I think it's nice to not attach additional baggage to it. It's part of the motivation for RFC-3 from my perspective.
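For example, an input space for a transform might plausibly look like this, with "name" as a bare identifier and "type" carrying the semantics (the axis names here are illustrative):

```json
"axes": [
  {"name": "x0", "type": "space", "unit": "micrometer"},
  {"name": "x1", "type": "space", "unit": "micrometer"},
  {"name": "x2", "type": "space", "unit": "micrometer"}
]
```

Nothing about the strings "x0".."x2" identifies these axes as spatial; only the type does.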
Obviously this kind of stuff can come in a later RFC, either tied to transforms (which iirc included displacement/position fields anyway) or separate from them. But you might again want to have a "type" to distinguish the different ordinal/categorical/discrete/positional axes.
In the stage position example, what would be the axis type?
And which decision do you think we should take? I don't think anyone has actually answered this question yet.
"position" or "stage-position" are two examples.
I don't have a strong opinion about it.
Let me see if I understand the stage position example correctly: you are envisioning that someone acquires a 2D image, moves the stage (let's say in 2 dimensions), and then they acquire another image, and so on, and these images are stored concatenated along an axis of an array, resulting in an array with shape …

And are there some actual examples of data acquired like this? How do people use the "type" field for these kinds of datasets today? What scale and translation transformations do they use? I think it would really help to know more about this use case.
I mean, trivially, you can do [sy, sx, y, x] for a rectangular tiled acquisition. Then the step size along sy and sx will be in length units, as well as y and x. Yes, eventually you are going to stitch this somehow, but you probably want to store this array anyway. Especially if your lab works on stitching algorithms. 😂
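A sketch of what the axes for such a [sy, sx, y, x] layout might look like; note that the current spec only allows 2 or 3 "space" axes, so this is the kind of metadata an RFC-3-style change would have to permit:

```json
"axes": [
  {"name": "sy", "type": "space", "unit": "micrometer"},
  {"name": "sx", "type": "space", "unit": "micrometer"},
  {"name": "y", "type": "space", "unit": "micrometer"},
  {"name": "x", "type": "space", "unit": "micrometer"}
]
```

A distinct type such as "stage-position" for sy and sx (as suggested above) would let applications tell the tile grid apart from the pixel grid, even though all four axes share length units.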
There is a relevant figure in the libczi docs, linked from RFC-3.
They don't: the spec forbids it. That's a huge part of the motivation for RFC-3.
None currently. For rectangular tiled acquisitions you could have consistent metadata for that, though. For non-rectangular data you can have pointwise positions (transformations spec).
Thanks, this is super helpful, I think I understand now. As @will-moore noted earlier, I think the HCS layout addresses the same problem, but for HCS datasets instead of CZI montages?

Another relevant use case might be ML training datasets, where you have a bunch of 2D images of dogs (for example) padded to the same 2D dimensions and stacked along the last axis, resulting in axes like ["y", "x", "dog"].

An unwritten assumption in the existing multiscale metadata is that every axis can be meaningfully downsampled. @jni correct me if I'm wrong, but I'm pretty sure it would make no sense to downsample an image along "stage coordinate" axes, because you would be blurring array values across a discontinuity (sketched below). Likewise, it would make no sense to downsample the ML training dataset along the "dog axis", for the same reason.

So I see two options for this kind of thing. Option 1: we keep these stacked arrays, but formally distinguish two structurally different kinds of axes, the continuous space/time kind and the categorical kind. The second type of axis has the following properties:

- it cannot be meaningfully downsampled;
- it has no physical unit;
- its elements are identified by a list of category values rather than by a coordinate transformation.
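As a sketch of the downsampling point: in multiscale metadata for a hypothetical ["y", "x", "dog"] array, every pyramid level would have to keep the categorical axis at full resolution, i.e. its scale factor pinned to 1:

```json
"coordinateTransformations": [
  {"type": "scale", "scale": [2.0, 2.0, 1.0]}
]
```

Averaging neighboring dogs to produce a coarser level is meaningless, which is the discontinuity argument above.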
In fact, this second type of axis is basically functioning as an arbitrary collection. So... what if we just use a spec for collections of images?

Option 2: we use a higher-order "collection" data structure to represent collections. Personally I think this is a much better solution to this problem, and it is actually what OME-NGFF already does with the HCS layout. @jni can you explain why your use case couldn't be represented as a collection of separate images? To me that seems much simpler in terms of metadata -- each image just gets its own vanilla spatial metadata -- and it's also how the spec works today (see HCS).
If we stick with option 1 (the status quo) I still think we can make improvements to the spec. Here are some concrete suggestions that hopefully don't bleed too much into the transformations discussion:

```json
{
  "axes": [
    {"name": "x", "type": "dimension", "info": {"unit": "m"}},
    {"name": "ch", "type": "category", "info": {"description": "filter wheel goes brr", "values": ["488", "561", "594"]}}
  ]
}
```

"type" is now a discriminator for two structurally distinct types of axes, "dimension" and "category". Because the length of the categorical axis is fixed by this group-level metadata, we would have to expressly forbid downsampling a categorical axis (which is probably fine -- does anyone ever downsample their channel axis? this probably breaks viewers anyway).

An alternative version, which starts to look like a bizarro xarray:

```json
{
  "dims": ["x", "ch"],
  "coords": {
    "x": {"type": "dimension", "info": {"unit": "m"}},
    "ch": {"type": "category", "info": {"description": "filter wheel goes brr", "values": ["488", "561", "594"]}}
  }
}
```
I think the opposite of "categorical" is "continuous", not "dimension". In a variety of scenarios, not least of which is acquisition, having a contiguous array of data is advantageous, sometimes for performance, sometimes for programmatic convenience, and sometimes both. I'm certainly not super-excited about having to do array unstacking/stacking just to save/read an RGB image. So I'm pro-option-1.

I'm not in love with your proposed improvements, but only for vague reasons that I can't articulate at 1am; at any rate I think they muddy the waters re RFC-3. I would rather take a smaller step there (stick with "type": "space" for now) and mess with the type keyword (and the overall structure of axes!) in subsequent RFCs.
If you want your data to be contiguous, then a chunked format like zarr might not be the right substrate 😉 Snark aside, I am talking about a file format, not an in-memory representation / API. I think you are conflating the two. Just because two arrays are in separate OME-NGFF zarr groups doesn't mean users (or a nice library) can't concatenate them in memory. So if we just focus on the file format side, I really think it would be helpful here if you could articulate how your proposed use case is fundamentally different from the HCS use case, or the broader conversation about image collections, because it really looks like the same problem to me.
In the latest version of `axes` metadata, we allow specifying both a type for an axis (e.g., "space", "time", "channel"), as well as a unit. It seems to me that the unit of an axis unambiguously determines its type -- if `unit` is set to `meter`, then the axis type must be `space`, and if `unit` is set to `second`, then the axis type must be `time`. So what value does the axis type field add?

On the other hand, as noted in #91, the spec allows setting an axis type that is incompatible with the units: `{"name": "oops", "unit": "second", "type": "channel"}` is valid according to the spec, even though the axis type and the units are inconsistent. Allowing an inconsistency like this is a weakness of the spec that should be remedied. As suggested in #91, we could simply add text stating that the units and the axis type must be consistent, but in that case it's not clear what value the axis type adds, if it is constrained by the units.

Another problem with the axis `type` field is that its domain is not well defined. If I have a 2D image sampled from the phase space of a physical dynamical system, then the axes might be velocity and acceleration, in which case the units are simple to express (m/s and m/s^2), but the axes are neither merely space nor merely time but a mixture of space and time. Is "space + time" a valid axis type? Or should it be "space / time"? Image data from relativistic simulations might also run into this issue. If axis `type` can be any string, it's not clear what applications are expected to do with it.

Based on my experience and the current text of the spec, I cannot see the purpose of the axis `type`, given that we can express more information with axis `unit`. I would appreciate other perspectives on this issue -- either we should remove axis type in the latest version (my preferred option), or we should add language clarifying what axis `type` is for, and why axis `unit` alone cannot achieve that purpose.
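For reference, a sketch of a complete `axes` block in the style of the current (0.4-era) spec, with both `type` and `unit` populated; the particular names and units are made up:

```json
"axes": [
  {"name": "t", "type": "time", "unit": "second"},
  {"name": "c", "type": "channel"},
  {"name": "y", "type": "space", "unit": "micrometer"},
  {"name": "x", "type": "space", "unit": "micrometer"}
]
```

On the view argued above, the `type` entries are redundant for `t`, `y`, and `x` (their units already determine them); only `c`, which has no unit, carries information in its `type`.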