remove axis restrictions #235
base: main
Conversation
This pull request has been mentioned on the Image.sc Forum. There might be relevant details there: https://forum.image.sc/t/ome-ngff-update-postponing-transforms-previously-v0-5/95617/3
Thanks for opening the PR @d-v-b! 🙏 but btw it seems you missed a line further down, line 313 in the current file:
Thanks @jni, I think I got all the references to the 2-5D limit. Please let me know if I missed any. One consequence of this change is that 1D data can now be stored in OME-NGFF. Personally, I think this is great -- 1D data is real data, and people should be able to store it if they have it.
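As a concrete illustration of what lifting the 2-5D limit enables (the field layout below follows the published v0.4 `multiscales` structure; the exact 0.5+ layout may differ, and the values are made up):

```python
# Illustrative only: a minimal 1D "multiscales" metadata fragment that
# would be permitted once the 2-5D axis restriction is lifted.
# Field names follow the v0.4 spec layout; values are hypothetical.
multiscales = {
    "multiscales": [
        {
            "axes": [
                {"name": "x", "type": "space", "unit": "micrometer"}
            ],
            "datasets": [
                {
                    "path": "0",
                    "coordinateTransformations": [
                        {"type": "scale", "scale": [0.1]}
                    ],
                }
            ],
        }
    ]
}
```

Under the current spec this single-axis document is invalid; under this PR it would validate like any other dimensionality.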
This PR should update the schema and examples before being merged. Edit: after a little more thought, I'm hopeful the schema changes needed here will be small; but it would certainly be helpful to add some examples that are currently disallowed but that we want to allow.
Switching this to a draft while I work on getting the schema documents consistent with the spec. Because manually editing JSON schema documents is tedious and error-prone, I am going to generate the schema with some Python scripts containing pydantic models. Personally I think these models should be part of this repo, because pydantic is a good tool for modelling JSON schema (much better than doing it manually), but if this is unconvincing I can remove the Python files from the final PR.
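As a rough sketch of that approach (the model here is illustrative, not the actual classes proposed in this PR), pydantic v2 can emit a JSON schema document directly from a model definition:

```python
import json
from typing import Optional

from pydantic import BaseModel


# Illustrative model only -- not the actual classes in the PR.
class Axis(BaseModel):
    name: str                    # required by the spec
    type: Optional[str] = None   # e.g. "space", "time", "channel"
    unit: Optional[str] = None   # e.g. "micrometer"


# Serialize the model to a JSON schema document instead of writing it by hand;
# only fields without defaults end up in the schema's "required" list.
schema = Axis.model_json_schema()
print(json.dumps(schema, indent=2))
```

The generated document lists `name` as required and `type`/`unit` as nullable, which is the kind of consistency that is easy to get wrong when editing the schema files manually.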
@d-v-b wouldn't a linkML version of your pydantic classes be of more generic use?

```yaml
---
id: https://ngff.openmicroscopy.org/latest/schemas/image.schema
name: ngff-image
title: OpenMicroscopy New Generation File Formats Image Schema
description: |-
  TODO
version: 0.1
license: ??
prefixes:
  linkml: https://w3id.org/linkml/
  biolink: https://w3id.org/biolink/
  schema: http://schema.org/
  ome: https://www.openmicroscopy.org/Schemas/Documentation/Generated/OME-2016-06/ome.html#
  ORCID: https://orcid.org/
  wiki: https://en.wikipedia.org/wiki/
classes:
  Axis:
    attributes:
      name:
        required: true
      type:
      unit:
  ScaleTransform:
    attributes:
      type:
        # equals_string: "scale" (set this as a rule?)
      scale:
        range: float
        array:
          maximum_number_dimensions: 1
          dimensions:
            - minimum_cardinality: 1
  TranslationTransform:
    attributes:
      type:
        # equals_string: "translation" (set this as a rule?)
      translation:
        range: float
        array:
          maximum_number_dimensions: 1
          dimensions:
            - minimum_cardinality: 1
...
```
@glyg perhaps it would, but the goal here is just to generate JSON schema documents, so I'm not sure what generic use we need to accommodate?
@d-v-b — I surely don't have a broad enough view of the whole project, so I might very well be mistaken.
I was thinking about consumers of the schema; for example, a third-party library wanting to parse it.
@glyg in terms of scope, currently this repo contains JSON schema documents that can be fetched from GitHub and used for validation. I don't think there's any expectation that software libraries import code artifacts from this repo. That could of course change, but I don't know of efforts in that direction. I am proposing changes to the spec, and so I need to update the schema documents. Because the current JSON schema documents are manually written, they contain mistakes and are a pain to update after making spec changes. Since this project already uses Python, as a quality-of-life change I am proposing to use pydantic to define data models that serialize to JSON schema, as a way to avoid writing the schema documents by hand. I could be wrong, but I suspect writing data models in Python and serializing those models to JSON schema will be an easier development experience than writing the data models in JSON schema directly. Maybe linkml could also work for this purpose, but I don't know how to use linkml, and I do know how to use pydantic, so for me the choice is simple.
Yes I understand your point, I think a linkml version would bring some value but as you said you are the one doing the work 🙂 Thank you for taking the time to answer me |
Tests are passing, so I think this is ready for review.
latest/models/image.py (outdated)

```python
name: Optional[str] = None
datasets: conlist(Dataset, min_length=1)
axes: UniqueList[Axis]
coordinateTransformations: Optional[tuple[ScaleTransform] | tuple[ScaleTransform, TranslationTransform]] = None
```
Is a pydantic tuple equivalent to a list in JSON?
JSON arrays are equivalent to Python lists, but the spec defines coordinateTransformations as a typed collection with a fixed length, so on the Python side it's a union of tuples.
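A minimal sketch of that union-of-tuples pattern (class and field names here are illustrative, assuming pydantic v2): each tuple serializes to a JSON array with fixed `prefixItems` in the generated schema, so only a scale alone, or a scale followed by a translation, validates.

```python
from typing import Literal, Optional, Union

from pydantic import BaseModel


# Illustrative models -- not the actual classes in this PR.
class ScaleTransform(BaseModel):
    type: Literal["scale"] = "scale"
    scale: list[float]


class TranslationTransform(BaseModel):
    type: Literal["translation"] = "translation"
    translation: list[float]


class Dataset(BaseModel):
    path: str
    # A fixed-length typed collection: either (scale,) or (scale, translation).
    coordinateTransformations: Optional[
        Union[
            tuple[ScaleTransform],
            tuple[ScaleTransform, TranslationTransform],
        ]
    ] = None


# The JSON array is validated against the tuple union; a translation-first
# or three-element array would be rejected.
d = Dataset.model_validate(
    {
        "path": "0",
        "coordinateTransformations": [
            {"type": "scale", "scale": [1.0, 0.5]},
            {"type": "translation", "translation": [0.0, 10.0]},
        ],
    }
)
```

So the answer on the wire is yes (both are JSON arrays), but the tuple types let the schema pin down the length and element order.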
I added a section to provide some guidance for partial implementations, i.e. software that does not implement the full spec; namely, the spec now suggests that partial implementations which normalize input data to their supported subset of the spec notify users when this is occurring.
imho this recommendation is orthogonal to the main purpose of this PR, and it should come in a separate PR. I like it, but it's an extra thing and it's hard enough to merge PRs that are small and self-contained.
Independently of whether one PR or two, I can certainly see the implementor community wanting clarification in/around RFC-3 about the responsibility placed on them with this restriction dropped. |
Because this PR is widening the space of ome-ngff data, it seems reasonable to give at least a suggestion for how implementations should handle this change. We cannot expect that all implementations support N-dimensional data. I think the best we can do is suggest that implementations keep users aware of how their data is being cast / coerced / transformed, when that kind of thing is happening. Thus it's very non-orthogonal to this PR.
It's orthogonal in the sense that partial implementations were a thing before this PR. |
latest/schemas/image.schema (outdated)

```json
    ],
    "title": "NGFF Image",
    "type": "object",
    "$schema": "https://json-schema.org/draft/2020-12/schema",
```
@d-v-b: did this get run through a prettifier? What's the minimal diff?
I generated these schemas with pydantic models (deleted in this commit). What's the particular issue here?
@d-v-b The textual diff is enormous/all-encompassing so it's hard for a reviewer to tell what's changed.
Potential suggestion: make the equivalent pydantic model for the previous spec, generate it (and make sure that it's not semantically different from the main branch), then update it with these changes, then we should be able to see the minimal diff in that commit.
Of course, if there are semantic differences, it might be worth (a) checking whether they are real or bugs, and (b) making a separate PR to update them, if needed.
gotcha, I will see what I can do to clean up the diff
And while we're on the subject, what do we think about renaming the schema files to end with `.schema.json` instead of `.schema`, since they are JSON?
Hmm, we actually weren't on the subject. 😂 My suggestion is to raise that in a different issue/PR.
I already opened one a while ago, but I figured it was worth a shot to gin up some support here :)
diff should be much cleaner now
Thanks, @d-v-b. 👍 for ginned up clean ups post-RFC-3.
do we still want to require that the axis names are unique?
another question: should the
In concrete terms, I would propose that we make
Those last questions, @d-v-b, are RFC-3 related or more ome-zarr-models-py cleanups?
I'm trying to answer the question "how should axes work after we remove the existing restrictions", so I think it's a bit of both :)
I would err on maintaining whatever it is now and updating it in a separate RFC if needed. Unlike RFC-3, making something previously optional now required is backwards-incompatible: a v0.4 file would no longer be valid in v0.5 if it omitted the optional types.
That's fine with me, but why is backwards compatibility a concern here? A v0.4 file will not be valid in 0.5 for different reasons, no?
We do have a lot of OME-Zarr v0.5 data from the ome2024-ngff-challenge that was generated by copying the metadata from v0.4 data without applying any changes. So, a lot of that could be rendered invalid if restrictions were added to the v0.5 spec. But just to clarify... Are we talking about v0.4 -> v0.5 with this spec change? As I understood it, the v0.5 release is soon, and is really just the Zarr v2 -> Zarr v3 update, although I guess removing restrictions has a very small impact and could be included?
Setting the missing keys to
This is the key question -- 0.5 is basically frozen, so I thought these changes would be in 0.6? But I'm not really sure why we are worried about backwards compatibility?
Looking at the diff, there's a line in
as of 0.4 both
That's a good point, I think I should pull in the latest changes to 0.5 and build on that.
+1 for a rebase on top of 0.5 or rather on top of #282 (since there's currently no latest) and keeping it as minimal as possible for the time being, so we can get RFC-3 completed.
Axes can be N-dimensional, of any type, in any order.