
closed loop feedback? #8

Open
tractatus opened this issue Jun 24, 2021 · 4 comments

@tractatus

Hi @tlambert03

I really like the idea of useq-schema!

Over the last couple of days I've been working on putting together a basic C++ library that uses a combination of:

  • MMCore for device I/O handling.
  • OpenCV for basic image processing.
  • TensorRT for inference on NVIDIA GPUs and deep learning accelerators.
  • Dear ImGui, a bloat-free graphical user interface library for C++ with minimal dependencies.
  • Polyscope, a viewer for 3D data such as meshes and point clouds extracted by OpenCV and TensorRT.

I primarily work in in situ sequencing method development (https://www.biorxiv.org/content/10.1101/722819v2).

One of the things we will see this year in in situ sequencing is an explosion of commercial vendors offering "hardware boxes": essentially just a box with automated fluidics and an epifluorescence microscope that performs some iterative smFISH.

My idea in embarking on the above C++ library is that any microscope and hardware combination can be turned into a sequencing machine operating much like the Illumina Local Run Manager (keeping similar file conventions, user interface, etc.).

I really want to use useq-schema to specify acquisition runs and make them reproducible. Before embarking on this and digging deeper into useq-schema, I had one question:

Question: Do you have any current plan for how closed-loop events could be handled within useq-schema? In situ sequencing is full of closed-loop events. Let me give you an example:

Closed-loop histogram equalization: Traditionally, each nucleotide is represented by a single non-overlapping fluorophore; for example, A, T, C, G could be represented by Alexa488, Alexa555, Alexa594, Alexa647. Before each sequencing cycle starts, the microscope first moves to a region not yet imaged and takes some snapshots with a given exposure time and light power. The images from these snapshots are used to compute histograms and extract statistics, which then inform the exposure time, light power, etc. of each channel setting, so that the histograms of the channels are as closely equalized as possible before the true acquisition starts.
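As a rough illustration of the feedback loop described above, here is a minimal sketch in Python. All names here are hypothetical (`snap` stands in for a real camera call, and a real implementation would compute statistics from full image histograms rather than a toy pixel list):

```python
# Hypothetical closed-loop exposure calibration: scale a channel's exposure
# until the test snapshot's median intensity approaches a shared target, so
# all channels end up with comparable histograms.

def calibrate_exposure(snap, exposure_ms, target=0.5, tol=0.02, max_iter=20):
    """Adjust exposure until the median pixel value is within `tol` of `target`."""
    for _ in range(max_iter):
        frame = snap(exposure_ms)              # acquire a test snapshot
        med = sorted(frame)[len(frame) // 2]   # median intensity (0..1)
        if abs(med - target) <= tol:
            break
        exposure_ms *= target / max(med, 1e-6) # proportional correction
    return exposure_ms

# Toy detector: intensity is proportional to exposure, clipped at 1.0.
def fake_snap(exposure_ms):
    return [min(1.0, exposure_ms * 0.01)] * 9

print(round(calibrate_exposure(fake_snap, exposure_ms=10.0), 3))  # converges near 50.0
```

The same shape would apply per channel, with light power as a second control variable.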

There are tons of similar events (checking that fluorophores have been cleaved before starting the next cycle, adjusting laser power across the z-axis, etc.).

Do you have a sense of whether events like this should already be represented at the level of useq-schema?

@tlambert03
Member

Hi @tractatus, thanks for your note.

I agree completely: closed-loop sequence modification is super important, and definitely something I would want to do in my own stuff as well. The short answer is that it's still an open question, and I'd love to hear your thoughts. Here are some of my first ones:

As I'm sure you've seen, useq-schema is kind of playing with two main objects: useq.MDAEvent, which contains the single-timepoint instructions, and useq.MDASequence, a more amorphous object that ultimately boils down to an iterable of useq.MDAEvent objects. My immediate thought is that MDAEvent, the thing the actual acquisition engine (be it mmcore or whatever) will "consume", probably shouldn't have any representation of dynamic properties (i.e. "set the exposure so as to satisfy these criteria"). I can, however, imagine MDASequence acquiring some closed-loop/dynamic instructions... we'd just need to think about how to represent the parameters of something like histogram equalization in a declarative format, such that it could go into a YAML file.

One idea: I would like to see MDASequence turned into a bit more of an abstract iterable-of-iterables. Currently, MDASequence has those four hard-coded TCZP attributes and an axis order. Each of the time, channel, and z attributes, though, is really just an iterable for that specific axis. So MDASequence could/should look a bit more like:

from typing import Dict, Iterable, Sequence

class MDASequence:
    axis_order: Sequence[str]
    axes: Dict[str, Iterable]
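To make the iterable-of-iterables idea concrete, here is a toy sketch (not the real useq-schema implementation) in which the event stream is simply the product of the named axes, taken in axis_order:

```python
# Toy illustration of "MDASequence as named axes": each axis is a plain
# iterable, and events are the cartesian product in axis_order.
from itertools import product

axis_order = ["t", "c", "z"]
axes = {
    "t": [0.0, 1.0],         # time-deltas (seconds)
    "c": ["DAPI", "FITC"],   # channel names
    "z": [-1.0, 0.0, 1.0],   # z offsets (um)
}

events = [dict(zip(axis_order, combo))
          for combo in product(*(axes[a] for a in axis_order))]

print(len(events))   # 2 * 2 * 3 = 12 events
print(events[0])     # {'t': 0.0, 'c': 'DAPI', 'z': -1.0}
```

A closed-loop axis would then just be an iterable whose yielded values depend on runtime state, rather than a fixed list.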

The closed-loop logic would then need to be captured in the individual axis Iterable. For instance, take the time axis, which is currently captured in the MDASequence.time_plan attribute. TimePlans are all ultimately just iterables of floats, where each float represents the time the instrument should wait before starting the next MDAEvent. If one wanted to create a dynamic time plan (for instance, changing the inter-frame interval depending on some event in the experiment, like mitosis or something), then we'd need to:

A) have some additional stateful logic in TimePlan.__iter__ that is capable of changing the yielded time-delta based on input
B) actually provide that input (in the form of the previous frame or something) to the TimePlan
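A minimal sketch of what part A could look like, with an illustrative "event detected" flag standing in for the engine-side feedback of part B (all names here are hypothetical, not useq-schema API):

```python
# Hypothetical dynamic time plan: a stateful iterable whose yielded
# time-delta depends on feedback pushed in by the acquisition engine.

class DynamicTimePlan:
    def __init__(self, base_interval: float, fast_interval: float):
        self.base = base_interval
        self.fast = fast_interval
        self.event_detected = False  # updated by the engine after each frame

    def feedback(self, event_detected: bool) -> None:
        """Part B: the engine reports what it saw in the last frame."""
        self.event_detected = event_detected

    def __iter__(self):
        # Part A: yielded interval changes based on the current state.
        while True:
            yield self.fast if self.event_detected else self.base

plan = DynamicTimePlan(base_interval=60.0, fast_interval=5.0)
it = iter(plan)
print(next(it))       # 60.0 -- nothing detected yet
plan.feedback(True)   # engine saw mitosis in the last frame
print(next(it))       # 5.0  -- interval tightened
```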

Part B there is more up to the actual acquisition engine. pycro-manager, for instance, has the concept of acquisition hooks that perform the actual logic of updating the event sequence. So I guess the challenge for useq-schema would be to come up with a declarative nomenclature that captures the "intention" of something like a hook, while the actual implementation is left to the engine consuming the MDASequence iterable...
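The hook idea can be sketched generically like this (the names are illustrative, not pycro-manager's actual API): each event passes through user hooks that may modify or veto it before the engine executes it.

```python
# Generic acquisition-hook sketch: hooks see each event and may return a
# modified event, or None to veto it entirely.

def run(events, hooks=()):
    executed = []
    for event in events:
        for hook in hooks:
            event = hook(event)
            if event is None:        # hook vetoed this event
                break
        if event is not None:
            executed.append(event)   # stand-in for driving the hardware
    return executed

# Example hooks: clamp exposures that would saturate, or skip them outright.
def clamp_exposure(event):
    return dict(event, exposure_ms=min(event["exposure_ms"], 100.0))

def skip_saturated(event):
    return None if event["exposure_ms"] > 200.0 else event

events = [{"exposure_ms": 250.0}, {"exposure_ms": 50.0}]
print(run(events, hooks=[clamp_exposure]))   # first event clamped to 100.0
print(run(events, hooks=[skip_saturated]))   # first event vetoed
```

The declarative challenge is then naming the *intent* of such a hook in the schema, while leaving its body to the engine.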

Anyway, that's obviously not a complete answer, but I definitely agree that it's important... and it would be nifty if we could capture the concept of dynamic closed-loop logic in a declarative schema without being totally dependent on the actual engine performing the acquisition. I very much welcome your thoughts!

btw... I worked with Je a little bit when he was here at HMS, say hi! :)

@tlambert03
Member

tlambert03 commented Jun 24, 2021

one more side note...
for me, this is all very much still work in progress... but if you'd like to see how I've been using useq-schema along the lines of what you're also doing:

  1. pymmcore-remote is my wrapper around pymmcore; it adds a couple of conveniences and has a run_mda method that actually consumes the useq.MDASequence object and drives the hardware.
  2. napari-micromanager is a GUI for pymmcore-remote that builds the MDASequence from user input and passes it to pymmcore-remote to run. It then receives the frames as they are generated and displays them in napari.
  3. image processing would then happen on the napari layer object...
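Schematically, a run_mda-style consumer is just a loop over the event stream that configures hardware and snaps a frame per event. Here is a toy sketch with placeholder callables standing in for real device control (this is not pymmcore-remote's actual code):

```python
# Schematic event consumer: for each event, set hardware state, then acquire.
# The `engine` dict of callables is a stand-in for real device control.

def run_mda(events, engine):
    frames = []
    for event in events:
        engine["set_exposure"](event["exposure_ms"])  # configure hardware
        frames.append(engine["snap"]())               # acquire a frame
    return frames

log = []
engine = {
    "set_exposure": lambda ms: log.append(("exposure", ms)),
    "snap": lambda: "frame",
}
frames = run_mda([{"exposure_ms": 10.0}, {"exposure_ms": 20.0}], engine)
print(len(frames), log[0])   # 2 ('exposure', 10.0)
```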

@tractatus
Author

Wow, yeah, it sounds like we are very much on the same page then. I will go check out your run_mda method, translate it to C++, and start from there; by the time the process matures, the full closed-loop aspects will probably have been planned out.

 my immediate thoughts are that MDAEvent – the thing the actual acquisition engine (be it mmcore, or whatever) will "consume" – probably shouldn't have any representation of dynamic properties (i.e. "set the exposure so as to satisfy these criteria").

I agree with this. The specifics of the dynamic properties should probably not be contained in the schema representation; the schema should instead just be a human-readable way of quickly identifying the loops that exist, with further information specified in the class (or even compiled program) that acts as the controller.

I'm aware of napari-micromanager, and before embarking on this I quickly went through pymmcore, pycro-manager, etc. to see what was available. But in my case I'm having increasing issues with deploying inference of trained networks in a performant way; the solution really is TensorRT, and then I'm back at C++. So I figured I'd just make a minimal viable solution in C++ and then start thinking about hooks and other things for napari.

@tlambert03
Member

But in my case I'm having increasing issues with deploying inference of trained networks in a performant way; the solution really is TensorRT, and then I'm back at C++. So I figured I'd just make a minimal viable solution in C++ and then start thinking about hooks and other things for napari.

awesome. makes sense.

I will directly go in and checkout your run_mda method and simply translate it to C++ and start from there

I would also point you to the two other existing micromanager "engines" that I know about:

  • the clojure engine built into micro-manager. The main "sequence-consuming" function is run-acquisition ... and the main "event-consuming" function is make-event-fns
  • The java-based acquisition engine AcqEngJ that was written (as I understand it) specifically to consume the event dicts produced by pycro-manager, where submitEventIterator is the thing that receives the sequence (the event iterator) and executeAcquisitionEvent is the thing that drives the hardware for each event object (and, it also calls the hooks that would support this sort of closed loop use case)

I'll also cc @nicost ... who I believe has also toyed with the idea of writing a core acquisition engine in C/C++ (which would be fantastic!). honestly, if you write an mmcore acquisition engine in C++, I suspect others might want to use it?
