
Lantz server : centralize instrument control for seamless multiprocess integration #15

Open
MatthieuDartiailh opened this issue May 6, 2015 · 6 comments

@MatthieuDartiailh

The idea was first mentioned in LabPy/lantz_drivers#5 by @alexforencich and resonates with private discussions between @hgrecco and myself.
The idea is that it may be desirable to centralize all communication with an instrument through a kind of server, which would allow multiple processes to seamlessly use the same instrument without any risk of conflicts or cache corruption.

Some rough ideas about such a design:

  • when a connection to an instrument is requested, a keyword argument could ask to connect through the central server if it is running. If it is, a proxy to the instrument is returned instead of the driver.
  • the proxy should be as smart as possible to limit the number of round trips to the server. Synchronizing the cache would be best, and copying the driver's features over to the proxy would work well in that case.
  • whatever mechanism we end up with to plug the GUI onto the instrument, we should be able to use the same one on the proxy.
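The first two bullet points could be sketched roughly as follows. None of these names (`InstrumentProxy`, `open_instrument`, `get_feature`) exist in Lantz; they are hypothetical stand-ins for the keyword-based connection and the caching proxy described above.

```python
class InstrumentProxy:
    """Forward feature reads to a server connection, caching locally.

    The cache limits round trips: a feature value already fetched is
    served locally instead of going back to the server.
    """

    def __init__(self, connection, driver_name):
        self._connection = connection
        self._driver_name = driver_name
        self._cache = {}

    def __getattr__(self, name):
        # Only called when normal attribute lookup fails, i.e. for
        # instrument features rather than the proxy's own attributes.
        if name not in self._cache:
            self._cache[name] = self._connection.get_feature(
                self._driver_name, name)
        return self._cache[name]


def open_instrument(driver_cls, *args, server=None, **kwargs):
    """Return a proxy when a server connection is given, otherwise
    instantiate the real driver as usual (hypothetical keyword)."""
    if server is not None:
        return InstrumentProxy(server, driver_cls.__name__)
    return driver_cls(*args, **kwargs)
```

With no `server` argument this behaves exactly like a normal driver instantiation, which is what makes the keyword-based opt-in seamless for existing scripts.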
@crazyfermions

In principle, this is a good idea: it gives a lot of flexibility. For example, the server can run on a different machine than the measurement client, or, as you said, different processes can access the instruments without any problems (e.g. if you want to constantly monitor the temperature of a setup with one program while using a PID loop in another to control it). However, I cannot foresee how much work this would be and how much it would increase code complexity for a feature that is nice but not relevant to most users.

So in conclusion: Very nice to have, but is the additional code complexity/work worth it?

@alexforencich
Member

Well, that's the main reason I brought it up. I'm not sure how much complexity would be required, but if we can get our heads wrapped around how the implementation might work, there might be a relatively straightforward solution we can implement from the beginning, instead of tacking on something later that isn't integrated.

@hgrecco

hgrecco commented May 13, 2015

I think that if we provide good introspection capabilities, this could easily be done afterwards. I think it complicates the design of the core, as you would need to define a protocol and keep it in sync between server and clients (which might live in different packages). Putting this in a different package means the protocols can be specific to the actual needs.

@alexforencich
Member

The only issue I have with putting it in a separate package is keeping it seamless. I want to be able to write scripts that use the arbitration layer when it is present, without having to detect whether it is loaded or which wrapper script is installed. I think this is the biggest argument for integrating it: it loses a lot of its utility if it's grafted on and supplied as a separate package with wrappers and whatnot.

@MatthieuDartiailh
Author

I don't see any way around the fact that at some point you need to start the server and know which address and port it is bound to. But once you ask to use a specific server, we can easily make a light change to the core metaclasses so that they return a proxy rather than a real driver. The proxy can be built at run time by inspecting the driver class, and then there should never be an issue with which wrappers exist. Having it as an external package also means you can choose not to use it if it does not fit your needs, without polluting the core too much.
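Building the proxy at run time by inspecting the driver class might look like the sketch below. `make_proxy_class`, `get_feature`, and the connection object are all hypothetical names; the real core metaclasses would presumably do something richer (write access, caching, signals), but the mechanism is the same: copy the driver's features onto a dynamically built proxy class.

```python
import inspect


def make_proxy_class(driver_cls, connection):
    """Build a proxy class at run time by inspecting a driver class.

    Hypothetical sketch: every property found on the driver class
    becomes a read-only property on the proxy that fetches its value
    through the (hypothetical) server connection.
    """
    namespace = {}
    for name, member in inspect.getmembers(driver_cls):
        if isinstance(member, property):
            def getter(self, _name=name):
                # Default argument freezes the feature name per property.
                return connection.get_feature(driver_cls.__name__, _name)
            namespace[name] = property(getter)
    return type(driver_cls.__name__ + "Proxy", (object,), namespace)
```

Because the proxy class is generated from whatever the driver class exposes, no hand-written wrapper per driver is needed, which is the point made above.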

Furthermore, I would really like it if we could make more progress on the core and the drivers right now. I need Lantz at work, so a little boost in the review process would be welcome.

If I can find some time, I will try to start a wiki page in this repo with possible ways to get around the server issue, but I can make no promises on this one.

@hgrecco

hgrecco commented Jun 1, 2015

I agree with @MatthieuDartiailh on this. Let's try to move forward with the basics and get back to this later.
