
LECO communication protocol #80

Open
BenediktBurger opened this issue Jan 30, 2024 · 7 comments
BenediktBurger commented Jan 30, 2024

Hello there,

we at pymeasure develop a communication protocol called LECO (Laboratory Experiment COntrol protocol) for exchanging messages between small programs, in order to split a measurement program into small parts. The corresponding Python implementation is pyLECO.

Reading the paper about yaq, I figured that LECO and yaq could complement each other:

  • yaq offers standardised daemons and a way to address them
  • LECO offers communication without assigning (and remembering) port numbers for all the software parts; instead you can use human-readable names.

Here are the core concepts of LECO:

  • Communication uses ZMQ (based on TCP sockets).
  • There are one or more Coordinators through which all messages are sent.
  • Each Component of the communication network has its own name, prefixed by the name of the Coordinator it is connected to.
  • The LECO network can transport any bytes message from any Component to any other Component.
  • For signing in etc. we use JSON-RPC, but everything else could be encoded differently, for example according to Apache Avro RPC. The messages carry an indicator flag for the encoding, exactly for such situations.

Therefore, it should be quite simple to use LECO instead of TCP sockets as a backend in yaq.
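The concepts above can be sketched as a minimal message structure. This is a simplification for illustration only; the field names and type-flag values are assumptions, not pyLECO's actual wire format:

```python
from dataclasses import dataclass, field

# hypothetical encoding flags; the real LECO flag values may differ
JSON_MESSAGE = 1
AVRO_MESSAGE = 7


@dataclass
class LECOMessage:
    """Illustrative stand-in for a LECO message: named addressing plus an encoding flag."""
    receiver: str                      # e.g. "coordinator1.oscilloscope"
    sender: str
    conversation_id: bytes = b""       # correlates a request with its response
    message_type: int = JSON_MESSAGE   # indicates the payload encoding (JSON-RPC, Avro, ...)
    payload: list = field(default_factory=list)  # opaque bytes frames


msg = LECOMessage(
    receiver="coord1.oscilloscope",
    sender="coord1.gui",
    payload=[b'{"jsonrpc": "2.0", "method": "pong"}'],
)
```

The encoding flag is what makes the "any bytes message" claim work: intermediaries never need to parse the payload, only the envelope.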


untzag commented Feb 9, 2024

Sounds like fun @BenediktBurger, it would be great to build some compatibility tooling. Interoperability in the Python-for-instrumentation space is a passion of mine. I really appreciate the work you're doing, and I'm honored that you've thought of including our project. I see that we're focusing on the same "small to medium-sized use cases". I am personally very happy with Bluesky for experimental orchestration. It would be nice to use community-maintained tooling around logging and GUI interfaces; I see you're thinking along those lines too.

As a first step, what if we create a special coordinator that "bridges" from yaq to LECO?

Another idea is to create a special coordinator that uses HAPPI [1] to manage Bluesky Protocol [2] objects under the hood. This would give us yaq for free (through yaqc-bluesky) but also might provide a bridge to other protocols via the Ophyd packages.

[1] https://github.com/pcdshub/happi
[2] https://blueskyproject.io/bluesky/hardware.html

@BenediktBurger (Author)

> As a first step, what if we create a special coordinator that "bridges" from yaq to LECO?

> Another idea is to create a special coordinator that uses HAPPI [1] to manage Bluesky Protocol [2] objects under the hood. This would give us yaq for free (through yaqc-bluesky) but also might provide a bridge to other protocols via the Ophyd packages.

LECO is not an instrument-control protocol; it is a protocol for exchanging messages between different programs. It is a tool to connect different parts of an experimental setup. For example, it allows remote procedure calls.

The main feature of LECO is addressing other Components (be it on the same computer or another computer) by name and through a single interface, while the sockets are managed in the background.
It also comes with everything set up for remote procedure calls, but yaq does not need that part, if I understand correctly.

I understand yaq sends byte messages between the different parts via TCP sockets.

So instead of doing:

socket1.connect("ip:10001")
socket1.send(byte_object1)
socket2.connect("ip:10002")
socket2.send(byte_object2)
...

you can do

send_yaq(receiver="power_source", command=byte_object1)
send_yaq(receiver="oscilloscope", command=byte_object2)

So you do not have to remember which port number belongs to which name, and you do not have to create the different sockets manually.

My idea was that you might include LECO as an alternative to TCP sockets, or maybe replace them altogether. That way you don't have to worry about the connection management.
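What that bookkeeping amounts to can be sketched with a toy name-to-address registry. The registry and helper below are hypothetical illustrations of the idea, not pyLECO API (in LECO, the Coordinators maintain this kind of directory for you):

```python
import socket

# hypothetical directory mapping component names to (host, port)
DIRECTORY = {
    "power_source": ("127.0.0.1", 10001),
    "oscilloscope": ("127.0.0.1", 10002),
}


def send_by_name(receiver: str, command: bytes) -> None:
    """Look up the receiver's address and send the raw bytes,
    creating and closing the socket behind the scenes."""
    host, port = DIRECTORY[receiver]
    with socket.create_connection((host, port)) as sock:
        sock.sendall(command)
```

The caller only ever deals in names; address resolution and socket lifetime are handled in one place.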


untzag commented Feb 9, 2024

Okay, so your idea is to have the yaq daemons participating directly as LECO components, which might imply that avro-rpc needs to sit alongside json-rpc/openrpc as a message format for LECO?

My concern with that idea is that it's not easy to add the LECO-specific messages (SIGNIN, SIGNOUT, etc) to all the yaq daemons. A specialized coordinator that interfaces correctly with the broader LECO network but speaks pure yaq underneath would allow us to avoid changing the yaq specification to accommodate LECO. Such a coordinator could translate between json-rpc/openrpc and avro-rpc so that the broader LECO network wouldn't need to change at all: other coordinators would simply see a remote coordinator with a local directory, and they wouldn't need to know about yaq at all. Do you think that's possible?

@BenediktBurger (Author)

> Okay, so your idea is to have the yaq daemons participating directly as LECO components, which might imply that avro-rpc needs to sit alongside json-rpc/openrpc as a message format for LECO?

Yes, but that is not a problem.

> My concern with that idea is that it's not easy to add the LECO-specific messages (SIGNIN, SIGNOUT, etc) to all the yaq daemons.

If you wanted to implement LECO from scratch, you would have to take care of that. If you use the Python package PyLECO, you can just use its utilities as a foundation.

> A specialized coordinator that interfaces correctly with the broader LECO network but speaks pure yaq underneath would allow us to avoid changing the yaq specification to accommodate LECO. Such a coordinator could translate between json-rpc/openrpc and avro-rpc so that the broader LECO network wouldn't need to change at all: other coordinators would simply see a remote coordinator with a local directory, and they wouldn't need to know about yaq at all. Do you think that's possible?

A Coordinator just transmits messages from one Component to another and does not care about the content, unless the Coordinator itself is the recipient of the message. That allows a Coordinator to transfer a message from one yaq daemon to another without understanding Avro at all.

We could develop a Coordinator which translates a message from avro-rpc to json-rpc according to the recipient's capabilities, but I do not think that is necessary.

We could make a Coordinator which understands both avro-rpc and json-rpc, such that the yaq daemons can sign in via avro-rpc (with LECO-defined methods) to the Coordinator.
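The forwarding rule described here can be sketched as a toy dispatch function. This illustrates the routing idea only; it is not pyLECO's actual Coordinator implementation, and the names are made up:

```python
def handle_locally(message: dict) -> bytes:
    # placeholder for Coordinator-directed commands such as sign_in
    return b"acknowledged"


def route(message: dict, queues: dict, coordinator_name: str = "COORDINATOR"):
    """Forward the opaque payload to the receiver's queue; the content is
    only parsed when the Coordinator itself is the recipient."""
    if message["receiver"] == coordinator_name:
        return handle_locally(message)  # e.g. a JSON-RPC sign_in request
    queues[message["receiver"]].append(message["payload"])  # bytes pass through untouched
    return None
```

Because the payload is only ever appended, never decoded, the Coordinator can relay Avro-encoded yaq traffic without any Avro support of its own.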

@BenediktBurger (Author)

Here's a sketch of how an Adapter (bridging from LECO to a daemon) could look:

from __future__ import annotations  # allow "Logger | None" annotations on older Python

from logging import Logger
from zmq import Context
from pyleco.core import COORDINATOR_PORT
from pyleco.core.message import Message
from pyleco.utils.message_handler import MessageHandler


AVRO_MESSAGE = 7  # to be defined, for example


class YAQDaemonAdapter(MessageHandler):
    def __init__(
        self,
        name: str,  # how to be reached from the LECO network
        daemon_port: int,
        host: str = "localhost",
        port: int = COORDINATOR_PORT,
        protocol: str = "tcp",
        log: Logger | None = None,
        context: Context | None = None,
        **kwargs,
    ) -> None:
        super().__init__(name, host, port, protocol, log, context, **kwargs)
        self.daemon_port = daemon_port

    def handle_message(self, message: Message) -> None:
        if message.header_elements.message_type == AVRO_MESSAGE:
            self.handle_avro_message(message)
        else:
            return super().handle_message(message)

    def handle_avro_message(self, message: Message):
        self.send_message_to_daemon(message.payload[0])  # first payload frame
        result = self.read_response_from_daemon()
        response = Message(
            receiver=message.sender,
            conversation_id=message.conversation_id,
            message_type=AVRO_MESSAGE,
        )
        response.payload = [result]
        self.send_message(response)

    def send_message_to_daemon(self, command: bytes) -> None: ...

    def read_response_from_daemon(self) -> bytes: ...

  • send_message_to_daemon should connect to the daemon and send it the bytes object, as a client would do.
  • read_response_from_daemon should read the response, as a client would do.
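The two stubs could be filled in with a plain TCP client along these lines. The framing is an assumption (send one request, then read until the peer closes); yaq's actual wire protocol may frame messages differently:

```python
import socket


def send_and_receive(host: str, port: int, command: bytes, bufsize: int = 4096) -> bytes:
    """Connect, send the request bytes as a client would, and collect one response.
    End-of-request is signalled by shutting down the write side (an assumption)."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(command)
        sock.shutdown(socket.SHUT_WR)  # tell the server we are done sending
        chunks = []
        while chunk := sock.recv(bufsize):  # read until the server closes
            chunks.append(chunk)
        return b"".join(chunks)
```

Inside the adapter, send_message_to_daemon and read_response_from_daemon could then delegate to a helper like this, using "localhost" and self.daemon_port.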

You can start this Adapter in another thread of the daemon, give it the daemon's port (as they are in the same process, there is no need for a different IP, I guess), and call:

adapter = YAQDaemonAdapter("my_name", daemon_port=12345)
adapter.start_listen()

@BenediktBurger (Author)

If a program (e.g. a yaq client) wants to send an Avro message, you can use the following utility:

from pyleco.core.message import Message
from pyleco.directors.director import Director


AVRO_MESSAGE = 7


class YAQDirector(Director):
    def send_avro_message(self, receiver: str | bytes, command: bytes) -> None:
        message = Message(receiver=receiver, message_type=AVRO_MESSAGE, data=command)
        self.communicator.send_message(message)

    def ask_avro_message(self, receiver: str | bytes, command: bytes) -> bytes:
        message = Message(receiver=receiver, message_type=AVRO_MESSAGE, data=command)
        response = self.communicator.ask_message(message)
        return response.payload[0]

@BenediktBurger (Author)

Another idea: the YAQDaemonAdapter mentioned above could also interpret JSON-RPC requests, send an appropriate Avro request to the daemon, and return the result via JSON-RPC.
Given the Avro schema file of the daemon, it could discover the methods automatically, I guess.
Thanks for the meeting @untzag, @ksunden.
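Method discovery from the schema could look roughly like this: an Avro protocol definition declares its RPC calls under a top-level "messages" object, so listing them is a matter of reading that key. The example protocol below is made up for illustration:

```python
import json


def discover_methods(protocol_json: str) -> list:
    """List the RPC method names declared in an Avro protocol definition."""
    protocol = json.loads(protocol_json)
    return sorted(protocol.get("messages", {}))


# a fabricated Avro protocol document, standing in for a daemon's schema file
example = json.dumps({
    "protocol": "fake_daemon",
    "messages": {"get_position": {}, "set_position": {}},
})
```

With the method names (and their declared request/response types) in hand, the adapter could register matching JSON-RPC handlers automatically.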
