
Where to run the modules (Robot or PC) and how they communicate? #3

Open
igor-lirussi opened this issue Jun 2, 2023 · 3 comments

@igor-lirussi
Owner

igor-lirussi commented Jun 2, 2023

Hey Igor, thank you for the interesting repo... I am trying to get it to work, but I cannot figure out how the speech recognition (which runs fine on my Pepper) communicates with the conversational engine on my computer.
In the notebook you provided, all the code is supposed to run on my laptop except for the "robot part", right?
I would be very glad for your help :)

Originally posted by @VasilisAvgoustakis in #2 (comment)

@igor-lirussi
Owner Author

I suggest starting with the repo running everything on the computer, commanding the robot only to listen and to speak.
(Just acquire the robot IP, run the speech recognition program, and also run the notebook.)
The modules are separated to grant the flexibility to run them on the robot or on the computer.
But running them on the robot requires minor changes, and:

  • performance will be slower than when running on your PC
  • there is no difference in microphone/speakers: the robot's are always used

Examples:

If you run the speech synthesis code on the robot, you don't need to open a session like in the first part of SPEECH SYNTHESIS in the notebook; you can copy the Python code after that and run it on the robot's head computer.
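
As a hedged illustration of this first example (the helper name, IP, and on-robot usage are placeholders, not the repo's exact code): on the robot's head computer the text-to-speech proxy can simply target the local NAOqi broker, so no remote session is needed.

```python
def tts_endpoint(on_robot, robot_ip="192.168.1.10", naoqi_port=9559):
    """Hypothetical helper: pick the address the ALTextToSpeech proxy
    should use. robot_ip is a placeholder; on the robot itself,
    localhost is enough because the NAOqi broker runs locally."""
    return ("127.0.0.1", naoqi_port) if on_robot else (robot_ip, naoqi_port)

# On the robot's head computer (Python 2 NAOqi SDK, preinstalled there)
# you would then do something like:
#   from naoqi import ALProxy
#   ip, port = tts_endpoint(on_robot=True)
#   ALProxy("ALTextToSpeech", ip, port).say("Hello from on board")
```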

If you run the speech recognition code on the robot, you don't need the ProxyService to communicate, but you need to modify the SpeechReceiverModule.

If you want to run the conversational engine on the robot, it should be sufficient to move the Ab.jar file onto it and also run the speech recognition internally; or, if the speech recognition stays on the PC, change the way the SpeechReceiverModule gets the answer from the conversational engine.

As you can see, there are many combinations and I didn't cover all of them, but with minor adaptations they can work.

NOTE: In the notebook, the "robot part" is only the code that requires the robot; that's why it's named that way. The modules themselves can be run anywhere.

@VasilisAvgoustakis

Thank you so much for the clarification, I am now much further along with the process... but after running all the "robot part" code I still get this error:
INF: ReceiverModule: started!
INF: ReceiverModule.__del__: cleaning everything
INF: ReceiverModule: stopping...
Exception RuntimeError: '\tproxy::proxy\n\tNULL values were passed as arguments' in <bound method DialogueSpeechReceiverModule.__del__ of <__main__.DialogueSpeechReceiverModule; proxy of <Swig Object of type 'AL::module *' at 0x7fab14774b70> >> ignored

I cannot figure out what goes wrong... does that mean anything to you?

Thanks a lot for taking the time :)

@igor-lirussi
Owner Author

Did you also run the speech recognition in a separate shell with Python 2:
conda activate python2
python module_speechrecognition.py --pip (your robot IP)
with your robot connected to the same network as your PC?
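
Since a common failure mode here is the robot and the PC not actually sharing a network, a quick reachability check can rule that out before debugging the modules. The helper below is hypothetical (not part of the repo); it only tests that the NAOqi broker port answers a TCP connection.

```python
import socket

def naoqi_reachable(ip, port=9559, timeout=2.0):
    """Hypothetical helper: return True if a TCP connection to the
    NAOqi broker port (9559 by default) succeeds within the timeout."""
    try:
        sock = socket.create_connection((ip, port), timeout=timeout)
        sock.close()
        return True
    except OSError:
        return False

# Example: naoqi_reachable("192.168.1.10") should be True when the
# robot is on and on the same network; False otherwise.
```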
