get face point #14
This is a great project. I wonder, can I only get AUs from OpenFace over ZeroMQ, or can I also get the 68 face points from it?
Thanks for that ^-^ If you want access to the face points, you have two options: modify the GUI C++ code, or (for cross-platform support) attach a ZeroMQ component to the core C++ code (see the sketch below).
May I ask why you prefer the 68 face points over AUs?
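For reference, a minimal sketch of what such a ZeroMQ subscriber could look like on the Python side. The endpoint address, the topic filter, and the `landmarks` key in the JSON payload are placeholders for illustration, not FACSvatar's actual message format:

```python
# Minimal sketch of a ZeroMQ SUB client that could receive face points,
# assuming the C++ side publishes [topic, JSON] multipart frames.
# The address, topic filter, and "landmarks" key are hypothetical.
import json
import zmq

context = zmq.Context()
socket = context.socket(zmq.SUB)
socket.connect("tcp://127.0.0.1:5570")       # placeholder endpoint
socket.setsockopt_string(zmq.SUBSCRIBE, "")  # accept all topics

while True:
    topic, payload = socket.recv_multipart()   # assumes 2-part frames
    data = json.loads(payload.decode("utf-8"))
    landmarks = data.get("landmarks", [])      # hypothetical key: 68 (x, y) pairs
    print(topic.decode("utf-8"), len(landmarks), "points received")
```

PUB/SUB keeps the tracker decoupled from whatever consumes the points, which is what makes this route cross-platform compared to editing the GUI directly.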
Thank you for the help. I tried your methods and used my own cartoon model. Basically, the result is good, but I need more expression detail, such as jaw-forward, and I find this method cannot drive some mouth animations. I want to use ARKit's 48 blendshapes, and many of them cannot be derived from AUs alone, so I want to get more information from the original face points.
The limitation is not FACS itself. OpenFace only supports a subset of 18 AUs; in total there are around 40, depending on whether you count head rotations and such. More AUs can be seen here: https://imotions.com/blog/facial-action-coding-system/. FACSvatar has only 17 implemented, because the only proper open-source toolkit available at the time was OpenFace. The goal for FACSvatar is to support all AUs, however. FACSvatar relies on AUs because they are software-independent: for example, when you improve the AU tracker, no other code (in theory) would need adjustment. If you plan to convert those 48 dots to AU values, I would be glad to help :)
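To make the conversion idea concrete, here is a toy sketch of estimating a single AU-like intensity from raw 68-point landmarks (iBUG/dlib indexing). The scaling factor is an uncalibrated placeholder; a real mapping would need per-person normalization:

```python
# Toy sketch: approximate AU26 (jaw drop) from 68-point landmarks.
# Indices follow the common iBUG/dlib 68-point layout; the 0-5 intensity
# scaling is an uncalibrated placeholder, not a validated FACS mapping.
import numpy as np

def estimate_au26(landmarks: np.ndarray) -> float:
    """landmarks: (68, 2) array of (x, y) face points."""
    mouth_open = np.linalg.norm(landmarks[66] - landmarks[62])  # inner-lip gap
    face_size = np.linalg.norm(landmarks[8] - landmarks[27])    # chin to nose bridge
    ratio = mouth_open / face_size
    return float(np.clip(10.0 * ratio, 0.0, 5.0))  # rough AU intensity in [0, 5]
```

The same distance-and-normalize pattern extends to other AUs (brow raise from eyebrow-to-eye distance, lip corner pull from mouth width), which is roughly what a blendshape-or-landmark-to-AU conversion would involve.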
Thanks. Yes, you are right, complete AUs could get good results, but the problem is, as you said, that there is no complete AU detection system right now. And ARKit has 48 blendshapes, not 48 points, so it can get a wonderful result (you can find an iPhone and Unreal example at https://www.youtube.com/watch?v=MfnNJaVCLM8 ; this is really a perfect result!). I believe that using complete AUs could get similar results, but not at the current time. I have thought of many other methods, such as curve matching and the MPEG-4 facial animation system, and all of them need the original points.
@zhaishengfu Did you find a solution?