How to add more AUs? #8
I found these today: 48 AUs https://sharecg.com/v/92621/gallery/21/DAZ-Studio/FaceShifter-For-Genesis-8-Female They are for DAZ G8, not MBLab, but they might help you. Still not 64 AUs. And MBLab blendshapes will be different from DAZ blendshapes/morphs, but I guess you already know this? https://www.cs.cmu.edu/~face/facs.htm
Hey @wdear MBLAB models come with their own Blend Shapes. Since these Blend Shapes are not compatible with FACS, I created the .json files myself by going over all available Blend Shapes and comparing them to other resources such as the FACS manual. I only did the AUs matching OpenFace's output, except for AU07. Personally, I noticed that using AU07 would interfere with other AUs due to the limited nature of the Blend Shapes available to MBLAB models.

You can either add more .json files to "au_json" or, if you're not satisfied with the current conversion, create a new folder with .json files, e.g. "au_advanced". A good start is: https://imotions.com/blog/facial-action-coding-system/ Please make a pull request with the new .json files if you do decide to make them :)
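To make that concrete, here is a minimal sketch of what adding such a mapping file could look like. The blendshape names, weights, and JSON schema below are assumptions for illustration only; copy the actual key names and value ranges from one of the existing files in au_json (e.g. AU01.json) before creating new mappings.

```python
# Hypothetical sketch: generate an AU07.json (lid tightener) mapping file.
# Blendshape names and weights are placeholders, NOT the real MBLAB/FACSvatar
# values; mirror the schema of an existing file in au_json instead.
import json

au07 = {
    "Expressions_eyeSquintL_max": 0.6,  # assumed left lid-tightening blend shape
    "Expressions_eyeSquintR_max": 0.6,  # assumed right lid-tightening blend shape
}

with open("au_json/AU07.json", "w") as f:
    json.dump(au07, f, indent=4)
```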
@NumesSanguis @dza6549
Unfortunately, OpenFace has that limitation. When I asked the makers of OpenFace about more/asymmetric AUs, they said they couldn't do this because the available AU databases don't score intensity on all AUs, so no machine learning model can learn them. FACSvatar is not dependent on OpenFace, however. If some other input module sends a message to the bridge module formatted in a similar way, all other components will still work as normal. That's the whole idea of the modular approach FACSvatar takes: it wants to be as general as possible.
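As a rough illustration of that modular idea, a different input module could publish AU values to the bridge itself. FACSvatar's modules communicate over ZeroMQ, but the address, topic, and payload keys below are assumptions for illustration; copy the actual message format from the existing OpenFace input module.

```python
# Hypothetical sketch of a non-OpenFace input module publishing AU data.
# The address, topic, and payload keys are assumptions; mirror the message
# format used by FACSvatar's real OpenFace module when implementing this.
import json
import time
import zmq

ctx = zmq.Context()
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://127.0.0.1:5570")  # assumed address the bridge subscribes to
time.sleep(0.5)                   # give subscribers time to connect (slow joiner)

payload = {
    "timestamp": int(time.time() * 1000),
    "au_r": {"AU01": 0.8, "AU12": 1.5},  # AU intensities from any source
    "pose": {"pitch": 0.0, "yaw": 0.1, "roll": 0.0},
}

# Multipart message: topic frame + JSON data frame
pub.send_multipart([b"custom_input.au", json.dumps(payload).encode("utf-8")])
```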
It might be possible to train a new OpenFace model with synthetic data, i.e. not pictures of real people but pictures of 3D heads. I certainly don't have the machine learning knowledge to optimize the design of a new OpenFace model. Unity has the ML-Agents toolkit, which connects to TensorFlow, and has been experimenting with synthetic data. See https://blogs.unity3d.com/2018/09/11/ml-agents-toolkit-v0-5-new-resources-for-ai-researchers-available-now/

Conceivably one might use Unity to produce the several tens of millions of images necessary for training a new model, using the DAZ FaceShifter morphs from intheflesh mentioned above or the Polywink sample available here https://www.polywink.com/9-60-blendshapes-on-demand.html (which I think might be based on FACS), under a variety of lighting conditions, with synthetic backgrounds, diverse camera angles, etc. However, optimizing the Unity/TF/PPO learning algorithm is sadly beyond my capacity.

We also need to consider how the extra capacity to detect additional and asymmetric AUs would benefit the project. As we have discussed, unless the model has the required blendshapes, the additional detection capacity will not be used. On the other hand, extra detection capacity might motivate modellers to include more AU blendshapes in their 3D models. Another variable is that I can only find a limited subset of the complete set of FACS AUs online; most researchers appear to be using a subset and not the full set. In addition, the emotional FACS used by others appears to be limited to only ~7 emotions. Additional emotions are mentioned here: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4734883/

This is all very interesting 👍
@dza6549 Synthetic images might be a good resource (and could also be made in Blender), but the general rule in AI is that you need to train on real data as well, at least for signal input from the real world such as video. Synthetic data can definitely help make a model stronger, though. So a small database of videos with all AUs annotated per frame plus a very large synthetic database seems like a good idea: quality + quantity.

A friend of mine has actually created an add-on called FACSHuman (paper) for MakeHuman. At some point he'll release it as open source.

Personally I don't believe that facial configuration == emotion, as described by the theory of Paul Ekman. But if you're interested, see this recent survey among emotion researchers: Ekman, P. What scientists who study emotion agree about. Perspectives on Psychological Science 11, 1 (2016), 31–34. For me, the Theory of Constructed Emotion by Lisa Feldman Barrett seems more plausible. But this is going off-topic with regard to the issue raised. If you want to continue about emotion theory, please post a topic here: https://www.reddit.com/r/FACSvatar/ and I'll be glad to continue ^_^
Hi NumesSanguis,
Thanks for sharing.
To realize more detailed expressions of the avatar in Unity 3D, I think I need to convert more AUs to blendshapes, which means more AU0X.json files in the folder FACSvatar-master\modules\process_facstoblend\au_json (there are only 20 .json files there).
If I understand correctly, where can I get more AU0X.json files, e.g. AU07.json? Or should I write these files manually? If so, are there any mapping rules between AUs and blendshapes that I can follow?
Thanks again.