
Click me for Video Demonstration

Face Analyzer

The purpose of this Android app is to use the Microsoft Face API to not only detect individual faces in an image, but also provide information about facial attributes for each face such as emotions, estimated age, gender, and more. Possible applications for this app are at amusement parks, classrooms, and residential homes.

  1. Amusement parks can use the app to collect data about the types of audiences that rides attract, based on age and other attributes, in addition to analyzing the emotions of people before and after the ride.
  2. Furthermore, the app can be used in classrooms for analyzing students' faces during lessons. The teacher can then review the data about emotions to see whether students understood, enjoyed, or disliked the lesson.
  3. Finally, another application of the app is in residential homes, where caretakers can regularly use the app to determine patients' emotions and store the results in a database for later analysis.


Usage:

The app is simple to use: the first page contains two buttons, one for taking the picture and the other for processing it. Hence, the app requires Camera Permission. Once the picture is taken, you can press the "Process" button and the app will use an AsyncTask and the Microsoft Face API to detect the faces in the image and get information about facial attributes such as age, head pose, gender, emotions, and more. (You can customize which data the app detects and analyzes by specifying it in FaceServiceClient.FaceAttributeType.MY_FACIAL_ATTRIBUTES, which is located in the doInBackground method of the AsyncTask. For more details, check out the Detecting Particular Facial Attributes section.)

Once the image has been processed, the app takes you to a second page where, for each face detected in the image, it generates a thumbnail of the individual and displays it in a ListView alongside the information analyzed on the previous page. Once again, the Microsoft Face API offers a variety of features, which can be found on its site, and you can choose which FaceAttributeType will be analyzed by specifying it in the AsyncTask.
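The thumbnail step above involves cropping each detected face rectangle out of the full photo. As a rough, pure-Java sketch (this is not the app's actual code), the key detail is clamping the rectangle to the image bounds first, since a detected rectangle can spill past the edge of the photo; the field names mirror the Face API's FaceRectangle:

```java
// Illustrative sketch: clamp a detected face rectangle to the image bounds
// before cropping a thumbnail. Field names mirror the Face API's FaceRectangle.
class FaceCrop {
    static class Rect {
        int left, top, width, height;
        Rect(int l, int t, int w, int h) { left = l; top = t; width = w; height = h; }
    }

    // Returns a rectangle guaranteed to lie inside an imageWidth x imageHeight image.
    static Rect clamp(Rect r, int imageWidth, int imageHeight) {
        int left = Math.max(0, r.left);
        int top = Math.max(0, r.top);
        int width = Math.min(r.width, imageWidth - left);
        int height = Math.min(r.height, imageHeight - top);
        return new Rect(left, top, width, height);
    }
}
```

In the Android app itself, the clamped values would then be passed to Bitmap.createBitmap(source, left, top, width, height) to produce the ListView thumbnail; createBitmap throws if the rectangle extends outside the source image, which is why the clamp matters.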


Setup:

Please note that this app requires the use of Microsoft Azure's Face API. Without an API Key, you will not be able to use the app as it was intended. The following sections contain the full set of instructions for getting your own API key for free and using it in the app by changing a single line of code.

Downloading to Android Studio

You can fork this project on GitHub, download it, and then open it as a project in Android Studio. Once you have done so, it can be run on your Android device.

Making the Azure Account

In order to run the face detection and analysis, you must get an API Subscription Key from the Azure Portal. This page by Microsoft describes the features and capabilities of the Face API. You can create a free Azure account that doesn't expire at this link by clicking the "Get API Key" button and choosing the option to create an Azure account.

Getting the Face API Key from Azure Portal

Once you have created your account, head to the Azure Portal. Follow these steps:

  1. Click on "Create a resource" on the left side of the portal.
  2. Underneath "Azure Marketplace", click on the "AI + Machine Learning" section.
  3. Now, under "Featured" you should see "Face". Click on that.
  4. You should now be at this page. Fill in the required information and press "Create" when done.
  5. Now, click on "All resources" on the left hand side of the Portal.
  6. Click on the name you gave the API.
  7. Underneath "Resource Management", click on "Manage Keys".


You should now be able to see two different subscription keys that you can use. Follow the additional instructions to see how to use the API Key in the app.

Using the API Key in the app

Head over to the MainActivity page in Android Studio since that is where the API Key will be used when creating the FaceServiceClient object. Where it says in onCreate:

faceServiceClient = new FaceServiceRestClient("<YOUR ENDPOINT HERE>", "<YOUR API SUBSCRIPTION KEY>"); 

replace <YOUR API SUBSCRIPTION KEY> with one of your two keys from the Azure Portal. (If you haven't gotten your API Key yet, read the previous two sections.) <YOUR ENDPOINT HERE> should be replaced with one of the endpoints from this API Documentation link. The format should be similar to:

"https://<LOCATION>/face/v1.0"

where <LOCATION> should be replaced with something like uksouth.api.cognitive.microsoft.com or japaneast.api.cognitive.microsoft.com. All available endpoints are listed at this link.
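As a quick sanity check, the endpoint string is just the region host dropped into that template. A hypothetical helper (not part of the app) makes the format explicit:

```java
// Builds the Face API base endpoint from a region host, e.g.
// "uksouth.api.cognitive.microsoft.com" -> "https://uksouth.api.cognitive.microsoft.com/face/v1.0"
class Endpoint {
    static String forLocation(String location) {
        return "https://" + location + "/face/v1.0";
    }
}
```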

Now that you have the Face API Key, you can use the app as it was intended. Please note that if you are using the free, standard plan, you can only make 20 API transactions/calls per minute. Therefore, if that limit is exceeded, you may run into runtime errors.
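If you want to stay under the 20-calls-per-minute limit programmatically, one option is to space requests out on the client. The sketch below is a hypothetical helper, not part of the repository: it computes how long to wait before the next call, given when the previous one was made.

```java
// Illustrative client-side throttle for the free tier's 20 calls/minute.
// Spacing calls at least 60000 / 20 = 3000 ms apart keeps under the limit.
class Throttle {
    private final long minIntervalMs;
    private long lastCallMs = Long.MIN_VALUE;

    Throttle(int callsPerMinute) {
        this.minIntervalMs = 60_000L / callsPerMinute;
    }

    // Milliseconds to wait before the next call is allowed; 0 if it can go now.
    long delayFor(long nowMs) {
        if (lastCallMs == Long.MIN_VALUE) return 0;
        long elapsed = nowMs - lastCallMs;
        return Math.max(0, minIntervalMs - elapsed);
    }

    // Record that a call was just made at the given time.
    void recordCall(long nowMs) {
        lastCallMs = nowMs;
    }
}
```

Passing the current time in explicitly (rather than reading System.currentTimeMillis() inside) keeps the logic easy to test; in the app you would sleep or post a delayed task for delayFor(...) milliseconds before firing the next request.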

Detecting Particular Facial Attributes

The face analysis happens in the detectandFrame method of MainActivity.java. More specifically, detectandFrame -> AsyncTask -> doInBackground. This is what the code looks like for detecting head position, age, gender, emotion, and facial hair:

FaceServiceClient.FaceAttributeType[] faceAttr = new FaceServiceClient.FaceAttributeType[]{
     FaceServiceClient.FaceAttributeType.HeadPose,
     FaceServiceClient.FaceAttributeType.Age,
     FaceServiceClient.FaceAttributeType.Gender,
     FaceServiceClient.FaceAttributeType.Emotion,
     FaceServiceClient.FaceAttributeType.FacialHair
};

You can change it to include other types, such as FaceServiceClient.FaceAttributeType.Smile. For more FaceAttributeTypes, you can check out one of the JSON files from the Face API page.

Now that you have detected the face attributes, you will have to change CustomAdapter.java in order to display the results of the detection process. In the getView method, the code uses faces[position] to get an element of the array of type Face. Then, you can use faces[position].faceAttributes followed by a particular attribute name to get information about that attribute. The code is below:

//Getting the Gender:
faces[position].faceAttributes.gender

//Getting facial hair information:
//Probability of having a beard:
faces[position].faceAttributes.facialHair.beard
//Probability of having sideburns:
faces[position].faceAttributes.facialHair.sideburns
//Probability of having a moustache:
faces[position].faceAttributes.facialHair.moustache
Please note that if you did not specify a certain face attribute to be detected, then accessing faces[position].faceAttributes.thatFacialAttribute in the getView method will give you errors, since the field will not be populated. Additionally, certain attributes, like head pose and facial hair, have attributes within themselves: faces[position].faceAttributes.facialHair can end in moustache, sideburns, or beard, and faces[position].faceAttributes.headPose can end in yaw, roll, or pitch.
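In getView, the adapter ultimately has to turn these numeric values into display text for the ListView row. As a rough, hypothetical sketch of that formatting step (the real adapter's layout differs), assuming the facial-hair values are probabilities in [0, 1]:

```java
import java.util.Locale;

// Hypothetical formatter for the facial-hair probabilities shown in the ListView.
class FaceText {
    static String facialHair(double beard, double moustache, double sideburns) {
        // Convert each probability to a whole-number percentage for display.
        return String.format(Locale.US,
                "Beard: %.0f%%, Moustache: %.0f%%, Sideburns: %.0f%%",
                beard * 100, moustache * 100, sideburns * 100);
    }
}
```

Using an explicit Locale avoids the decimal separator changing with the device's language settings.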

If you found this repository useful, you may benefit from checking out my YouTube channel for learning Android app development. It is called IJ Apps and has 50+ tutorials (as of June 20th), starting with the basics and then covering more advanced topics.

There is also a video demonstration of Face Analyzer on my channel if you would like to see how it works & how to use it.

Once again, check out the YouTube Channel and don't forget to subscribe!


Future-Proofing:

This repository was posted on January 9th, 2019, so updates may have been made to the Face API since then. As of the time of posting, the project uses the following implementation in build.gradle:

//This can be changed for newer versions of the API. 
implementation 'com.microsoft.projectoxford:face:1.4.3'

Feel free to make any changes to the app once you have cloned it and if you have any questions or issues, you can contact me at [email protected] or by raising an issue for this GitHub repository. I hope that you find this app useful and you enjoy using and testing it!

Furthermore, you may also want to check out some of my other repositories for Android apps:

  • Kairos Face Recognition: The purpose of this Android app is to use Kairos's SDK for Android in order to implement facial recognition. Features of this app include registering and identifying users when given an image. README
  • GitHub Automated File Uploader: A command-line automation tool to upload files to GitHub, regardless of whether the files are in a git repository. Runs with a single command.
  • Fingerprint Authentication: A simple app that demonstrates how to use a device's fingerprint reader to authenticate a person's finger and identify it among existing fingerprints. README
