Advanced Behavior Classification
To reliably detect the ongoing behavior of experimental animals, it is often necessary to define precisely what constitutes the event of interest and which parameter changes matter. Luckily, machine learning based classification can be used to detect behavioral episodes reliably, either with supervised approaches (e.g. with SiMBA) trained on previously labelled examples, or with unsupervised approaches (e.g. with B-SOID) that cluster the observed behavioral space into groups of mathematically distinct behaviors.
With the current generation of low-latency, "real-time" pose estimation (SLEAP, DLC-Live), it is possible to extract essential pose information and feed it directly into classifiers to enable online behavior detection. DLStream now incorporates interfaces that integrate classifiers directly into closed-loop experiments, translating previously "offline" classification systems into "online" feedback (complex behavior-dependent triggers).
This allows you to conduct user-defined, complex behavior-dependent experiments using classifiers from your previous "offline" analysis pipeline.
This page provides general information on how to load your classifiers into DLStream. For information regarding a specific type of classifier, please refer to the corresponding section down below.
- Locate your classifier of interest and copy its absolute path (e.g. `C:\Bsoid\models\bsoid_classifier.sav`) and add it to the `settings.ini` under `[Classification]` as `PATH_TO_CLASSIFIER`. See below for an example of this section of `settings.ini`.
- You have two options to run a classifier in DLStream:
  - You can run a pool of classifiers (multiple classifiers running in parallel) that compensates for any "blind" periods by picking up pose estimation windows whenever all other classifiers in the pool are busy.
    Note, however, that this comes at a computational cost and can strain slow setups. The overall number of classifiers in the pool is defined by the parameter `POOL_SIZE`, but it is only used if the experiment specifically utilizes `ClassificationPools` - we will explain these types of experiments later. We recommend trying this option; you can always reduce the pool to `1` if you do not want to use multiple classifiers in parallel.
    Example calculation: if your inference speed for pose estimation is 30 ms but your classifier takes 40 ms to classify ongoing behavior, it would be wise to use at least 2 classifiers to cover the short "blind spot".
  - You can run it as is, with a single classifier, which means that any pose information arriving during the classifier's runtime will be skipped.
    Depending on your setup, you will encounter some latency between pose estimation and triggering (which is normal), but additionally all pose estimation information during that period will not be classified. For fast setups this is usually no problem, and even for slow setups it is a good solution to reduce computational resource cost.
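The example calculation above can be turned into a simple rule of thumb, assuming the pool only needs to cover the classifier's runtime relative to the pose estimation interval (the function and variable names here are illustrative, not part of DLStream):

```python
import math

def minimal_pool_size(pose_interval_ms: float, classifier_ms: float) -> int:
    """Smallest number of parallel classifiers needed so that one classifier
    is always free when the next pose estimation window arrives."""
    return max(1, math.ceil(classifier_ms / pose_interval_ms))

# 30 ms pose estimation, 40 ms classification -> at least 2 classifiers
print(minimal_pool_size(30, 40))  # → 2
```

In practice you may want one extra classifier on top of this minimum as headroom for jitter in classification time.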
```ini
[Classification]
PATH_TO_CLASSIFIER = C:\Bsoid\models\bsoid_classifier.sav
# time window used for feature extraction (currently only works with 15)
TIME_WINDOW = 15
# number of parallel classifiers to run; this depends on your classification time.
# You need enough classifiers to cover your average classification time (see the example calculation above).
POOL_SIZE = 4

## B-SOID specific
# class/category of identified behavior to use as trigger (only used for B-SOID)
TRIGGER = 5

## SiMBA specific
# threshold to accept a classification probability as positive detection (SiMBA)
THRESHOLD = 0.7
# feature extraction currently works with millimeters, not px, so be sure to enter the correct factor (as in SiMBA)
PIXPERMM = 3.651
```
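The section above is a standard ini file, so it can be read with Python's built-in configparser. A minimal sketch, assuming the section layout from the example (how DLStream itself loads the file may differ):

```python
import configparser

# parse the [Classification] section as it appears in settings.ini
config = configparser.ConfigParser()
config.read_string("""
[Classification]
PATH_TO_CLASSIFIER = C:\\Bsoid\\models\\bsoid_classifier.sav
TIME_WINDOW = 15
POOL_SIZE = 4
TRIGGER = 5
THRESHOLD = 0.7
PIXPERMM = 3.651
""")

clf = config["Classification"]
print(clf.getint("TIME_WINDOW"), clf.getint("POOL_SIZE"))   # 15 4
print(clf.getfloat("THRESHOLD"), clf.getfloat("PIXPERMM"))  # 0.7 3.651
```

To read from the actual file, replace `read_string(...)` with `config.read("settings.ini")`.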
For information on how to generate an unsupervised classifier using B-SOID, please refer to their official guidelines: B-SOID.
You can use your B-SOID trained classifiers directly in DLStream without further optimization. Here is how:
B-SOID classifiers can be used as is. Just follow the instructions above.
However, B-SOID classification is not a binary classification; it outputs the most likely behavior motif/cluster, so you can use the `TRIGGER` parameter to set the motif that you want to react to.
It is also possible to change the trigger in the experiment, so that multiple behavior motifs can be used.
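Because B-SOID returns a motif label rather than a probability, the trigger decision reduces to an equality check against `TRIGGER`. A minimal sketch of that decision (function and variable names are illustrative, not DLStream's API):

```python
def motif_matched(predicted_class: int, trigger: int) -> bool:
    """True when the classifier's most likely motif equals the target motif."""
    return predicted_class == trigger

TRIGGER = 5  # from the [Classification] section of settings.ini

print(motif_matched(5, TRIGGER))  # → True  (fire the trigger)
print(motif_matched(3, TRIGGER))  # → False (keep waiting)
```

Changing the trigger at runtime then amounts to passing a different `trigger` value per trial, which is what the `target_class` argument of the trigger's `check_skeleton` does.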
```ini
[Classification]
## B-SOID specific
# class/category of identified behavior to use as trigger (only used for B-SOID)
TRIGGER = 5
```
Currently, we only support the standard 14 bp, two-animal pose estimation from SiMBA. Read the section Multiple Animal Experiments in DLStream if you are using identical-looking animals for this. We will update the available modes in the future.
If you are not using the same pose estimation network for both applications, make sure that the body parts are in the same order!
For SiMBA 14 bp: Ear_left_1, Ear_right_1, Nose_1, Center_1, Lat_left_1, Lat_right_1, Tail_base_1, Ear_left_2, Ear_right_2, Nose_2, Center_2, Lat_left_2, Lat_right_2, Tail_base_2. Remember that you can use the same training data (labelled images) to train pose estimation networks that are optimized for "real-time" inference speed.
Also note that we are working together with the SiMBA developers to provide new "real-time" optimized feature extraction scripts that will further enhance SiMBA performance in DLStream! The feature extraction is currently the bottleneck of SiMBA-based classification, so stay tuned!
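If you are unsure whether your pose estimation network outputs body parts in the expected order, a quick sanity check against the list above can save debugging time (the helper below is illustrative, not part of DLStream):

```python
# expected body part order for SiMBA 14 bp, two-animal classification
SIMBA_14BP_ORDER = [
    "Ear_left_1", "Ear_right_1", "Nose_1", "Center_1",
    "Lat_left_1", "Lat_right_1", "Tail_base_1",
    "Ear_left_2", "Ear_right_2", "Nose_2", "Center_2",
    "Lat_left_2", "Lat_right_2", "Tail_base_2",
]

def check_bodypart_order(bodyparts):
    """Raise if the network's body parts deviate from the SiMBA 14 bp order."""
    if list(bodyparts) != SIMBA_14BP_ORDER:
        raise ValueError(f"Body part order mismatch: {list(bodyparts)}")

check_bodypart_order(SIMBA_14BP_ORDER)  # passes silently
```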
If you are interested in using your own feature extraction script, get in touch!
For information on how to generate a supervised classifier using SiMBA, please refer to their official guidelines: SiMBA.
SiMBA classifiers can be further optimized using pure-predict which lowers classification time even further.
For converting SiMBA classifiers into pure-SiMBA classifiers do the following:
Open the `DeepLabStream/convert_classifier.py` file and enter the absolute path to your SiMBA classifier (e.g. C:\SiMBA\models\simba_classifier.sav) under `if __name__ == "__main__":`

```python
if __name__ == "__main__":
    path_to_classifier = r"C:\SiMBA\models\simba_classifier.sav"  # <--- enter your path here
    convert_classifier(path_to_classifier)
```

and run the script within your DLStream environment. It will save the pure-SiMBA classifier in the same folder as the original with a `_pure` suffix.
Then follow the above general setup guidelines.
SiMBA classification is a binary classification, so it outputs the probability of its classification. Using the `THRESHOLD` parameter, you can set the threshold for a positive detection. This can also be adapted in the experiment.
```ini
[Classification]
## SiMBA specific
# threshold to accept a classification probability as positive detection (SiMBA)
THRESHOLD = 0.7
# feature extraction currently works with millimeters, not px, so be sure to enter the correct factor (as in SiMBA)
PIXPERMM = 3.651
```
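The two SiMBA-specific parameters map to two simple operations: converting pixel measurements to millimeters before feature extraction, and thresholding the classifier's probability afterwards. A minimal sketch of both, with illustrative names (DLStream's internal functions may differ):

```python
PIXPERMM = 3.651   # from settings.ini; must match the value used in SiMBA
THRESHOLD = 0.7    # from settings.ini

def px_to_mm(value_px: float) -> float:
    """Convert a pixel measurement to millimeters, as SiMBA's features expect."""
    return value_px / PIXPERMM

def positive_detection(probability: float) -> bool:
    """Accept a classification probability as a positive detection at or above THRESHOLD."""
    return probability >= THRESHOLD

print(round(px_to_mm(36.51), 2))  # → 10.0
print(positive_detection(0.82))   # → True
```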
Currently, only custom experiments support the integration of behavior classification based on B-SOID or SiMBA derived classifiers. If you have not designed your own experiments yet, we recommend that you go through the following information first:
We provide example experiments that can be adapted to both classifier types and will help you design your experiment of choice.
Fundamentally, classifiers are the same as any other TRIGGER
module and can be used as such.
Let's dive into an example:
The `BsoidClassBehaviorPoolTrigger` module is built the same as any other `TRIGGER` module in DLStream except for one detail.
The ClassifierPool
is created in the Experiment
and is passed to the TRIGGER
when it is created.
See below for the complete example:
The same principle is true for the SIMBA version.
```python
from collections import deque

# TIME_WINDOW is read from the [Classification] section of settings.ini
# by DLStream's config loading.


class BsoidClassBehaviorPoolTrigger:
    """
    Trigger to check if an animal's behavior is classified as a specific motif
    with a B-SOID trained classifier.
    """

    def __init__(self, target_class: int, class_process_pool, debug: bool = False):
        """
        Initialising trigger with the following parameters:
        :param int target_class: target classification category that should be used as trigger.
            Must match the "Group" number of the cluster in B-SOID. If you plan to use the
            classifier for multiple trial triggers in the same experiment with different
            targets, we recommend setting the target_class during check_skeleton.
        :param class_process_pool: list of dictionaries with keys process: mp.Process,
            input: mp.Queue, output: mp.Queue; used for lossless frame-by-frame classification
        """
        self._trigger = target_class
        self._process_pool = class_process_pool
        self._last_result = [0]
        self._feature_id = 0
        self._center = None
        self._debug = debug
        self._skeleton = None
        self._time_window_len = TIME_WINDOW
        # feature extraction is done in the classification pool
        self.feat_extractor = None
        self._time_window = deque(maxlen=self._time_window_len)

    def fill_time_window(self, skeleton):
        from utils.poser import transform_2pose

        pose = transform_2pose(skeleton)
        self._time_window.appendleft(pose)

    def check_skeleton(self, skeleton, target_class: int = None):
        """
        Checking skeleton for trigger; passes the skeleton window to the classifier
        once the window length is reached and collects skeletons otherwise.
        :param skeleton: a skeleton dictionary, returned by calculate_skeletons() from the poser file
        :param target_class: optional, overwrites self._trigger with the target class. This enables
            setting up different trials (with different motifs/categories) in the experiment
            without the need to initialize two classifiers: default None
        :return: response, a tuple of result (bool) and response body;
            the response body is used for plotting and outputting results to trial dataframes
        """
        self.fill_time_window(skeleton)
        # check if the necessary time window was collected and pass it to the classifier
        if len(self._time_window) == self._time_window_len:
            self._feature_id += 1
            self._process_pool.pass_time_window(
                (self._time_window, self._feature_id), debug=self._debug
            )  # <---- the time window is passed to the parallel process pool where feature extraction and classification happen
        # check if a process from the pool is done with the result
        clf_result, feature_id = self._process_pool.get_result(debug=self._debug)
        if clf_result is not None:
            self._last_result = clf_result[0]
        if target_class is not None:
            self._trigger = target_class
        # choosing a point to draw near the skeleton
        self._center = skeleton[list(skeleton.keys())[0]]
        result = False
        text = "Current Class: {}".format(self._last_result)
        if self._last_result[0] == self._trigger:
            result = True
            text = "Motif matched: {}".format(self._last_result)
        color = (0, 255, 0) if result else (0, 0, 255)
        response_body = {
            "plot": {"text": dict(text=text, org=self._center, color=color)}
        }
        response = (result, response_body)
        return response

    def get_trigger_threshold(self):
        return self._trigger

    def get_last_prob(self):
        # the B-SOID trigger stores the last classification in _last_result
        return self._last_result

    def get_time_window_len(self):
        return self._time_window_len
```
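The windowing logic that gates classification in `fill_time_window`/`check_skeleton` can be sketched in isolation: a deque with `maxlen=TIME_WINDOW` collects poses, and the classifier is only invoked once the window is full. The pose values below are dummies standing in for `transform_2pose(skeleton)`:

```python
from collections import deque

TIME_WINDOW = 15  # from settings.ini

time_window = deque(maxlen=TIME_WINDOW)

classified = 0
for frame_idx in range(20):
    time_window.appendleft(("pose", frame_idx))  # stand-in for transform_2pose(skeleton)
    if len(time_window) == TIME_WINDOW:
        # in DLStream, this is where the window is handed to the process pool
        classified += 1

print(classified)  # frames 14..19 fill the window -> 6 classifications
```

Because `maxlen` discards the oldest pose automatically, every frame after the warm-up phase yields a full, sliding window.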
The corresponding experiment can be found in `DeepLabStream/experiments/custom/experiments.py` and is called `SimbaBehaviorPoolExperiment`:
It follows the same principle as any other experiment, except for one detail: it initiates a `ProcessPool` that runs classifiers in parallel, rather than taking the usual approach.
The same experiment works with pure-SiMBA classifiers.
See below for the complete example:
The same principle is true for the B-SOID version.
```python
# ExampleProtocolProcess, FeatSimbaProcessPool, SimbaThresholdBehaviorPoolTrigger,
# Timer, plot_triggers_response and the POOL_SIZE/THRESHOLD settings are provided
# by DLStream's experiment, classification and utility modules.


class SimbaBehaviorPoolExperiment:
    """
    Test experiment for SiMBA classification.
    Simple class to contain all of the experiment properties, including classification.
    Uses multiprocessing to ensure the best possible performance and to showcase that
    it is possible to work with any type of equipment, even timer-dependent ones.
    """

    def __init__(self):
        """Classifier process and initiation of behavior trigger"""
        self.experiment_finished = False
        self._process_experiment = ExampleProtocolProcess()
        self._process_pool = FeatSimbaProcessPool(POOL_SIZE)  # <---- initiate the classifier & feature extraction pool
        # pass the classifier to the trigger, so that check_skeleton is the only function that passes skeletons
        # initiate in the experiment, so that the process can be started with start_experiment
        self._behaviortrigger = SimbaThresholdBehaviorPoolTrigger(
            prob_threshold=THRESHOLD, class_process_pool=self._process_pool, debug=False
        )  # <---- pass the classifier pool to the trigger
        self._event = None
        # not fully utilized in this experiment but useful to keep for further adaptation
        self._current_trial = None
        self._trial_count = {trial: 0 for trial in self._trials}
        self._trial_timers = {trial: Timer(10) for trial in self._trials}
        self._exp_timer = Timer(600)

    def check_skeleton(self, frame, skeleton):  # <---- same as in all other experiments
        """
        Checking each passed animal skeleton for a pre-defined set of conditions;
        outputting the visual representation, if it exists;
        advancing trials according to the inherent logic of the experiment.
        :param frame: frame on which the animal skeleton was found
        :param skeleton: skeleton, consisting of multiple joints of an animal
        """
        self.check_exp_timer()  # checking if the experiment is still on
        for trial in self._trial_count:
            # checking if any trial hit a predefined cap
            if self._trial_count[trial] >= 10:
                self.stop_experiment()
        if not self.experiment_finished:
            for trial in self._trials:
                # check for all trials if the condition is met
                # this passes the skeleton to the trigger, where the feature extraction is done
                # and the extracted features are passed to the classifier process
                result, response = self._trials[trial]["trigger"](
                    skeleton, target_prob=self._trials[trial]["target_prob"]
                )  # <---- call the trigger as in all other experiments
                plot_triggers_response(frame, response)
                # if the trigger reports back that the behavior was found: do something
                # currently nothing is done, except counting the occurrences
                if result:
                    if self._current_trial is None:
                        if not self._trial_timers[trial].check_timer():
                            self._current_trial = trial
                            self._trial_timers[trial].reset()
                            self._trial_count[trial] += 1
                            print(trial, self._trial_count[trial])
                else:
                    if self._current_trial == trial:
                        self._current_trial = None
                        self._trial_timers[trial].start()
            self._process_experiment.set_trial(self._current_trial)  # <---- pass the trigger result to the stimulation process as in all other experiments
        else:
            pass
        return result, response

    @property
    def _trials(self):
        """
        Defining the trials.
        The trigger was already initiated at the beginning of the experiment, so we refer to it directly here!
        You can use target_prob to overwrite the threshold for this classifier with every check_skeleton.
        """
        trials = {
            "SimBA1": dict(
                trigger=self._behaviortrigger.check_skeleton, target_prob=None, count=0
            )
        }
        return trials

    def check_exp_timer(self):
        """
        Checking the experiment timer
        """
        if not self._exp_timer.check_timer():
            print("Experiment is finished")
            print("Time ran out.")
            self.stop_experiment()

    def start_experiment(self):
        """
        Start the experiment
        """
        self._process_experiment.start()  # <--- the stimulation process
        self._process_pool.start()  # <--- the process pool needs to start as well
        if not self.experiment_finished:
            self._exp_timer.start()

    def stop_experiment(self):
        """
        Stop the experiment and reset the timer
        """
        self.experiment_finished = True
        self._process_experiment.end()  # <--- the stimulation process
        self._process_pool.end()  # <--- the process pool needs to end as well
        print("Experiment completed!")
        self._exp_timer.reset()

    def get_trial(self):
        """
        Check which trial is going on right now
        """
        return self._event

    def get_info(self):
        """Returns optional info"""
        info = self._behaviortrigger.get_last_prob()
        return info
```
Apart from the `TRIGGER` setup, the experiment can be adapted like any other experiment. Check out our other wiki pages to design it to your needs:
Simple Behavioral Analysis (SimBA) – an open source toolkit for computer classification of complex social behaviors in experimental animals. Simon R.O. Nilsson, Nastacia L. Goodwin, Jia Jie Choong, Sophia Hwang, Hayden R. Wright, Zane C. Norville, Xiaoyu Tong, Dayu Lin, Brandon S. Bentzley, Neir Eshel, Ryan J. McLaughlin, Sam A. Golden. bioRxiv 2020.04.19.049452; doi: https://doi.org/10.1101/2020.04.19.049452
B-SOiD: An Open Source Unsupervised Algorithm for Discovery of Spontaneous Behaviors. Alexander I. Hsu, Eric A. Yttri. bioRxiv 770271; doi: https://doi.org/10.1101/770271