Bonsai - SLEAP


Bonsai.SLEAP is a Bonsai interface for SLEAP, enabling real-time, multi-animal pose and identity estimation using pretrained network models stored in protocol buffer (.pb) format.

Bonsai.SLEAP loads these .pb files using TensorFlowSharp, a set of .NET bindings for TensorFlow that allows native inference on either the CPU or the GPU. Given a .pb file and its corresponding configuration file (training_config.json), the Predict operators in Bonsai.SLEAP (listed under How to use below) push live image data through the inference network and output a set of identified poses, from which you can extract an object identity and the positions of specific body parts. Bonsai can then use this data to drive online effectors or simply save it to an output file.

The Bonsai.SLEAP package came about following a fruitful discussion with the SLEAP team during the Quantitative Approaches to Behaviour course.

How to install

Bonsai.SLEAP can be downloaded through the Bonsai package manager. To get visualizer support, you should download both the Bonsai.SLEAP and Bonsai.SLEAP.Design packages. However, to run either CPU or GPU inference, you also need to pair them with a compiled native TensorFlow binary. You can find precompiled binaries for 64-bit Windows at https://www.tensorflow.org/install/lang_c.

To use GPU TensorFlow (highly recommended for live inference), you also need to install the CUDA Toolkit and the cuDNN libraries. The current Bonsai.SLEAP package was developed and tested with CUDA v11.3 and cuDNN 8.2. Additionally, make sure you have a CUDA-compatible GPU with the latest NVIDIA drivers.

After downloading the native TensorFlow binary and cuDNN, you can follow these steps to get the required native files into the Extensions folder of your local Bonsai install:

  1. The easiest way to find your Bonsai install folder is to right-click on the Bonsai shortcut > Properties. The path to the folder will be shown in the "Start in" textbox;
  2. Copy the tensorflow.dll file from either the CPU or GPU TensorFlow release to the Extensions folder;
  3. If you are using TensorFlow GPU, make sure to add the cuda/bin folder of your cuDNN download to the PATH environment variable, or copy all of its DLL files to the Extensions folder.

How to use

Bonsai.SLEAP currently implements the following SLEAP networks through the corresponding Bonsai operators:

  • centroid:
    • Input : full frame with potentially multiple objects
    • Output : collection of multiple detected centroids in the input image
    • Operator : PredictCentroid
  • top-down-model:
    • Input : full frame with potentially multiple objects
    • Output : collection of detected poses (centroid + body parts) from multiple objects in the image
    • Operator : PredictPoses
  • top-down-id-model:
    • Input : full frame with potentially multiple objects
    • Output : collection of detected poses (centroid + body parts) plus identities from multiple objects in the image
    • Operator : PredictPoseIdentities
  • single_instance:
    • Input : cropped instance with a single object in the input image
    • Output : a single pose (body parts) for that object
    • Operator : PredictSinglePose

The general Bonsai workflow will thus be:

[figure: Bonsai_Pipeline — example pose tracking workflow]
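
Below is a minimal textual sketch of what such a workflow might look like when saved as a .bonsai file, assuming a webcam source feeding PredictPoses. The namespace strings, element names, and version number are illustrative assumptions rather than values taken from the package, so treat this only as orientation and build the actual workflow in the Bonsai editor.

    <?xml version="1.0" encoding="utf-8"?>
    <!-- Minimal sketch: camera frames are pushed through PredictPoses.          -->
    <!-- Namespace strings, type names, and the version number are assumptions;  -->
    <!-- the authoritative way to build this is through the Bonsai editor.       -->
    <WorkflowBuilder Version="2.7.0"
                     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                     xmlns:cv="clr-namespace:Bonsai.Vision;assembly=Bonsai.Vision"
                     xmlns:sleap="clr-namespace:Bonsai.Sleap;assembly=Bonsai.Sleap"
                     xmlns="https://bonsai-rx.org/2018/workflow">
      <Workflow>
        <Nodes>
          <!-- Live video source (any image source works, e.g. FileCapture) -->
          <Expression xsi:type="Combinator">
            <Combinator xsi:type="cv:CameraCapture">
              <cv:Index>0</cv:Index>
            </Combinator>
          </Expression>
          <!-- Pose inference: ModelFileName points at the exported .pb model folder, -->
          <!-- PoseConfigFileName at the matching training_config.json                -->
          <Expression xsi:type="Combinator">
            <Combinator xsi:type="sleap:PredictPoses">
              <sleap:ModelFileName>exported_model_folder</sleap:ModelFileName>
              <sleap:PoseConfigFileName>exported_model_folder\training_config.json</sleap:PoseConfigFileName>
            </Combinator>
          </Expression>
        </Nodes>
        <Edges>
          <Edge From="0" To="1" Label="Source1" />
        </Edges>
      </Workflow>
    </WorkflowBuilder>

From the prediction node, the pose stream can then be visualized, written to disk (for example with a CsvWriter), or used to drive closed-loop stimulation.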

Additional information can be extracted by selecting the relevant structure fields.

[figure: Bonsai_Pipeline_expanded — workflow with output fields selected]
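
One way this selection appears in a saved workflow is as a MemberSelector node downstream of the prediction operator. The selector path below (Centroid.Position) is a hypothetical example; the actual fields to pick are the ones Bonsai lists for the node's output in the editor, and for collection outputs you would typically isolate a single pose first.

    <!-- Sketch: pull a nested field out of the prediction output.           -->
    <!-- "Centroid.Position" is a hypothetical selector path used only to    -->
    <!-- illustrate the mechanism; pick real fields from the editor's menu.  -->
    <Expression xsi:type="MemberSelector">
      <Selector>Centroid.Position</Selector>
    </Expression>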

To use the Predict operators, you will need to set ModelFileName to the folder containing the exported .pb file of your pretrained SLEAP model, along with the corresponding PoseConfigFileName pointing to the training_config.json file.

If everything works out, you should see some indications in the Bonsai command line window about whether the GPU was successfully detected and enabled. The first frame will cold-start the inference graph, which may take a bit of time, but after that your poses should start streaming through!

SLEAP installation

For all questions regarding installation of SLEAP, please check the official docs.