Distributed depth camera system for point cloud stitching for ARENA

conix-center/pointcloud_stitching

Pointcloud Stitching for ARENA

Overview

Scalable, multi-camera distributed system for real-time pointcloud stitching in the ARENA (Augmented Reality Edge Network Area). This program is currently designed to use the D400 Series Intel RealSense depth cameras. Using the Librealsense 2.0 SDK, depth frames are grabbed and pointclouds are computed on the edge before the raw XYZRGB values are sent to a central computer over TCP sockets. The central program stitches the pointclouds together and displays them in a viewer using the PCL libraries.
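The exact wire format for the raw XYZRGB values is not documented in this README. As a rough sketch only (the 15-byte point layout below is an assumption for illustration, not the repository's actual protocol), serializing points for a TCP socket might look like:

```python
import struct

# Assumed layout, not the repository's real protocol: x, y, z as
# little-endian 32-bit floats, followed by r, g, b as single bytes.
POINT_FMT = "<fffBBB"
POINT_SIZE = struct.calcsize(POINT_FMT)  # 15 bytes per point

def pack_point(x, y, z, r, g, b):
    """Serialize one XYZRGB point to bytes for sending over a socket."""
    return struct.pack(POINT_FMT, x, y, z, r, g, b)

def unpack_point(buf):
    """Deserialize one XYZRGB point from its byte representation."""
    return struct.unpack(POINT_FMT, buf)
```

Framing a stream of such points (e.g. prefixing each message with a point count) would be handled by the server and client; this only illustrates the per-point payload.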

This system has been designed to allow 10-20 cameras to be connected simultaneously. Currently, our setup connects each RealSense depth camera to its own Intel i7 NUC computer. Each NUC is connected via Ethernet to a local network, along with the central computer that does the bulk of the computing. Our current camera calibration method uses a theodolite, a survey-precision instrument, to obtain real-world coordinates of each camera (we hope to update this method soon).

Installation

Installation differs between the RealSense camera servers on the edge and the central computing system. The instructions below are for Ubuntu 18.04.1 LTS.

Camera servers on the edge

  • Go to the Librealsense GitHub repository and follow the instructions to install the Librealsense 2.0 SDK
  • Ensure that your CMake version is 3.1 or later. If not, download and install a newer version from the CMake website
  • Navigate to this pointcloud_stitching repository
    mkdir build && cd build
    cmake ..
    make && sudo make install
  • Install OpenCV: sudo apt install libopencv-dev
  • Install OpenGL dependencies: sudo apt-get install libglfw3-dev libgl1-mesa-dev libglu1-mesa-dev

Central computing system

  • Follow the instructions to download and install PCL libraries from their website.
  • Install python3 and other packages (needed to run the camera-registration script)
    sudo apt-get update
    sudo apt-get install python3.6
    sudo apt-get install python3-pip
    python3 -m pip install --user scipy
    python3 -m pip install --user numpy
    python3 -m pip install --user pandas
  • Navigate to this pointcloud_stitching repository
    mkdir build && cd build
    cmake .. -DBUILD_CLIENT=true
    make && sudo make install
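The camera-registration script itself is not reproduced in this README. As a hedged illustration of what the NumPy/SciPy packages installed above enable (the function name and the yaw-only rotation are assumptions made for this example, not the script's actual logic), applying a known rigid transform to one camera's points might look like:

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Illustrative only: the repository's registration script may represent
# camera extrinsics differently (e.g. full 3-DOF rotations).
def transform_points(points, yaw_deg, translation):
    """Rotate Nx3 points about the z-axis by yaw_deg, then translate."""
    R = Rotation.from_euler("z", yaw_deg, degrees=True).as_matrix()
    return points @ R.T + translation

pts = np.array([[1.0, 0.0, 0.0]])
moved = transform_points(pts, 90.0, np.array([0.0, 0.0, 1.0]))
# A point on the x-axis, yawed 90 degrees and lifted one unit.
```

Each camera's theodolite-derived pose would supply the rotation and translation used here.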

Usage

Each RealSense is connected to an Intel i7 NUC, all of which are accessible through ssh from the ALAN (central) computer. To start running, go through each ssh connection and run pcs-camera-server. If the servers are set up correctly, each one should print "Waiting for client...". Then, on the ALAN computer, run pcs-multicamera-client -v to begin the pointcloud stitching (-v visualizes the pointcloud). For the other available options, run pcs-multicamera-client -h for an explanation of each one.
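Starting every server by hand gets tedious as the camera count grows. A minimal sketch of scripting the startup from the ALAN computer (the hostnames and the assumption of passwordless ssh are placeholders, not part of this repository):

```python
import subprocess

# Placeholder hostnames; substitute the actual ssh names of your NUCs.
NUC_HOSTS = ["nuc1", "nuc2", "nuc3"]

def server_command(host):
    """Build the ssh command that launches pcs-camera-server on one NUC."""
    return ["ssh", host, "pcs-camera-server"]

def start_servers(hosts):
    """Launch pcs-camera-server on every host over ssh (non-blocking)."""
    return [subprocess.Popen(server_command(h)) for h in hosts]

# Usage (not run here): once every server prints "Waiting for client...",
# start the stitching client on the ALAN computer:
#   procs = start_servers(NUC_HOSTS)
#   subprocess.run(["pcs-multicamera-client", "-v"])
```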

Build camera server: mkdir build && cd build && cmake .. && make && make install

Optimized Code

To run the optimized version of pcs-camera-server, run pcs-camera-optimized. It includes benchmarking tools that report the runtime of the optimized pipeline: processing the depth frames, performing the transforms, and converting the values. It also reports a theoretical FPS, but this figure is calculated without taking into account the time needed to grab a frame from the RealSense or to send the data over the network.

  • Usage:
    -f <file> sample data set to run
    -s send data to central camera server if available
    -m use SIMD instructions
    -t <n> use OpenMP with n threads
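Given the caveat above, the theoretical FPS is just the inverse of the summed per-frame stage times. A small sketch of that arithmetic (the function name and stage breakdown are illustrative, not taken from the benchmark's source):

```python
def theoretical_fps(stage_seconds):
    """Upper-bound FPS from the benchmarked stages alone (depth-frame
    processing, transforms, value conversion). Frame-grab and network
    time are deliberately excluded, as in the benchmark described above,
    so real throughput will be lower."""
    return 1.0 / sum(stage_seconds)

# e.g. 5 ms processing + 3 ms transform + 2 ms conversion -> 100 FPS ceiling
fps = theoretical_fps([0.005, 0.003, 0.002])
```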