tanaysaha edited this page Jan 11, 2020 · 36 revisions

helipad_det

The aim of this ROS package is to provide detection and pose estimation of an H-shaped helipad for the purpose of landing a UAV. The approach is based on the paper listed under References, and it provides robust, accurate detection of the helipad from different angles and orientations.

The module detects the centre of a helipad by finding the two circles around the 'H' (using the ratio of their radii) and then the 'H' itself.

The image is first converted to grayscale and then blurred to reduce noise. Edges are detected in the result, which is morphologically opened to remove some false detections, and contours are then extracted.
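The pre-processing steps above can be sketched in NumPy. This is only an illustrative stand-in for the OpenCV calls the package actually uses (`cvtColor`, `GaussianBlur`, etc.); the function name and kernel parameters are assumptions, not values from the package.

```python
import numpy as np

def preprocess(rgb, ksize=5, sigma=1.0):
    """Grayscale conversion followed by a Gaussian blur, as a NumPy
    stand-in for the cv2.cvtColor / cv2.GaussianBlur calls in the package.
    ksize and sigma are illustrative defaults."""
    # luminance weights matching OpenCV's RGB -> gray conversion
    gray = rgb @ np.array([0.299, 0.587, 0.114])
    # separable 1-D Gaussian kernel
    x = np.arange(ksize) - ksize // 2
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    # convolve along rows, then along columns
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, gray)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    return blurred
```

Blurring before edge and contour extraction suppresses single-pixel noise that would otherwise produce spurious contours.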

Helipad Detection

The helipad considered here is a 'H' inside two concentric circles.

The Helipad

Idea

The Circles

From the contours of the image, we extract circles. To do that we obtain the centre of a contour and compute the distances of the contour points from that centre. If the standard deviation σ of these distances satisfies

σ ≤ 0.05 · r̄,

where r̄ is the mean distance (the radius), then the shape is a circle. This relation is obtained by modelling the distances of the contour points from the centre as a Gaussian distribution: asserting that 95% of the points (about two standard deviations) have radii within 10% of the average gives 2σ ≤ 0.1 · r̄. We then check for concentricity by taking the ratio of the radii of two adjacent circles detected in the image and the distance between their centres. We can define our own parameters according to the geometry of the helipad at hand.
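A minimal sketch of this circle test and the concentricity check, in pure Python. The function names are illustrative (the package's own function is `circleDet`), and the 30-pixel / 1.2 thresholds are the values quoted later on this page:

```python
import math

def is_circle(points, rel_std=0.05):
    """Circle test from the criterion above: model the centre distances as
    Gaussian and require ~95% of radii within 10% of the mean,
    i.e. 2*sigma <= 0.1*mean, hence sigma/mean <= 0.05."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    radii = [math.hypot(x - cx, y - cy) for x, y in points]
    mean = sum(radii) / len(radii)
    var = sum((r - mean) ** 2 for r in radii) / len(radii)
    return math.sqrt(var) <= rel_std * mean, (cx, cy, mean)

def concentric(c1, c2, max_centre_dist=30.0, max_radius_ratio=1.2):
    """Concentricity check: centres close together and radius ratio
    within range. Thresholds are the ones used later on this page."""
    (x1, y1, r1), (x2, y2, r2) = c1, c2
    close = math.hypot(x1 - x2, y1 - y2) <= max_centre_dist
    ratio = max(r1, r2) / min(r1, r2)
    return close and ratio <= max_radius_ratio
```

A sampled circle passes the test while a square contour fails it, since a square's corner-to-centre distances deviate from the mean by roughly 17%.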

The 'H'

Given below is the image of the helipad.

The H

We obtain a closed contour of the H, given by a parameterized function γ : {0, 1, …, N−1} → ℝ², where each γ(i) represents a point of the contour on the image. We define the size of the contour to be its cardinality N.

H Contour

Define a function ρ, where ρ(i) gives us the 'sharpness' around the point γ(i) of the contour. This can be achieved by considering the distance of the point γ(i) to the line connecting γ(i − k) and γ(i + k), for some fixed offset k. This distance will be close to 0 for points along relatively straight lines and reach a maximum around corners. So,

ρ(i) = d(γ(i), line(γ(i − k), γ(i + k)))

is the required distance for each point on the contour. This gives us a 'signature' of that particular contour.
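The sharpness function can be sketched as follows. The names are illustrative (the package's own function is `pointToLineDistance`), and the neighbourhood offset `k = 5` is an assumed default, not a value from the package:

```python
import math

def point_to_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((by - ay) * px - (bx - ax) * py + bx * ay - by * ax)
    den = math.hypot(bx - ax, by - ay)
    return num / den if den else math.hypot(px - ax, py - ay)

def signature(contour, k=5):
    """Sharpness rho(i): distance of gamma(i) to the chord joining
    gamma(i-k) and gamma(i+k). Indices wrap around because the
    contour is closed."""
    n = len(contour)
    return [point_to_line_distance(contour[i],
                                   contour[(i - k) % n],
                                   contour[(i + k) % n])
            for i in range(n)]
```

On a densely sampled square, the signature is near zero along the edges and peaks at the four corners, which is exactly the behaviour the text describes.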

Given below is the representation of ρ(i) for a point on a straight line not near a corner, followed by the case where it is near a corner.

The Signature of an experimentally detected H.

This signature is then processed to obtain only the peaks. We compare this signature with the signature of the required H by checking the ratio of the distances between the maxima to the contour size in their individual signatures. For the centre of the H, we use the centroid of all the corners, obtained by taking the average of the points γ(i*) where each i* is an index of a maximum of ρ.
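Peak extraction and the corner centroid can be sketched as below. This is a simplified stand-in for the package's `smooth` function (it does no noise/outlier handling), and the `min_height` threshold is an assumption:

```python
def peak_indices(sig, min_height=1.0):
    """Local maxima of the signature above a small threshold; a simple
    stand-in for the smoothing/peak-picking done by the package."""
    n = len(sig)
    return [i for i in range(n)
            if sig[i] >= min_height
            and sig[i] >= sig[(i - 1) % n]
            and sig[i] > sig[(i + 1) % n]]

def centre_from_corners(contour, peaks):
    """Centre of the H: centroid of the corner points gamma(i*) located
    at the signature maxima i*."""
    xs = [contour[i][0] for i in peaks]
    ys = [contour[i][1] for i in peaks]
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```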

Technical Approach

The raw image obtained from the subscribed image stream is converted to the grayscale colourspace. A Gaussian blur and adaptive thresholding are applied to the image so as to accentuate the H-shaped marker, and the contours of this image are then generated. The contours are first passed through the circleDet function to obtain those that are circular. From these circular contours we check for concentric circles, as mentioned under the "Idea" section above. If the concentric circles are not detected or are no longer visible, we fall back to 'H' detection: the contours are analyzed with the pointToLineDistance function and the distances are stored in a vector. A smoothing operation is applied to the distance data and a corresponding signature is made. This signature is compared to an ideal H signature, allowing some degree of tolerance. The centre of the H is computed in the image frame, and this image coordinate is then transformed into the global frame.

Software Pipeline

The entire framework is built upon the robotics middleware ROS, and the detection part of the package relies on the OpenCV library. We use the inbuilt OpenCV functions for the pre-processing of the raw image obtained from the /usb_cam/image_raw topic; the preprocess.h header file contains the functions that perform these tasks. Contours are generated by the inbuilt findContours OpenCV function. We also impose a condition on the areas of the contours to minimize the processing and small erroneous detections.

The contours are passed to the circleDet function, which returns a cv::Scalar object of the form (x-centre, y-centre, radius). Using this and the approach mentioned under the "Idea" section, we obtain the circles. Of these circles we find the concentric ones by checking that the ratio of the radii lies in a certain range and that the centres are close. In our case we have allowed a 30-pixel error for the distance between the centres and 1.2 for the ratio of the radii.

If the concentric circles are not detected, we go for the 'H' detection. For this, the contours are passed to the pointToLineDistance function, which computes the distance data and stores it in a vector. The smooth function is used to obtain only the peak values, and thus forms a signature vector for each of these contours; this function also takes care of noise and outlier values. Once the signature is formed, the vector values are shifted cyclically so that the largest gap comes first. This new signature is then compared with the ideal 'H' signature: the lengths of the segments of the 'H' are compared with the corresponding lengths of the ideal 'H', allowing some amount of tolerance (a percentage of the total length of the contour). Once the signature is confirmed to be the 'H' marker, the centre of the contour is found and is used for pose estimation. The detected H is then published on the topic /detected_helipad.
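The cyclic shift ("largest gap comes first") and the tolerance-based comparison against the ideal 'H' signature can be sketched as below. These helpers are illustrative, not the package's own code, and the 5% tolerance is an assumed default:

```python
def rotate_to_largest_gap(peaks, contour_len):
    """Cyclically shift the sorted peak positions so that the sequence
    starts immediately after the largest gap between consecutive peaks."""
    n = len(peaks)
    # gap i is the arc length from peaks[i] to the next peak (wrapping)
    gaps = [(peaks[(i + 1) % n] - peaks[i]) % contour_len for i in range(n)]
    start = (gaps.index(max(gaps)) + 1) % n
    return peaks[start:] + peaks[:start]

def matches_ideal(segments, ideal, contour_len, tol=0.05):
    """Compare segment lengths against the ideal 'H' signature, allowing a
    tolerance expressed as a fraction of the total contour length."""
    if len(segments) != len(ideal):
        return False
    return all(abs(s - t) <= tol * contour_len
               for s, t in zip(segments, ideal))
```

Starting every signature just after its largest gap removes the arbitrary starting point of a closed contour, so two signatures of the same shape line up segment by segment.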

Position Estimation

The function findPose under the pose_estimation header provides the position estimate, which is then published to the /helipad_detection topic for further pose estimation in order to approach the helipad. The main objective is to obtain global coordinates for the centre of the helipad, which is initially in the camera frame. To do this, we first obtain the required transformation matrices. The camera matrix is obtained from the camera parameters. Similarly, another matrix, quadToCam, is constructed using the translation and rotation data from the parameters. A scaleUp diagonal matrix is made using the z value of the position from odometry. Finally, a rotation matrix (quadToGLob) is made from the odometry data. With all this, we convert the centre coordinates to the quad frame and then to the global frame.
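The chain of transforms can be sketched with NumPy. Every matrix below is an illustrative assumption (the package builds them from camera parameters and odometry); the direction conventions of quadToCam in the actual code may differ from what is shown here:

```python
import numpy as np

def pixel_to_global(u, v, K, cam_to_quad_R, cam_to_quad_t,
                    quad_to_glob_R, quad_pos, z):
    """Back-project the helipad centre (u, v) through the inverse camera
    matrix, scale the ray by the altitude z from odometry (the scaleUp
    step), move into the quad frame, then rotate/translate into the
    global frame. All matrix values are placeholders."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # normalized camera ray
    p_cam = z * ray                                   # scale by altitude
    p_quad = cam_to_quad_R @ p_cam + cam_to_quad_t    # camera -> quad frame
    return quad_to_glob_R @ p_quad + quad_pos         # quad -> global frame
```

With identity rotations, a pixel at the principal point maps straight down the optical axis, ending up z metres below the quad's global position offset, which is a quick sanity check on the transform chain.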

References

https://link.springer.com/article/10.1007/s10846-018-0933-2