3D Object Detection for the TIAGo Robot Using a Faster R-CNN Network

Introduction and Motivation

Figure 1: Products in the retail store. The products are labeled from 0 to 4 from left to right.

Design: 2D object detection

The model

Figure 2: Faster R-CNN, the unified object detection network. Source: [7]
Figure 3: Anchor boxes at (320, 320). Source: [8]
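Figure 3 shows the anchors that Faster R-CNN's Region Proposal Network places at image location (320, 320). As a minimal sketch, assuming the paper's default three scales and three aspect ratios (nine anchors per location; the exact values used in this project may differ):

import numpy as np

def anchors_at(cx, cy, scales=(128, 256, 512), ratios=(0.5, 1.0, 2.0)):
    # One anchor per (scale, aspect ratio) pair, centred on (cx, cy).
    boxes = []
    for s in scales:
        for r in ratios:
            w, h = s * np.sqrt(r), s / np.sqrt(r)  # area stays s*s, w/h = r
            boxes.append([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2])
    return np.array(boxes)

print(anchors_at(320, 320))  # the nine [x1, y1, x2, y2] boxes of Figure 3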

Prepare the data

Figure 4: Annotation process of the items
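The five products from Figure 1 are annotated with 2D bounding boxes (Figure 4). As an illustration, assuming a COCO-style annotation format (the field values below are made up, not taken from the actual dataset), one labeled box looks like:

annotation = {
    "image_id": 12,
    "category_id": 3,                     # product classes 0-4, as in Figure 1
    "bbox": [212.0, 148.0, 64.0, 180.0],  # [x, y, width, height] in pixels
    "area": 64.0 * 180.0,
    "iscrowd": 0,
}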

Training the model
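The loss and metric names in Figures 5 and 6 (cls/bbox loss, bbox_mAP) follow MMDetection's logging conventions, so the training run was presumably launched through that toolbox. A minimal sketch under that assumption (the config choice, dataset paths, and work directory are placeholders, not the project's actual settings):

from mmcv import Config
from mmdet.apis import train_detector
from mmdet.datasets import build_dataset
from mmdet.models import build_detector

cfg = Config.fromfile('configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py')
cfg.data.train.ann_file = 'annotations/train.json'  # placeholder path
cfg.model.roi_head.bbox_head.num_classes = 5        # the five retail products
cfg.work_dir = './work_dirs/retail_faster_rcnn'     # placeholder

datasets = [build_dataset(cfg.data.train)]
model = build_detector(cfg.model)
train_detector(model, datasets, cfg, distributed=False, validate=True)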

Results

Figure 5: Classification (cls) and bounding-box regression (bbox) loss plot
Figure 6: bbox_mAP plotted per epoch

Deploy the network in ROS

import numpy as np
from vision_msgs.msg import Detection2D, ObjectHypothesisWithPose

def generate_obj(self, result, id, msg):
    # Wrap one [x1, y1, x2, y2, score] detection in a Detection2D message.
    obj = Detection2D()
    obj.header = msg.header
    obj.source_img = msg
    result = result[0]
    obj.bbox.center.x = (result[0] + result[2]) / 2
    obj.bbox.center.y = (result[1] + result[3]) / 2
    obj.bbox.size_x = result[2] - result[0]
    obj.bbox.size_y = result[3] - result[1]
    obj_hypothesis = ObjectHypothesisWithPose()
    obj_hypothesis.id = str(id)
    obj_hypothesis.score = result[4]
    obj.results.append(obj_hypothesis)
    return obj

# In the image callback, the raw sensor_msgs/Image buffer is read as a NumPy array:
im = np.frombuffer(msg.data, dtype=np.uint8).reshape(msg.height, msg.width, -1)
Figure 7: Faster R-CNN deployed in ROS
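To show how generate_obj() fits into a running node, here is a minimal subscriber sketch. The camera and output topic names, the confidence threshold, and the use of MMDetection's inference_detector() are assumptions, not necessarily the exact setup used here:

import numpy as np
import rospy
from mmdet.apis import inference_detector
from sensor_msgs.msg import Image
from vision_msgs.msg import Detection2DArray

class DetectorNode:
    generate_obj = generate_obj  # reuse the helper above as a method

    def __init__(self, model):
        self.model = model  # a trained Faster R-CNN detector
        self.pub = rospy.Publisher('/detections', Detection2DArray, queue_size=1)
        rospy.Subscriber('/xtion/rgb/image_raw', Image, self.callback, queue_size=1)

    def callback(self, msg):
        # Read the raw image buffer, run the detector, publish the 2D boxes.
        im = np.frombuffer(msg.data, dtype=np.uint8).reshape(msg.height, msg.width, -1)
        out = Detection2DArray(header=msg.header)
        for class_id, dets in enumerate(inference_detector(self.model, im)):
            for det in dets:                 # det = [x1, y1, x2, y2, score]
                if det[4] > 0.5:             # illustrative confidence threshold
                    out.detections.append(self.generate_obj([det], class_id, msg))
        self.pub.publish(out)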

Design: 3D object detection

Figure 8: F-PointNet strategy. Source: [9]

Filter out the object's point cloud

sensor_msgs::PointCloud2 pcs_filtered;
int POINT_STEP = latestPointClouds.point_step; // bytes per point (32 for this camera)
// Bounding-box extents in the organized cloud: u offsets in bytes, v offsets in rows.
size_t tc_u_min = int(tc_u - tc_size_u / 2) * POINT_STEP;
size_t tc_v_min = int(tc_v - tc_size_v / 2);
size_t tc_u_max = tc_u_min + tc_size_u * POINT_STEP;
size_t tc_v_max = tc_v_min + tc_size_v;
// The cropped cloud keeps the original frame, fields and point layout.
pcs_filtered.header.frame_id = latestPointClouds.header.frame_id;
pcs_filtered.header.stamp = ros::Time::now();
pcs_filtered.height = tc_size_v;
pcs_filtered.width = tc_size_u;
pcs_filtered.fields = latestPointClouds.fields;
pcs_filtered.is_bigendian = latestPointClouds.is_bigendian;
pcs_filtered.point_step = POINT_STEP;
pcs_filtered.row_step = POINT_STEP * tc_size_u;
pcs_filtered.is_dense = latestPointClouds.is_dense;
int raw_row_step = latestPointClouds.row_step;
decltype(latestPointClouds.data) filtered_data = {};
// Copy the raw bytes of every point inside the box, row by row.
for (size_t row = tc_v_min; row < tc_v_max; row++)
{
    for (size_t col = tc_u_min; col < tc_u_max; col++)
    {
        filtered_data.push_back(latestPointClouds.data[row * raw_row_step + col]);
    }
}
pcs_filtered.data = filtered_data;

Calculate the 3D pose

// Fit a cylinder model to the cropped cloud with RANSAC, using surface normals.
pcl::SACSegmentationFromNormals<PointT, pcl::Normal> seg;
pcl::PointIndices::Ptr inliers(new pcl::PointIndices);
pcl::ModelCoefficients::Ptr cylinderCoefficients(new pcl::ModelCoefficients);
seg.setOptimizeCoefficients(true);
seg.setModelType(pcl::SACMODEL_CYLINDER);
seg.setMethodType(pcl::SAC_RANSAC);
seg.setNormalDistanceWeight(0.1);   // weight of the normal term vs. point distance
seg.setMaxIterations(10000);
seg.setDistanceThreshold(0.05);     // max distance (m) to the model for inliers
seg.setRadiusLimits(minRadius, maxRadius);
seg.setInputCloud(inputCloud);
seg.setInputNormals(normals);
seg.segment(*inliers, *cylinderCoefficients);
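For SACMODEL_CYLINDER, the fitted coefficients hold a point on the cylinder axis (values 0-2), the axis direction (values 3-5), and the radius (value 6). The object's 3D pose then follows by taking the axis point as position and rotating the frame's z-axis onto the axis direction. A sketch of that conversion (the helper name and the (w, x, y, z) quaternion order are my own choices, not the article's):

import numpy as np

def cylinder_pose(coeffs):
    p = np.asarray(coeffs[0:3])           # point on the cylinder axis -> position
    d = np.asarray(coeffs[3:6])
    d = d / np.linalg.norm(d)             # unit axis direction
    z = np.array([0.0, 0.0, 1.0])
    axis = np.cross(z, d)                 # rotation axis taking z onto d
    s = np.linalg.norm(axis)
    if s < 1e-9:                          # d (anti)parallel to z: no unique axis
        quat = np.array([1.0, 0, 0, 0]) if d[2] > 0 else np.array([0.0, 1, 0, 0])
    else:
        angle = np.arccos(np.clip(np.dot(z, d), -1.0, 1.0))
        quat = np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis / s))
    return p, quat                        # position, orientation (w, x, y, z)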

Communication overview

Figure 9: ROS nodes

Results

Animation: Object Detection in ROS
Animation: Point cloud of the Sprite can

Conclusion

Sources:
