3D is here: Point Cloud Library (PCL)
Radu Bogdan Rusu and Steve Cousins
Willow Garage
68 Willow Rd., Menlo Park, CA 94025, USA
{rusu,cousins}@willowgarage.com
Abstract— With the advent of new, low-cost 3D sensing
hardware such as the Kinect, and continued efforts in advanced
point cloud processing, 3D perception is becoming increasingly
important in robotics, as well as in other fields.
In this paper we present one of our most recent initiatives in
the areas of point cloud perception: PCL (Point Cloud Library
– http://pointclouds.org). PCL presents an advanced
and extensive approach to the subject of 3D perception, and
it is meant to provide support for all the common 3D building
blocks that applications need. The library contains state-of-the-art
algorithms for filtering, feature estimation, surface
reconstruction, registration, model fitting, and segmentation.
PCL is supported by an international community of robotics
and perception researchers. We provide a brief walkthrough of
PCL including its algorithmic capabilities and implementation
strategies.
I. INTRODUCTION
For robots to work in unstructured environments, they need
to be able to perceive the world. Over the past 20 years,
we’ve come a long way, from simple range sensors based
on sonar or IR providing a few bytes of information about
the world, to ubiquitous cameras and laser scanners. In the
past few years, sensors like the Velodyne spinning LIDAR
used in the DARPA Urban Challenge and the tilting laser
scanner used on the PR2 have given us high-quality 3D
representations of the world: point clouds. Unfortunately,
these systems are expensive, costing thousands or tens of
thousands of dollars, and therefore out of the reach of many
robotics projects.
Very recently, however, 3D sensors have become available
that change the game. For example, the Kinect sensor for
the Microsoft XBox 360 game system, based on underlying
technology from PrimeSense, can be purchased for under
$150, and provides real time point clouds as well as 2D
images. As a result, we can expect that most robots in the
future will be able to "see" the world in 3D. All that’s
needed is a mechanism for handling point clouds efficiently,
and that’s where the open source Point Cloud Library, PCL,
comes in. Figure 1 presents the logo of the project.
PCL is a comprehensive, free, BSD-licensed library for
n-D point cloud and 3D geometry processing. PCL is
fully integrated with ROS, the Robot Operating System (see
http://ros.org), and has already been used in a variety
of projects in the robotics community.
II. ARCHITECTURE AND IMPLEMENTATION
Fig. 1. The Point Cloud Library logo.

PCL is a fully templated, modern C++ library for 3D
point cloud processing. Written with efficiency and performance
on modern CPUs in mind, the underlying data
structures in PCL make heavy use of SSE optimizations.
Most mathematical operations are implemented using
Eigen, an open-source template library for linear
algebra [1]. In addition, PCL provides support for OpenMP
(see http://openmp.org) and Intel Threading Building
Blocks (TBB) library [2] for multi-core parallelization. The
backbone for fast k-nearest neighbor search operations is
provided by FLANN (Fast Library for Approximate Nearest
Neighbors) [3]. All the modules and algorithms in PCL pass
data around using Boost shared pointers (see Figure 2), thus
avoiding the need to re-copy data that is already present
in the system. As of version 0.6, PCL has been ported to
Windows, MacOS, and Linux, and an Android port is in the
works.
From an algorithmic perspective, PCL is meant to incorporate
a multitude of 3D processing algorithms that operate
on point cloud data, including filtering, feature estimation,
surface reconstruction, model fitting, segmentation, registration,
etc. Each set of algorithms is defined via base classes
that attempt to integrate all the common functionality used
throughout the entire pipeline, thus keeping the implementations
of the actual algorithms compact and clean. The basic
interface for such a processing pipeline in PCL is:
• create the processing object (e.g., filter, feature estimator, segmentation);
• use setInputCloud to pass the input point cloud dataset
to the processing module;
• set some parameters;
• call compute (or filter, segment, etc.) to get the output.
The pseudo-code sequence presented in Figure 2 shows
a standard feature estimation process in two steps: a
NormalEstimation object is first created and passed an input
dataset, and its results, together with the original input, are
then passed to an FPFH [4] estimation object.
To further simplify development, PCL is split into a series
of smaller code libraries that can be compiled separately:
• libpcl filters: implements data filters such as downsam-