Chapter 1
Introduction
In this tutorial, we discuss the topic of position and orientation estimation using inertial sensors. We
consider two separate problem formulations. The first is estimation of orientation only, while the other is
the combined estimation of both position and orientation. The latter is sometimes called pose estimation.
We start by providing a brief background and motivation in §1.1, where we explain what inertial sensors are
and give a few concrete examples of relevant application areas for pose estimation using inertial sensors.
In §1.2, we subsequently discuss how inertial sensors can be used to provide position and orientation
information. Finally, in §1.3 we provide an overview of the contents of this tutorial as well as an outline
of subsequent chapters.
1.1 Background and motivation
The term inertial sensor is used to denote the combination of a three-axis accelerometer and a three-
axis gyroscope. Devices containing these sensors are commonly referred to as inertial measurement units
(IMUs). Inertial sensors are nowadays also present in most modern smartphones and in devices such as
Wii controllers and virtual reality (VR) headsets, as shown in Figure 1.1.
A gyroscope measures the sensor’s angular velocity, i.e. the rate of change of the sensor’s orientation.
An accelerometer measures the external specific force acting on the sensor. The specific force consists of
both the sensor’s acceleration and the earth’s gravity. Nowadays, many gyroscopes and accelerometers
are based on microelectromechanical system (MEMS) technology. MEMS components are small, lightweight
and inexpensive, and they have low power consumption and short start-up times. Their accuracy has significantly
increased over the years.
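To make these measurements concrete, a simplified, noise- and bias-free model of the two sensors can be written as follows; the notation here is purely illustrative, and the precise measurement models are introduced later in this tutorial:
\begin{align*}
	y_\omega &= \omega, \\
	y_a &= \mathsf{R} \left( a - g \right),
\end{align*}
where $y_\omega$ and $y_a$ denote the gyroscope and accelerometer measurements, $\omega$ is the angular velocity of the sensor, $a$ is its acceleration, $g$ is the gravity vector, and $\mathsf{R}$ rotates quantities from the frame in which $a$ and $g$ are expressed into the sensor frame. Note that for a sensor at rest, $y_a = -\mathsf{R} g$, i.e. the accelerometer measures the reaction to gravity rather than zero.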
There is a large and ever-growing number of application areas for inertial sensors, see e.g. [7, 59,
109, 156]. Generally speaking, inertial sensors can be used to provide information about the pose of any
object that they are rigidly attached to. It is also possible to combine multiple inertial sensors to obtain
information about the pose of separate connected objects. Hence, inertial sensors can be used to track
human motion as illustrated in Figure 1.2. This is often referred to as motion capture. The application
areas are as diverse as robotics, biomechanical analysis and motion capture for the movie and gaming
industries. In fact, the use of inertial sensors for pose estimation is now common practice in, for instance,
robotics and human motion tracking, see e.g. [86, 54, 112]. A recent survey [1] shows that 28% of the
contributions to the IEEE International Conference on Indoor Positioning and Indoor Navigation (IPIN)
make use of inertial sensors. Inertial sensors are also frequently used for pose estimation of cars, boats,
trains and aerial vehicles, see e.g. [139, 23]. Examples of this are shown in Figure 1.3.
There exists a large amount of literature on the use of inertial sensors for position and orientation
estimation. The reason for this is not only the large number of application areas, but also that the estimation
problems are nonlinear and that different parametrizations of the orientation need to be considered [47, 77],
each with its own specific properties. Interestingly, approximate and relatively simple position and orientation
estimation algorithms work quite well in practice. However,
careful modeling and a careful choice of algorithms do improve the accuracy of the estimates.
In this tutorial we focus on the signal processing aspects of position and orientation estimation using
inertial sensors, discussing different modeling choices and a number of important algorithms. These
algorithms will provide the reader with a starting point to implement their own position and orientation
estimation algorithms.