Micro-IMU-Based Motion Tracking System for Virtual Training
Yang Zhang¹, Yunfeng Fei², Lin Xu¹*, and Guangyi Sun¹†
1. Institute of Robotics and Automatic Information System, Nankai University, Tianjin Key Laboratory of Intelligent Robotics, Tianjin, China
*E-mail: xul@nankai.edu.cn; †E-mail: guangyi@nankai.edu.cn
2. Engineering Design and Research Institute of the Second Artillery Corps, Beijing, China
Abstract: This paper presents the development of a low-cost wireless real-time inertial body tracking system for virtual training. The system is designed to provide highly accurate human body motion capture and interactive three-dimensional (3-D) avatar steering by combining low-cost MEMS inertial measurement units (IMUs), a wireless body sensor network (BSN), and the Unity 3D virtual reality game engine. First, several wearable MEMS IMU sensors are placed on the user's torso and limbs according to the human skeletal structure, and each sensor performs 9-degrees-of-freedom (DOF) tracking at a high update rate. Second, a Zigbee-based BSN is designed to support data transmission from up to 20 MEMS IMU sensor nodes at a 50 Hz sampling frequency. All collected sensor data are loaded into a Matlab-based PC program via a serial port. In order to accurately estimate the local orientation of each IMU sensor, an optimized gradient descent algorithm is implemented. The algorithm uses a quaternion representation, which allows accelerometer and magnetometer data to be fused to compute the gyroscope measurement error as a quaternion derivative. Finally, the orientation estimates produced by the fusion algorithm are imported into a virtual environment consisting of a 3-D virtual skeletal representation and a virtual scene for specific training tasks. Experimental results indicate that the system achieves < 1° static RMS error and < 2° dynamic RMS error. The system further extends the usability of low-cost body tracking solutions to virtual training in virtual environments.
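For concreteness, the gradient descent orientation update summarized above can be written in a general form. The following is a sketch consistent with Madgwick's widely used gradient descent formulation, which the description above matches; the symbols used here (filter gain β, objective function f) are illustrative and not necessarily the paper's own notation:

\dot{\hat{q}}_{\omega,t} = \tfrac{1}{2}\, \hat{q}_{t-1} \otimes (0,\ \omega_x,\ \omega_y,\ \omega_z)

\hat{q}_t = \hat{q}_{t-1} + \Big( \dot{\hat{q}}_{\omega,t} - \beta\, \frac{\nabla f}{\lVert \nabla f \rVert} \Big)\, \Delta t

followed by renormalization of \hat{q}_t. Here ω is the gyroscope measurement, Δt is the sampling period (20 ms at the stated 50 Hz rate), and ∇f is the gradient of the error between the current orientation estimate and the gravity and magnetic-field directions measured by the accelerometer and magnetometer; the second term is the gyroscope measurement error expressed as a quaternion derivative, as described above.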
Key Words: micro inertial measurement unit (IMU); motion capture; virtual training; gradient descent algorithm;
wireless sensor network
I. INTRODUCTION
Realistic virtual training systems have attracted great attention for decades in the field of military simulation. Most existing systems adopt immersive technology to simulate the real world as a virtual scene that a trainee can walk through and perform certain tasks in. One of the most promising technologies in such a system is human motion tracking, i.e., motion capture (Mocap), which can obtain accurate measurements of the trainee's body motion and transfer them onto an avatar built in the virtual scene. Existing motion tracking systems for virtual training are typically based on optical Mocap solutions, which feature multiple high-speed, high-resolution optical cameras, e.g., Vicon [1] and OptiTrack [2], and provide highly accurate in-room tracking. However, optical Mocap systems usually cost hundreds of thousands of dollars and require a very complicated calibration process, making them infeasible for non-professional and inexperienced users. Because of such a high investment and complex operation, it has so far been extremely challenging to perform virtual training outside of a specialized and expensive laboratory environment.
An alternative low-cost and non-obtrusive optical Mocap solution for simple virtual training is to use one or more Microsoft Kinect RGB-D cameras [3, 4] as the tracking sensor. The Microsoft Kinect for Windows Software Development Kit (SDK) takes the data from the sensor and provides a skeletal representation of recognized users. However, all Kinect-based Mocap solutions suffer from a critical restriction: the user must face the sensor, which makes the Kinect incapable of capturing complex motion, e.g., full 360° rotation. Although adding more Kinect sensors or using more advanced fusion algorithms can alleviate the forward-facing restriction, many essential problems, e.g., occlusion, gesture recognition errors, and limited sensing range, cannot be completely eliminated.
Recently, low-cost miniaturized MEMS inertial sensors, e.g., accelerometers, gyroscopes, and magnetometers, have been successfully utilized in real-time human motion tracking [5, 6, 7, 8]. Owing to their acceptable accuracy, relatively low cost, light weight, compact size, and ease of use, these wearable MEMS inertial sensors have rapidly become one of the most promising solutions for human motion tracking in film making, video game production, everyday rehabilitation, and sports analysis. Meanwhile, wireless body sensor network (BSN) techniques provide a much easier way to place these MEMS inertial sensors on the human body than previous wired configurations. However, a fully wireless inertial Mocap system still faces many challenges, including gyroscope drift and error accumulation, a limit on the number of sensors imposed by the available bandwidth, poor transmission performance at high sampling rates, and multi-user interaction. More importantly, inertial motion tracking has rarely been used for military training.
In order to overcome these challenges, this paper develops a multi-user wireless motion tracking system for virtual training, using customized inertial tracking nodes, a wideband Zigbee BSN, and a Unity 3D-based virtual environment, as sketched below. Each customized inertial tracking node is built around the MPU9150, a 9-axis MEMS IMU (a 3-axis accelerometer, a 3-axis gyroscope, and a 3-axis magnetometer). The MCU
This work is supported by the National Natural Science Foundation of China (61174019, 51405245) and by the National High-Tech R&D Program of China (863 Program) (2013AA041102).
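To make the data path of the system above concrete, the following is a minimal PC-side sketch of unpacking one 9-axis sample per node from the serial stream. It is written in Python rather than the authors' Matlab program, and the frame layout, sync byte, field order, and baud rate are illustrative assumptions; the paper does not specify its wire format at this point.

# Hypothetical PC-side reader for the Zigbee-to-serial bridge.
# Frame layout is an illustrative assumption (not from the paper):
#   1 sync byte (0xAA) | 1 node-id byte | 9 x int16 little-endian
#   (accel x/y/z, gyro x/y/z, mag x/y/z) = 20 bytes per frame.
import struct

import serial  # pyserial

FRAME_FMT = "<B9h"                      # node id + 9 signed 16-bit readings
FRAME_LEN = struct.calcsize(FRAME_FMT)  # 19 bytes after the sync byte
SYNC = 0xAA

def read_frames(port="COM3", baud=230400):
    """Yield (node_id, accel, gyro, mag) tuples from the serial stream."""
    with serial.Serial(port, baud, timeout=1) as link:
        while True:
            # Resynchronize on the sync byte so one dropped byte
            # does not corrupt every following frame.
            if link.read(1) != bytes([SYNC]):
                continue
            payload = link.read(FRAME_LEN)
            if len(payload) != FRAME_LEN:
                continue  # timeout mid-frame; try again
            node_id, *vals = struct.unpack(FRAME_FMT, payload)
            accel, gyro, mag = vals[0:3], vals[3:6], vals[6:9]
            yield node_id, accel, gyro, mag

if __name__ == "__main__":
    for node_id, accel, gyro, mag in read_frames():
        print(node_id, accel, gyro, mag)

Note on the assumed baud rate: at 20 nodes × 50 Hz × 20 bytes per frame, the stream is about 20 kB/s, which already exceeds what a common 115200 baud UART (roughly 11.5 kB/s with start/stop bits) can carry, hence the faster rate assumed in this sketch.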