node, q_near, in the tree is located, and if q_rand and the shortest path from q_rand to q_near are in C_free, then q_rand is added to the tree. The tree growth is stopped when a node is found near q_goal. To speed up convergence, the search is usually biased to q_goal with a small probability.
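As an illustration, a minimal Python sketch of this goal-biased growth step is given below; the helpers sample_free, nearest, and collision_free, as well as the 5% goal bias, are assumptions made for illustration rather than part of the original formulation.

```python
import random

def rrt_grow_step(tree, q_goal, sample_free, nearest, collision_free, goal_bias=0.05):
    """One iteration of basic RRT growth with a small goal bias.

    tree           : list of configurations already in the tree
    sample_free    : callable() -> uniform random configuration
    nearest        : callable(tree, q) -> node of the tree closest to q
    collision_free : callable(q_a, q_b) -> True if the segment q_a-q_b lies in C_free
    """
    # With a small probability, bias the search towards the goal.
    q_rand = q_goal if random.random() < goal_bias else sample_free()

    # Locate the nearest node q_near already in the tree.
    q_near = nearest(tree, q_rand)

    # Add q_rand only if the path from q_near to q_rand is collision free.
    if collision_free(q_near, q_rand):
        tree.append(q_rand)
        return q_rand
    return None
```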
In [7], two new features are added to RRTs. First, the EXTEND function is introduced, which, instead of trying to add q_rand directly to the tree, makes a motion towards q_rand and tests for collisions. Then a greedier approach is introduced, which repeats EXTEND until an obstacle is reached. This ensures that most of the time states are added to the tree, instead of just rejecting new random states. The second extension is the use of two trees, rooted at q_init and q_goal, which are grown towards each other. This significantly decreases the time needed to find a path.
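The following sketch shows how EXTEND and its greedy repetition could be written in Python; the step size epsilon, the straight-line steer interpolation, and the name connect for the greedy loop are illustrative assumptions, not the exact routines of [7].

```python
import math

def steer(q_from, q_to, epsilon):
    """Move from q_from towards q_to by at most epsilon (straight-line interpolation)."""
    d = math.dist(q_from, q_to)
    if d <= epsilon:
        return q_to
    t = epsilon / d
    return tuple(a + t * (b - a) for a, b in zip(q_from, q_to))

def extend(tree, q_rand, nearest, collision_free, epsilon):
    """EXTEND: take one bounded step from the nearest node towards q_rand."""
    q_near = nearest(tree, q_rand)
    q_new = steer(q_near, q_rand, epsilon)
    if collision_free(q_near, q_new):
        tree.append(q_new)
        return "Reached" if q_new == q_rand else "Advanced"
    return "Trapped"  # an obstacle blocks the motion

def connect(tree, q_rand, nearest, collision_free, epsilon):
    """Greedy variant: repeat EXTEND until q_rand is reached or an obstacle is hit."""
    status = "Advanced"
    while status == "Advanced":
        status = extend(tree, q_rand, nearest, collision_free, epsilon)
    return status
```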
B. ERRT
The execution extended RRT presented in [3] introduces two RRT extensions to build an on-line planner: the waypoint cache and the adaptive cost penalty search, which improve re-planning efficiency and the quality of the generated paths. The waypoint cache is implemented by keeping a constant-size array of states; whenever a plan is found, all the states in the plan are placed in the cache with random replacement. Then, when the tree is no longer valid, a new tree must be grown, and there are three possibilities for choosing a new target state: with probability P[goal], the goal is chosen as the target; with probability P[waypoint], a random waypoint is chosen; and with the remaining probability, a uniform random state is chosen as before. The values used in [3] are P[goal] = 0.1 and P[waypoint] = 0.6.
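A possible Python rendering of this target-selection rule and of the random-replacement cache is sketched below; the cache size of 50 is an arbitrary assumption, and sample_free stands for the planner's uniform sampler.

```python
import random

def choose_target(q_goal, waypoint_cache, sample_free, p_goal=0.1, p_waypoint=0.6):
    """ERRT-style target selection: the goal, a cached waypoint, or a uniform state."""
    r = random.random()
    if r < p_goal:
        return q_goal
    if r < p_goal + p_waypoint and waypoint_cache:
        return random.choice(waypoint_cache)
    return sample_free()

def cache_plan(waypoint_cache, plan, cache_size=50):
    """Store the states of a successful plan in the cache with random replacement."""
    for q in plan:
        if len(waypoint_cache) < cache_size:
            waypoint_cache.append(q)
        else:
            waypoint_cache[random.randrange(cache_size)] = q
```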
In the other extension, the adaptive cost penalty search, the planner dynamically modifies a parameter β to help it find shorter paths: a value of 1 for β will always extend from the root node, while a value of 0 is equivalent to the original algorithm. Unfortunately, the solution presented in [3] lacks implementation details and experimental results for this extension.
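Since [3] gives no implementation details, the following is only one plausible reading of this extension, in which the node to extend minimizes a blend of distance to the target and path cost from the root, so that β = 0 recovers the plain nearest neighbour and β = 1 always selects the root.

```python
def select_extension_node(tree, q_target, dist, cost_from_root, beta):
    """Pick the node to extend under an assumed adaptive-cost-penalty metric.

    dist           : callable(q_a, q_b) -> distance between two configurations
    cost_from_root : callable(q) -> accumulated path cost from the root to q
    beta           : 0 -> pure nearest neighbour; 1 -> the root (cost 0) always wins
    """
    return min(tree, key=lambda q: (1.0 - beta) * dist(q, q_target)
                                   + beta * cost_from_root(q))
```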
C. Dynamic RRT
The Dynamic Rapidly-exploring Random Tree (DRRT) described in [4] is a probabilistic analog to the widely used D* family of algorithms. It works by growing a tree from q_goal to q_init. The principal advantage is that the root of the tree does not have to be changed during the lifetime of planning and execution. Also, in some problem classes the robot has limited-range sensors, so moving (or newly appearing) obstacles are typically near the robot and not near the goal. In general, this strategy tends to trim smaller branches that lie farther away from the root. When new information about the configuration space is received, the algorithm removes the newly invalid branches of the tree and grows the remaining tree, focusing, with a certain probability (empirically tuned to 0.4 in [4]), on the vicinity of the recently trimmed branches by using a structure similar to the waypoint cache of ERRT. In the reported experimental results, DRRT vastly outperforms ERRT.
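The trimming and biased regrowth can be sketched as follows; the tree representation (a parent map over hashable configurations, listed root first) and the helpers edge_valid, sample_free, and sample_near are assumptions made for illustration.

```python
import random

def trim_tree(nodes, parent, edge_valid):
    """DRRT-style trimming: drop nodes whose incoming edge became invalid,
    together with every descendant detached from the root as a result.

    nodes      : list of configurations with the root first, parents before children
    parent     : dict mapping each non-root node to its parent
    edge_valid : callable(q_parent, q_child) -> True if the edge is still in C_free
    """
    removed, remaining = set(), []
    for q in nodes:
        p = parent.get(q)
        if p is None:                               # the root is always kept
            remaining.append(q)
        elif p in removed or not edge_valid(p, q):  # invalid edge or detached subtree
            removed.add(q)
        else:
            remaining.append(q)
    return remaining, list(removed)

def drrt_sample(sample_free, sample_near, trimmed, p_trimmed=0.4):
    """Bias regrowth towards the vicinity of recently trimmed branches."""
    if trimmed and random.random() < p_trimmed:
        return sample_near(random.choice(trimmed))
    return sample_free()
```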
D. MP-RRT
The Multipartite RRT presented in [14] is another RRT variant which supports planning in unknown or dynamic environments. The MP-RRT maintains a forest F of disconnected sub-trees which lie in C_free but which are not connected to the root node q_root of T, the main tree. At the start of a given planning iteration, any nodes of T and F which are no longer valid are deleted, and any disconnected sub-trees which are created as a result are placed into F. With given probabilities, the algorithm tries to connect T to a new random state, to the goal state, or to the root of a tree in F. In [14], a simple greedy smoothing heuristic is used that tries to shorten paths by skipping intermediate nodes. The MP-RRT is compared to an iterated RRT, ERRT, and DRRT on 2D, 3D, and 4D problems, with and without smoothing. For most of the experiments, MP-RRT modestly outperforms the other algorithms, but in the 4D case with smoothing the performance gap in favor of MP-RRT is much larger. The authors attribute this to MP-RRT being able to construct much more robust plans in the face of dynamic obstacle motion. Another algorithm that uses the concept of forests is the Reconfigurable Random Forests (RRF) presented in [10], but without the success of MP-RRT.
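One plausible form of such a skip-ahead smoothing heuristic is sketched below (the exact heuristic of [14] may differ); collision_free checks a straight segment between two path points.

```python
def smooth_path(path, collision_free):
    """Greedy smoothing: from each point, jump to the farthest later point
    that can still be reached by a collision-free straight segment."""
    if not path:
        return path
    smoothed = [path[0]]
    i = 0
    while i < len(path) - 1:
        j = len(path) - 1
        while j > i + 1 and not collision_free(path[i], path[j]):
            j -= 1                  # fall back to a closer intermediate point
        smoothed.append(path[j])
        i = j
    return smoothed
```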
III. A MULTI-STAGE PROBABILISTIC ALGORITHM
In highly dynamic environments, with many (or a few but fast) relatively small moving obstacles, regrowing trees are pruned too fast, cutting away important parts of the trees before they can be replaced. This dramatically reduces the performance of the algorithms, making them unsuitable for this class of problems. We believe that better performance could be obtained by slightly modifying an RRT solution, using simple obstacle-avoidance operations on the newly colliding points of the path through informed local search. Then, once the path has reached the feasibility condition, it could be greedily optimized.
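As a rough sketch of this repair idea, assuming that each colliding path point is simply replaced by a randomly perturbed nearby point until it leaves the obstacles, one could write the following; in_collision, perturb, and the retry budget are illustrative placeholders, not the concrete operators of the proposed algorithm, and the subsequent greedy optimization could reuse a shortcut routine such as the one sketched for MP-RRT above.

```python
def repair_path(path, in_collision, perturb, max_tries=20):
    """Locally repair a path invalidated by moving obstacles: every colliding
    point is replaced by a candidate sampled near its original position.

    in_collision : callable(p) -> True if point p currently lies inside an obstacle
    perturb      : callable(p) -> a random candidate point near p
    """
    repaired = list(path)
    for i, p in enumerate(repaired):
        tries = 0
        while in_collision(repaired[i]) and tries < max_tries:
            repaired[i] = perturb(p)  # local move around the original point
            tries += 1
    return repaired
```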
A. Problem Formulation
At each time-step, the proposed problem can be defined as an optimization problem with satisfiability constraints. Therefore, given a path, our objective is to minimize an evaluation function (e.g., distance, time, or number of path points) subject to the C_free constraint. Formally, let the path ρ = p_1 p_2 . . . p_n be a sequence of points, where p_i ∈ R^n is an n-dimensional point (p_1 = q_init, p_n = q_goal), let O_t ∈ O be the set of obstacle positions at time t, and let eval : R^n × O → R be an evaluation function of the path depending on the obstacle positions. Then, our ideal objective is to obtain the optimum path ρ* that