Efficient Sparse Pose Adjustment for 2D Mapping


Pose graphs have become a popular representation for solving the simultaneous localization and mapping (SLAM) problem. A pose graph is a set of robot poses connected by nonlinear constraints obtained from observations of features common to nearby poses. Optimizing large pose graphs has been a bottleneck.
In this paper we describe in detail an efficient and compact optimization approach that operates on 2D pose graphs. Our algorithm can be coupled with arbitrary front-ends that handle different kinds of sensors. For clarity of presentation we briefly describe a front-end for laser data; however, the general concepts can be straightforwardly applied to different sensors.

To summarize the paper: we propose an efficient approach for optimizing 2D pose graphs that uses direct sparse Cholesky decomposition to solve the linear system. The linear system is computed in a memory-efficient way that minimizes cache misses and thus significantly improves the performance. We compare our method, in both accuracy and speed, to existing LM and non-LM approaches that are available, and show that SPA outperforms them. Open source implementations are available both in C++ and in matlab/octave.

Efficient direct (non-iterative) algorithms to solve sparse systems have become available [3], thus reviving a series of approaches for optimizing the graphs which had been discarded in the past.

III. SYSTEM FORMULATION

Popular approaches to solve the SLAM problem are the so-called "graph-based" or "network-based" methods. The idea is to represent the history of the robot measurements by a graph. Every node of the graph represents a sensor measurement and is labeled with the location at which the measurement was taken. An edge between two nodes encodes the spatial information arising from the alignment of the connected measurements and can be regarded as a spatial constraint between the two nodes.

In the context of graph-based SLAM, one typically considers two different problems. The first one is to identify the constraints based on sensor data. This so-called data association problem is typically hard due to potential ambiguities or symmetries in the environment. A solution to this problem is often referred to as the SLAM front-end, and it directly deals with the sensor data. The second problem is to correct the poses of the robot to obtain a consistent map of the environment given the constraints. This part of the approach is often referred to as the optimizer or the SLAM back-end. Its task is to seek a configuration of the nodes that maximizes the likelihood of the measurements encoded in the constraints.

An alternative view of this problem is given by the spring-mass model in physics. In this view, the nodes are regarded as masses and the constraints as springs connected to the masses. The minimal energy configuration of the springs and masses describes a solution to the mapping problem.

During its operation, a graph-based SLAM system interleaves the execution of the front-end and of the back-end, as shown in Figure 2. This is required because the front-end needs to operate on a partially optimized map to restrict the search for potential constraints. The more accurate the current estimate is, the more robust the constraints generated by the front-end will be, and the faster its operation. Accordingly, the performance of the optimization algorithm, measured in terms of accuracy of the estimate and execution time, has a major impact on the overall mapping system.

Fig. 2. Typical graph-based SLAM system: sensor data feeds the front-end (graph construction), which passes poses and constraints to the back-end (optimization); the optimized configuration (updated poses) is fed back to the front-end. The front-end and the back-end are executed in an interleaved way.

IV. SPARSE POSE ADJUSTMENT

To optimize a set of poses and constraints, we use the well-known Levenberg-Marquardt (LM) method as a framework, with particular implementations that make it efficient for the sparse systems encountered in 2D map building. In analogy to the Sparse Bundle Adjustment of computer vision, which is a similarly efficient implementation of LM for cameras and features, we call our system Sparse Pose Adjustment (SPA).

A. Error Formulation

The variables of the system are the set of global poses c of the robot, each parameterized by a translation and an angle: c_i = [t_i; θ_i] = [x_i, y_i, θ_i]^T. A constraint is a measurement of one node c_j from another node c_i's position. The measured offset between c_i and c_j, in c_i's frame, is z̄_ij, with precision matrix Λ_ij (the inverse of the covariance). For any actual poses of c_i and c_j, their offset can be calculated as

    h(c_i, c_j) ≡ [ R_i^T (t_j - t_i) ;  θ_j - θ_i ]                  (1)

Here R_i is the 2x2 rotation matrix of θ_i; h(c_i, c_j) is called the measurement equation.

The error function associated with a constraint, and the total error, are

    e_ij ≡ z̄_ij - h(c_i, c_j)
    χ²(c, z̄) ≡ Σ_ij e_ij^T Λ_ij e_ij                                 (2)

Note that the angle parameter in h(c_i, c_j) is not unique, since adding or subtracting 2π yields the same result. Angle differences are therefore always normalized to the interval (-π, π] when they occur.
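The error formulation above can be made concrete in a few lines (a minimal Python sketch for exposition, not the paper's C++ implementation; the example poses in the test are made up):

```python
import math

def normalize_angle(a):
    """Wrap an angle to the interval (-pi, pi]."""
    while a <= -math.pi:
        a += 2.0 * math.pi
    while a > math.pi:
        a -= 2.0 * math.pi
    return a

def h(ci, cj):
    """Measurement equation (1): pose offset of cj expressed in ci's frame."""
    xi, yi, thi = ci
    xj, yj, thj = cj
    c, s = math.cos(thi), math.sin(thi)
    dx, dy = xj - xi, yj - yi
    # Ri^T (tj - ti) for the translation part, theta_j - theta_i for the angle
    return (c * dx + s * dy, -s * dx + c * dy, normalize_angle(thj - thi))

def error(zbar_ij, ci, cj):
    """Constraint error e_ij = zbar_ij - h(ci, cj), angle term normalized."""
    hx, hy, hth = h(ci, cj)
    return (zbar_ij[0] - hx, zbar_ij[1] - hy, normalize_angle(zbar_ij[2] - hth))

def chi2(constraints, poses):
    """Total error (2): sum over constraints of e^T Lambda e."""
    total = 0.0
    for i, j, zbar, Lam in constraints:
        e = error(zbar, poses[i], poses[j])
        total += sum(e[r] * Lam[r][c] * e[c] for r in range(3) for c in range(3))
    return total
```

A constraint whose measured offset equals the actual offset contributes zero error, so χ² vanishes for a graph whose poses exactly satisfy all constraints.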
B. Linear System

The optimal placement of c is found by minimizing the total error in Equation (2). A standard method for solving this problem is Levenberg-Marquardt (LM), iterating a linearized solution around the current values of c. The linear system is formed by stacking the variables c into a vector x, and the error functions into a vector e. Then we define

    Λ ≡ diag(Λ_ij, ...)
    J ≡ ∂e / ∂x
    H ≡ J^T Λ J                                                       (3)

The LM system is

    (H + λ diag H) Δx = J^T Λ e                                       (4)

Here λ is a small positive multiplier that transitions between gradient descent and Newton-Euler methods. Gradient descent is more robust and less likely to get stuck in local minima, but converges slowly; Newton-Euler has the opposite behavior.
The H matrix is formed by adding four components for each measurement h(c_i, c_j):

    [ J_i^T Λ_ij J_i    J_i^T Λ_ij J_j ]
    [ J_j^T Λ_ij J_i    J_j^T Λ_ij J_j ]                              (5)

Here we have abused the notation slightly, with J_i being the Jacobian of e_ij with respect to the variable c_i. The components are all 3x3 blocks. The right-hand side is formed by adding the 3x1 blocks J_i^T Λ_ij e_ij and J_j^T Λ_ij e_ij for each constraint.

Solving the linear equation yields an increment Δx that can be added back in to the current value of x as follows:

    t_i = t_i + Δt_i
    θ_i = θ_i + Δθ_i                                                  (6)

C. Error Jacobians

Jacobians of the measurement function h appear in the normal equations (4), and we list them here (with [a ; b] denoting vertical stacking):

    ∂h/∂t_i = [ -R_i^T ; 0 0 ]    ∂h/∂θ_i = [ (∂R_i^T/∂θ_i)(t_j - t_i) ; -1 ]
    ∂h/∂t_j = [ R_i^T ; 0 0 ]     ∂h/∂θ_j = [ 0 ; 0 ; 1 ]             (7)

D. Sparsity

We are interested in large systems, where the number of poses ||c|| can be 10k or more (the largest real-world indoor dataset we have been able to find is about 3k poses, but we can generate synthetic datasets of any order). The number of system variables is 3||c||, and the H matrix is ||c||² in size, or over 10^8 elements. Manipulating such large matrices is expensive. Fortunately, for typical scenarios the number of constraints grows only linearly with the number of poses, so that H is very sparse. We can take advantage of the sparsity to solve the linear problem more efficiently.

For solving (4) in sparse format, we use the CSparse package [3]. This package has a highly-optimized Cholesky decomposition solver for sparse linear systems. It employs several strategies to decompose H efficiently, including a logical ordering and an approximate minimum degree (AMD) algorithm to reorder variables when H is large.

In general the complexity of decomposition will be O(n³) in the number of variables. For sparse matrices, the complexity will depend on the density of the Cholesky factor, which in turn depends on the structure of H and the order of its variables. Mahon et al. [19] have analyzed the behavior of the Cholesky factorization as a function of the loop closures in the SLAM system. If the number of loop closures is constant, then the Cholesky factor density is O(n), and decomposition is O(n). If the number of loop closures grows linearly with the number of variables, then the Cholesky factor density grows as O(n²), and decomposition is O(n³).

E. Compressed Column Storage

Each iteration of the LM algorithm has three steps: setting up the linear system, decomposing H, and finding Δx by back-substitution. Setting up the system is linear in the number of constraints (and hence in the number of variables, for most graph-based SLAM systems). In many situations it can be the more costly part of the linear solver. Here we outline an efficient method for setting up the sparse matrix form of H from the constraints generated by Equation (5).

CSparse uses compressed column storage (CCS) format for sparse matrices. The following example shows the basic idea:

    A = | 1 0 4 0 |        col_ptr = 0 2 4 5 7
        | 0 5 0 2 |        row_ind = 0 3 1 3 0 1 2
        | 0 0 0 1 |        val     = 1 6 5 8 4 2 1
        | 6 8 0 0 |

Each nonzero entry in the matrix is placed in the val vector. Entries are ordered by column first, and then by row within the column. col_ptr has one entry for each column, plus a last entry which is the total number of nonzeros (nnz). The col_ptr entry for a column points to the start of that column in the row_ind and val arrays. Finally, row_ind gives the row index of each entry within a column.

CCS format is storage-efficient but is difficult to create incrementally, since each new nonzero addition to a column causes a shift in all subsequent entries.
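The CCS layout can be illustrated with a short dense-to-sparse conversion (a Python sketch for exposition; CSparse itself is a C library). It reproduces the col_ptr, row_ind, and val arrays of the example above:

```python
def dense_to_ccs(A):
    """Convert a dense matrix (list of rows) to compressed column storage.

    Entries are ordered column-first, then by row within each column;
    col_ptr gets one entry per column plus a final entry equal to nnz.
    """
    nrows, ncols = len(A), len(A[0])
    col_ptr, row_ind, val = [0], [], []
    for c in range(ncols):
        for r in range(nrows):
            if A[r][c] != 0:
                row_ind.append(r)   # row index of this nonzero
                val.append(A[r][c])
        col_ptr.append(len(val))    # start of the next column / final nnz
    return col_ptr, row_ind, val

# The example matrix from the text:
A = [[1, 0, 4, 0],
     [0, 5, 0, 2],
     [0, 0, 0, 1],
     [6, 8, 0, 0]]
col_ptr, row_ind, val = dense_to_ccs(A)
# col_ptr -> [0, 2, 4, 5, 7], row_ind -> [0, 3, 1, 3, 0, 1, 2],
# val -> [1, 6, 5, 8, 4, 2, 1]
```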
The most efficient way would be to create the sparse matrix in column-wise order, which would require cycling through the constraints ||c|| times. Instead, we go through the constraints just once and store each 3x3 block J_i^T Λ_ij J_j in a special block-oriented data structure that parallels the CCS format. The algorithm is given in Table I. In this algorithm, we make a pass through the constraints to store the 3x3 block matrices into C++ std::map data structures, one for each column. Maps are efficient at ordered insertion based on their keys, which here are the row indices. Once this data structure is created (step (2)), we use the ordered nature of the maps to create the sparse CCS format of H by looping over each map in the order of its keys, first to create the column and row indices, and then to put in the values.
The reason for separating the column/row creation from the value insertion is that the former only has to be done once for any set of iterations of LM. Note that only the upper triangular elements of H are stored, since the Cholesky solver in CSparse only looks at this part and assumes the matrix is symmetric.

TABLE I
ALGORITHM FOR SETTING UP THE SPARSE H MATRIX

H = CreateSparse(e, c_f)
Input: set of constraints e_ij, and a list of free nodes (variables) c_f
Output: sparse upper triangular H matrix in CCS format
1) Initialize a vector of size ||c_f|| of C++ std::map's; each map is associated with the corresponding column of H. The key of the map is the row index, and the data is an empty 3x3 matrix. Let map[i, j] stand for the j'th entry of the i'th map.
2) For each constraint e_ij, assuming i < j:
   a) In the steps below, create the map entries if they do not exist.
   b) If c_i is free, map[i, i] += J_i^T Λ_ij J_i
   c) If c_j is free, map[j, j] += J_j^T Λ_ij J_j
   d) If c_i, c_j are free, map[j, i] += J_i^T Λ_ij J_j
3) Set up the sparse upper triangular matrix H:
   a) In the steps below, ignore elements of the 3x3 map[i, i] entries that are below the diagonal.
   b) Go through map[] in column then row order, and set up col_ptr and row_ind by determining the number of elements in each column, and their row positions.
   c) Go through map[] again in column then row order, and insert entries into val sequentially.

TABLE II
CONTINUABLE LM ALGORITHM

ContinuableLM(c, e, λ)
Input: nodes c and constraints e, and diagonal increment λ
Output: updated c
1) If λ = 0, set λ to the stored value from the previous run.
2) Set up the sparse H matrix using CreateSparse(e, c - c0), with c0 as the fixed pose.
3) Solve (H + λ diag H) Δx = J^T Λ e, using sparse Cholesky with AMD.
4) Update the variables c - c0 using Equation (6).
5) If the error e has decreased, divide λ by two and save it, and return the updated poses for c - c0.
6) If the error e has increased, multiply λ by two and save it, and return the original poses for c - c0.
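The construction of Table I can be sketched in Python, with dicts iterated in sorted-key order standing in for the C++ std::map; the 3x3 blocks are assumed to be precomputed and passed in, and all nodes are treated as free (a simplified illustration, names ours):

```python
def create_sparse(constraints, n):
    """Sketch of Table I: accumulate the upper-triangular 3x3 blocks of H,
    then emit block-level CCS arrays in two passes (structure, then values).

    constraints: list of (i, j, Hii, Hjj, Hij) with i < j, each H** a 3x3
    block standing for Ji^T Lam Ji, Jj^T Lam Jj, and Ji^T Lam Jj.
    """
    cols = [dict() for _ in range(n)]        # one "map" per column of H

    def add(col, row, block):
        acc = cols[col].setdefault(row, [[0.0] * 3 for _ in range(3)])
        for r in range(3):
            for c in range(3):
                acc[r][c] += block[r][c]

    # Step 2: one pass through the constraints
    for i, j, Hii, Hjj, Hij in constraints:
        add(i, i, Hii)
        add(j, j, Hjj)
        add(j, i, Hij)                       # upper-triangular block (i < j)

    # Step 3b: structure pass, in column-then-row order
    col_ptr, row_ind = [0], []
    for c in range(n):
        row_ind.extend(sorted(cols[c]))
        col_ptr.append(len(row_ind))
    # Step 3c: value pass, inserting blocks sequentially in the same order
    val = [cols[c][r] for c in range(n) for r in sorted(cols[c])]
    return col_ptr, row_ind, val
```

Because the keys are walked in sorted order on each pass (std::map keeps them sorted automatically), two constraints on the same pair of nodes simply accumulate into the same blocks.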
F. Continuable LM System

The LM system algorithm is detailed in Table II. It does one step of the LM algorithm, for a set of nodes c with associated measurements. Running a single iteration allows for incremental operation of LM, so that more nodes can be added between iterations. The algorithm is continuable in that λ is saved between iterations, so that successive iterations can change λ based on their results. The idea is that adding a few nodes and measurements does not change the system much, so the value of λ carries information about the state of gradient descent vs. Newton-Euler methods. When a loop closure occurs, the system can have trouble finding a good minimum, and λ will tend to rise over the next few iterations to start the system down a good path.

There are many different ways of adjusting λ; we choose a simple one. The system starts with a small lambda, 10^-4. If the updated system has a lower error than the original, λ is halved. If the error is the same or larger, λ is doubled. This works quite well in the case of incremental optimization. As long as the error decreases when adding nodes, λ decreases and the system stays in the Newton-Euler region. When a link is added that causes a large distortion that does not get corrected, λ can rise, and the system goes back to the more robust gradient descent.
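The λ schedule just described, together with Table II's accept/reject rule, can be sketched generically; solve_step below is a made-up placeholder for the sparse solve of Equation (4):

```python
def continuable_lm_step(x, err_fn, solve_step, lam):
    """One continuable LM iteration (cf. Table II): try a step, keep it and
    halve lambda if the error decreased, otherwise discard it and double
    lambda. Returning lambda lets the caller carry it across iterations.

    err_fn(x)         -> scalar total error (chi^2)
    solve_step(x, l)  -> candidate update of x for damping l (placeholder)
    """
    if lam == 0.0:
        lam = 1e-4                    # initial small lambda from the text
    x_new = solve_step(x, lam)
    if err_fn(x_new) < err_fn(x):
        return x_new, lam / 2.0       # Newton-Euler region: reduce damping
    return x, lam * 2.0               # rejected: move toward gradient descent
```

On a toy one-dimensional quadratic error, for example, a damped-Newton solve_step drives x toward the minimum while λ keeps shrinking.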
V. SCAN MATCHING

SPA requires precision (inverse covariance) estimates from the matching of laser scans (or other sensors). Several scan-matching algorithms can provide this: for example, Gutmann et al. [11] use point matches to lines extracted in the reference scan and return a Gaussian estimate of the error. More recently, the correlation method of Konolige and Chou [17], extended by Olson [22], provides an efficient method for finding the globally best match within a given range, while returning an accurate covariance. The method allows either a single scan or a set of aligned scans to be matched against another single scan or set of aligned scans. This method is used in SRI's mapping system Karto(1) for both local matching of sequential scans and loop-closure matching of sets of scans, as in [12]. To generate the real-world datasets for the experiments, we ran Karto on 63 stored robot logs of various sizes, using its scan matching and optimizer to build a map and generate constraints, including loop closures. The graphs were saved and used as input to all methods in the experiments.

(1) Information on Karto can be found at www.kartorobotics.com.

VI. EXPERIMENTS

In this section, we present experiments in which we compare SPA with state-of-the-art approaches on 63 real-world datasets and on a large simulated dataset. We considered a broad variety of approaches, including the best of the state of the art:
- Information filter: DSIF [7]
- Stochastic gradient descent: TORO [10]
- Decomposed nonlinear system: Treemap [8]
- Sparse pose adjustment: SPA, with (a) a sparse direct Cholesky solver, and (b) iterative PCG [15]

We updated the PCG implementation to use the same "continued LM" method as SPA; the only difference is in the underlying linear solver. The preconditioner is an incomplete Cholesky method, and the conjugate gradient is implemented in sparse matrix format.

We also evaluated a dense Cholesky solver, but both the computational and the memory requirements were several orders of magnitude larger than those of the other approaches. As an example, for a dataset with 1600 constraints and 800 nodes, one iteration using a dense Cholesky solver took 2.1 seconds, while the other approaches require an average of a few milliseconds. All experiments were executed on an Intel Core i7-920 running at 2.67 GHz.

In the following, we report a cumulative analysis of the behavior of the approaches under different operating conditions; results for all datasets are available online at www.ros.org/research/2010/spa. We tested each method both in batch mode and on-line.
In batch mode, we provided the algorithm with the full graph, while in on-line mode we carried out a certain number of iterations whenever a new node was added to the graph. In the remainder of this section we first discuss the off-line experiments, then we present the on-line experiments. We conclude by analyzing all methods on a large-scale simulated dataset.

A. Accuracy Measure

For these indoor datasets there is no ground truth. Instead, the measure of goodness for a pose-constraint system is the covariance-weighted squared error of the constraints, or χ² error. If the scan matcher is accurate, a lower χ² means that the scans align better. Figure 3 shows this effect on a real-world dataset.

Fig. 3. Effect of the χ² reduction. This figure shows two maps generated from two graphs having a different χ² error. The error of the graph associated with the top map is 10 times bigger than that of the bottom one. Whereas the overall structure appears consistent in both cases, the details in the map with the lower χ² appear significantly more consistent.

B. Real-World Experiments: Off-Line Optimization

To optimize a dataset off-line, we provide each optimizer with a full description of the problem. We leave DSIF and TreeMap out of the comparison, since they only operate incrementally (DSIF is equivalent to one iteration of SPA in batch mode). Since the success of the off-line optimization strongly depends on the initial guess, we also investigated the two initialization strategies described below.
- Odometry: the nodes in the graph are initialized with incremental constraints. This is a standard approach taken in almost all graph optimization algorithms.
- Spanning-Tree: a spanning tree is constructed on the graph using a breadth-first visit. The root of the tree is the first node in the graph. The positions of the nodes are initialized according to a depth-first visit of the spanning tree. The position of a child is set to the position of the parent transformed according to the connecting constraint. In our experiments, this approach gives the best results.

For each dataset and each optimizer we computed the initial guesses described above. Every optimizer was run for a minimum number of iterations or until a termination criterion was met. We measured the time required to converge and the χ² error for each approach. The results are summarized in Figure 4 for the Odometry and Spanning-Tree initializations. For these datasets there was no substantial difference in performance between the two types of initialization.

Fig. 4. Batch optimization on real-world datasets, using Odometry and Spanning-Tree initialization. Left: the final χ² error per constraint for all the approaches. Right: the time required to converge. Every point in the plots represents one dataset. The datasets are sorted according to the number of constraints in the graph, which effectively measures the complexity of the optimization. The curves show the interpolated behavior of each method, for better visualization. Note that PCG and SPA converge to the same error.

In the error graph, PCG and SPA converged to almost exactly the same error, as expected, since they differ only in the underlying linear solver. They both dominate TORO, which has more than 10 times the error for the larger graphs. We attribute this to the inability of TORO to handle non-spherical covariances, and to its very slow convergence properties. SPA requires almost an order of magnitude less computational effort than PCG or TORO for almost all graphs.

TORO was designed to be robust to bad initializations; to test this, we also ran all methods with all nodes initialized to (0, 0, 0). In this case, SPA and PCG converged to non-global minima for all datasets, while TORO was able to reconstruct the correct topology.

C. Real-World Experiments: On-Line Optimization

For the on-line comparison, we incrementally augment the graph by adding one node and by connecting the newly added node to the previously existing graph. We invoke the optimizer after inserting each node, and in this way simulate its behavior when executed in conjunction with a SLAM front-end. The optimization is carried out for a maximum number of iterations or until the error does not decrease. The maximum number of iterations for SPA/PCG is 1; for TreeMap, 3; and for TORO, 100. Since PCG iteratively solves the linear subproblem, we limited it to 50 iterations there. These thresholds were selected to obtain the best performance in terms of error reduction. In Figure 5 we report the statistics on the execution time and on the error per constraint every time a new constraint was added.

Fig. 5. On-line optimization on real-world datasets. Left: the average χ² error per constraint after adding a node during incremental operation. Right: the average time required to optimize the graph after inserting a node. Each data point represents one dataset; the x-axis shows the total number of constraints of that dataset. Note that the error for PCG and SPA is the same in the left graph.

In terms of convergence, SPA/PCG dominate the other methods. This is not surprising in the case of DSIF, which is an information filter and therefore subject to linearization errors when closing large loops. TORO has the closest performance to SPA, but suffers from very slow convergence per iteration, characteristic of gradient methods; it also does not handle non-circular covariances, which limits its ability to achieve a minimal χ².

Fig. 6. Large simulated dataset containing 100k nodes and 400k constraints used in our experiments. Left: initial guess computed from the odometry. Right: optimized graph. Our approach requires about 10 seconds to perform a full optimization of the graph when using the spanning tree as the initial guess.
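The Spanning-Tree initialization described above can be sketched as follows (a Python sketch with illustrative names; since parents are always placed before children, visiting the tree in BFS order here yields the same poses as the paper's depth-first visit):

```python
import math
from collections import deque

def compose(parent, z):
    """Pose of a child: parent pose transformed by the constraint z."""
    x, y, th = parent
    dx, dy, dth = z
    c, s = math.cos(th), math.sin(th)
    return (x + c * dx - s * dy, y + s * dx + c * dy, th + dth)

def invert(z):
    """Inverse of a relative-pose constraint, for edges traversed backward."""
    dx, dy, dth = z
    c, s = math.cos(dth), math.sin(dth)
    return (-(c * dx + s * dy), s * dx - c * dy, -dth)

def spanning_tree_init(n, edges):
    """edges: dict mapping (i, j) to the measured offset of j in i's frame.
    Build a breadth-first spanning tree rooted at node 0 and set each
    child's pose from its parent's pose and the connecting constraint."""
    adj = {k: [] for k in range(n)}
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)
    poses = {0: (0.0, 0.0, 0.0)}
    q = deque([0])
    while q:                          # breadth-first visit picks tree edges
        u = q.popleft()
        for v in adj[u]:
            if v not in poses:
                z = edges[(u, v)] if (u, v) in edges else invert(edges[(v, u)])
                poses[v] = compose(poses[u], z)
                q.append(v)
    return poses
```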
Treemap is much harder to analyze, since it has a complicated strategy of setting up a tree structure for optimization. For these datasets, it appears to have a large tree (large dataset loops) with small leaves. The tree structure is optimized by fixing linearizations and deleting connections, which leads to fast computation but poor convergence, with a χ² almost 3 orders of magnitude worse than SPA's.

All the methods are approximately linear overall in the size of the constraint graph, which implies that the number of large loop closures grows slowly. Treemap has the best performance over all datasets, followed by SPA and DSIF. Note that SPA is extremely regular in its behavior: there is little deviation from the linear progression on any dataset, and its average and maximum times are the same. Finally, TORO and PCG use more time per iteration, with PCG about four times that of SPA. Given SPA's fast convergence, we could achieve even lower computational effort by applying it only every n node additions. We emphasize that these graphs were the largest indoor datasets we could find, and they are not challenging for SPA.

D. Synthetic Dataset

To estimate the asymptotic behavior of the algorithms, we generated a large simulated dataset. The robot moves on a grid; each cell of the grid has a side of 5 meters, and we create a node every meter. The perception range of the robot is 1.5 meters. Both the motion of the robot and the measurements are corrupted by zero-mean Gaussian noise with standard deviation σ = diag(0.01 m, 0.01 m, 0.5 deg). Whenever the robot is in the proximity of a position it has already visited, we generate a new constraint. The simulated area spans 500 x 500 meters, and the overall trajectory is 100 km long. This results in frequent re-observations. The full graph is shown in Figure 6. This is an extremely challenging dataset, much worse than any real-world dataset. In the following we report the results of batch and on-line execution of all the approaches we compared.

Fig. 7. Evolution of the χ² error during batch optimization of a large simulated dataset consisting of 100,000 nodes and 400,000 constraints, under odometry and spanning-tree initialization. The left plot shows the overall execution, while the right plot shows a magnified view of the χ² error of SPA and PCG close to the convergence point.

a) Off-Line Optimization: Each batch approach was executed with the three initializations described in the previous section: odometry, spanning-tree, and zero. Results are shown in Figure 7 as a function of time. The only approach which is able to optimize the graph from a zero or odometry initialization is TORO; SPA/PCG essentially do not converge to a global minimum under odometry or zero starts. SPA/PCG converge globally from the spanning-tree initialization after 10 seconds or so, with SPA being significantly faster at the convergence point (see the magnified view in Figure 7). TORO has good initial convergence, but has a long tail because of gradient descent.

b) On-Line Optimization: We processed the dataset incrementally, as described in Section VI-C. In Figure 8 we report the evolution of the χ² error and of the time per added node. Both SPA and TreeMap converge to a minimum χ² (see Figure 6 for the converged map). However, their computational behavior is very different: TreeMap can use up to 100 seconds per iteration, while SPA's time grows slowly with the size of the graph. Because of the re-visiting in the dataset, TreeMap has a small tree with very large leaves, and performs LM optimization at each leaf, leading to low error and high computation. The other methods have computation times equivalent to SPA's, but do not converge as well. Again DSIF performs poorly and does not converge. TORO converges, but as usual has difficulty cleaning up small errors. PCG spikes because it does not fully solve the linear subproblem, eventually leading to a higher overall error.
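A generator in the spirit of Section VI-D can be sketched roughly as follows (a Python sketch of ours, not the authors' generator; the walk pattern and the world-frame constraint offsets are simplifications, while the node spacing, perception range, and noise follow the text):

```python
import math
import random

def make_grid_dataset(loops=2, cell=5, sigma=(0.01, 0.01, math.radians(0.5)),
                      sense_range=1.5, seed=1):
    """Drive around one grid cell repeatedly, one node per meter; add a noisy
    constraint to every earlier node within the perception range (offsets are
    kept in the world frame here for brevity)."""
    rng = random.Random(seed)
    poses = [(0.0, 0.0, 0.0)]
    constraints = []
    heading = 0.0
    for _ in range(loops):
        for _side in range(4):
            for _step in range(cell):
                x, y, _th = poses[-1]
                nxt = (x + math.cos(heading), y + math.sin(heading), heading)
                k = len(poses)
                poses.append(nxt)
                # odometry edge to the previous node, plus loop closures to
                # any earlier node within the perception range
                for i in range(k):
                    px, py, pth = poses[i]
                    if i == k - 1 or math.hypot(px - nxt[0], py - nxt[1]) <= sense_range:
                        z = (nxt[0] - px + rng.gauss(0.0, sigma[0]),
                             nxt[1] - py + rng.gauss(0.0, sigma[1]),
                             nxt[2] - pth + rng.gauss(0.0, sigma[2]))
                        constraints.append((i, k, z))
            heading += math.pi / 2        # turn left at each cell corner
    return poses, constraints
```

The second circuit of the cell revisits the first circuit's nodes, so loop-closure constraints appear in addition to the odometry chain, which is what makes such datasets dense in re-observations.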
Fig. 8. On-line optimization of the large simulated dataset. In the top graph, SPA and TreeMap have the same minimum χ² error over all constraints; DSIF does not converge to a global minimum, while TORO converges slowly, and PCG spikes and has trouble with larger graphs. In the bottom figure, SPA is clearly superior to TreeMap in computation time.

VII. CONCLUSIONS

In this paper, we presented and experimentally validated a nonlinear optimization system called Sparse Pose Adjustment (SPA) for 2D pose graphs. SPA relies on efficient linear matrix construction and sparse non-iterative Cholesky decomposition to efficiently represent and solve large sparse pose graphs. None of the real-world datasets we could find were challenging, even in batch mode: the largest map takes sub-second time to get fully optimized. On-line computation is in the range of 10 ms/node at worst; unlike EKF filters or other methods that have poor computational performance, we do not have to split the map into submaps [23] to get a globally minimal error.

Compared to state-of-the-art methods, SPA is faster and converges better. The only exception is in poorly-initialized maps, where only the stochastic gradient technique of TORO can converge; but by applying a spanning-tree initialization, SPA can solve even the difficult synthetic example better than TORO. When combined with a scan-matching front-end, SPA will enable on-line exploration and map construction. Because it is a pose-graph method, SPA allows incremental additions and deletions to the map, facilitating lifelong mapping [16].

All the relevant code for running SPA and the other methods we implemented is available online and as open source, along with the datasets and the simulation generator (www.ros.org/research/2010/spa). An accompanying video shows SPA in both on-line and off-line mode on a large real-world dataset.

REFERENCES

[1] M. Agrawal and K. Konolige. FrameSLAM: From bundle adjustment to real-time visual mapping. IEEE Transactions on Robotics, 24(5), October 2008.
[2] F. F. Campos and J. S. Rollett. Analysis of preconditioners for conjugate gradients through distribution of eigenvalues. Int. J. of Computer Mathematics, 58(3):135-158, 1995.
[3] T. A. Davis. Direct Methods for Sparse Linear Systems (Fundamentals of Algorithms 2). Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 2006.
[4] F. Dellaert. Square Root SAM. In Proc. of Robotics: Science and Systems (RSS), pages 177-184, Cambridge, MA, USA, 2005.
[5] J. Dongarra, A. Lumsdaine, R. Pozo, and K. Remington. A sparse matrix library in C++ for high performance architectures. In Object Oriented Numerics Conference, pages 214-218, 1994.
[6] T. Duckett, S. Marsland, and J. Shapiro. Fast, on-line learning of globally consistent maps. Journal of Autonomous Robots, 12(3):287-300, 2002.
[7] R. M. Eustice, H. Singh, and J. J. Leonard. Exactly sparse delayed-state filters for view-based SLAM. IEEE Trans. Robotics, 22(6), 2006.
[8] U. Frese. Treemap: An O(log n) algorithm for indoor simultaneous localization and mapping. Journal of Autonomous Robots, 21(2):103-122, 2006.
[9] U. Frese, P. Larsson, and T. Duckett. A multilevel relaxation algorithm for simultaneous localisation and mapping. IEEE Transactions on Robotics, 21(2):1-12, 2005.
[10] G. Grisetti, C. Stachniss, and W. Burgard. Non-linear constraint network optimization for efficient map learning. IEEE Transactions on Intelligent Transportation Systems, 10:428-439, 2009.
[11] J.-S. Gutmann, M. Fukuchi, and K. Sabe. Environment identification by comparing maps of landmarks. In International Conference on Robotics and Automation, 2003.
[12] J.-S. Gutmann and K. Konolige. Incremental mapping of large cyclic environments. In Proc. of the IEEE Int. Symposium on Computational Intelligence in Robotics and Automation (CIRA), pages 318-325, Monterey, CA, USA, 1999.
[13] A. Howard, M. Mataric, and G. Sukhatme. Relaxation on a mesh: a formalism for generalized localization. In Proc. of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), pages 1055-1060, 2001.
[14] M. Kaess, A. Ranganathan, and F. Dellaert. iSAM: Fast incremental smoothing and mapping with efficient data association. In International Conference on Robotics and Automation, Rome, 2007.
[15] K. Konolige. Large-scale map-making. In Proceedings of the National Conference on AI (AAAI), 2004.
[16] K. Konolige and J. Bowman. Towards lifelong visual maps. In International Conference on Intelligent Robots and Systems, pages 1156-1163, 2009.
[17] K. Konolige and K. Chou. Markov localization using correlation. In Proc. of the Int. Conf. on Artificial Intelligence (IJCAI), 1999.
[18] F. Lu and E. Milios. Globally consistent range scan alignment for environment mapping. Journal of Autonomous Robots, 4:333-349, 1997.
[19] I. Mahon, S. Williams, O. Pizarro, and M. Johnson-Roberson. Efficient view-based SLAM using visual loop closures. IEEE Transactions on Robotics, 24(5):1002-1014, October 2008.
[20] M. Montemerlo and S. Thrun. Large-scale robotic 3-D mapping of urban structures. In ISER, 2004.
[21] E. Olson, J. Leonard, and S. Teller. Fast iterative optimization of pose graphs with poor initial estimates. In Proc. of the IEEE Int. Conf. on Robotics & Automation (ICRA), pages 2262-2269, 2006.
[22] E. B. Olson. Real-time correlative scan matching. In International Conference on Robotics and Automation, pages 4387-4393, 2009.
[23] L. Paz, J. Tardos, and J. Neira. Divide and conquer: EKF SLAM in O(n). IEEE Transactions on Robotics, 24(5), October 2008.
[24] S. Thrun and M. Montemerlo. The GraphSLAM algorithm with applications to large-scale mapping of urban structures. Int. J. Rob. Res., 25(5-6):403-429, 2006.
[25] B. Triggs, P. F. McLauchlan, R. I. Hartley, and A. W. Fitzgibbon. Bundle adjustment: a modern synthesis. In Vision Algorithms: Theory and Practice, LNCS, pages 298-375. Springer Verlag, 2000.

