Efficient Sparse Pose Adjustment for 2D Mapping

Pose graphs have become a popular representation for solving the simultaneous localization and mapping (SLAM) problem. A pose graph is a set of robot poses connected by nonlinear constraints obtained from observations of features common to nearby poses. Optimizing large pose graphs has been a bottleneck.
The performance of the optimization, measured in terms of accuracy of the estimate and execution time, has a major impact on the overall mapping system. In this paper we describe in detail an efficient and compact optimization approach that operates on 2D graphs. Our algorithm can be coupled with arbitrary frontends that handle different kinds of sensors. For clarity of presentation we briefly describe a frontend for laser data; however, the general concepts can be straightforwardly applied to different sensors.

Fig. 2. Typical graph-based SLAM system. The frontend (graph construction from sensor data) and the backend (optimization of poses given constraints) are executed in an interleaved way.

To summarize the paper: we propose an efficient approach for optimizing 2D pose graphs that uses direct sparse Cholesky decomposition to solve the linear system. The linear system is computed in a memory-efficient way that minimizes cache misses and thus significantly improves the performance. We compare our method, in both accuracy and speed, to existing LM and non-LM approaches that are available, and show that SPA outperforms them. Open-source implementations are available both in C++ and in MATLAB/Octave.

Efficient direct (non-iterative) algorithms to solve sparse systems have become available [3], thus reviving a series of approaches for optimizing the graphs which had been discarded in the past.

III. SYSTEM FORMULATION

Popular approaches to solve the SLAM problem are the so-called "graph-based" or "network-based" methods. The idea is to represent the history of the robot measurements by a graph. Every node of the graph represents a sensor measurement and the pose at which the measurement was taken. An edge between two nodes encodes the spatial information arising from the alignment of the connected measurements and can be regarded as a spatial constraint between the two nodes.

In the context of graph-based SLAM, one typically considers two different problems. The first one is to identify the constraints based on sensor data. This so-called data association problem is typically hard due to potential ambiguities or symmetries in the environment. A solution to this problem is often referred to as the SLAM frontend, and it directly deals with the sensor data. The second problem is to correct the poses of the robot to obtain a consistent map of the environment given the constraints. This part of the approach is often referred to as the optimizer or the SLAM backend. Its task is to seek a configuration of the nodes that maximizes the likelihood of the measurements encoded in the constraints.

IV. SPARSE POSE ADJUSTMENT

To optimize a set of poses and constraints, we use the well-known Levenberg-Marquardt (LM) method as a framework, with particular implementations that make it efficient for the sparse systems encountered in 2D map building. In analogy to the Sparse Bundle Adjustment of computer vision, which is a similarly efficient implementation of LM for cameras and features, we call our system Sparse Pose Adjustment (SPA).

A. Error formulation

The variables of the system are the set of global poses c of the robot, each parameterized by a translation and an angle: c_i = [t_i; θ_i] = [x_i, y_i, θ_i]^T. A constraint is a measurement of one node c_j from another node's (c_i) position. The measured offset between c_i and c_j, in c_i's frame, is z̄_ij, with precision matrix Λ_ij (inverse of covariance). For any actual poses of c_i and c_j, their offset can be calculated as

    h(c_i, c_j) = [ R_i^T (t_j − t_i) ;  θ_j − θ_i ]        (1)

Here R_i is the 2x2 rotation matrix of θ_i; h(c_i, c_j) is called the measurement equation. The error function associated with a constraint, and the total error, are

    e_ij ≡ z̄_ij − h(c_i, c_j)
    χ²(c, z̄) ≡ Σ_ij e_ij^T Λ_ij e_ij        (2)

Note that the angle parameter in h(c_i, c_j) is not unique, since adding or subtracting 2π yields the same result; angle differences are therefore always normalized to the interval (−π, π] when they occur.
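As a concrete sketch of the measurement equation (1) and the error (2), the following fragment (illustrative names, not the paper's actual sources) computes h(c_i, c_j) and the residual for one constraint, wrapping angles to (−π, π]:

```cpp
#include <cassert>
#include <cmath>

// Pose c = (t_x, t_y, theta).  h(c_i, c_j) is the offset of c_j expressed
// in c_i's frame; the residual is z_ij - h(c_i, c_j), with the angle
// wrapped to (-pi, pi].  Names here are illustrative only.
const double kPi = 3.14159265358979323846;

struct Pose { double x, y, th; };

double normAngle(double a) {
  while (a > kPi)   a -= 2.0 * kPi;
  while (a <= -kPi) a += 2.0 * kPi;
  return a;
}

// Measurement equation (1): h = ( R_i^T (t_j - t_i), theta_j - theta_i )
Pose h(const Pose& ci, const Pose& cj) {
  double c = std::cos(ci.th), s = std::sin(ci.th);
  double dx = cj.x - ci.x, dy = cj.y - ci.y;
  return { c * dx + s * dy, -s * dx + c * dy, normAngle(cj.th - ci.th) };
}

// Error (2): e_ij = z_ij - h(c_i, c_j)
Pose residual(const Pose& zij, const Pose& ci, const Pose& cj) {
  Pose hij = h(ci, cj);
  return { zij.x - hij.x, zij.y - hij.y, normAngle(zij.th - hij.th) };
}
```

For a robot at the origin facing +y, a node one meter ahead in world coordinates shows up as an offset of one meter along the local x axis.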
An alternative view of this problem is given by the spring-mass model in physics. In this view, the nodes are regarded as masses and the constraints as springs connected to the masses. The minimal-energy configuration of the springs and masses describes a solution to the mapping problem.

During its operation, a graph-based SLAM system interleaves the execution of the frontend and of the backend, as shown in Figure 2. This is required because the frontend needs to operate on a partially optimized map to restrict the search for potential constraints. The more accurate the current estimate is, the more robust the constraints generated by the frontend will be, and the faster its operation.

B. Linear System

The optimal placement of c is found by minimizing the total error in Equation (2). A standard method for solving this problem is Levenberg-Marquardt (LM), iterating a linearized solution around the current values of c. The linear system is formed by stacking the variables c into a vector x, and the error functions into a vector e. Then we define

    J ≡ ∂e/∂x,    H ≡ J^T Λ J        (3)

where Λ is block-diagonal in the precisions Λ_ij. The LM system is

    (H + λ diag H) Δx = J^T Λ e        (4)

Here λ is a small positive multiplier that transitions between gradient descent and Newton-Euler methods. Gradient descent is more robust and less likely to get stuck in local minima, but converges slowly; Newton-Euler has the opposite behavior.

The cost of solving the LM system will depend on the density of the Cholesky factor, which in turn depends on the structure of H and the order of its variables. Mahon et al. [19] have analyzed the behavior of the Cholesky factorization as a function of the loop closures in the SLAM system. If the number of loop closures is constant, then the Cholesky factor density is O(n), and decomposition is O(n).
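A toy instance of the damped system (4): with Λ = I and only two scalar variables, (H + λ diag H)Δx = J^T Λ e can be solved in closed form by Cramer's rule. The numbers are purely illustrative; real SPA systems have 3|c| variables and are solved by sparse Cholesky decomposition.

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Solve (H + lambda * diag(H)) dx = b for a 2x2 system by Cramer's rule,
// where b plays the role of J^T Lambda e in equation (4).  A sketch only.
using Vec2 = std::array<double, 2>;
using Mat2 = std::array<std::array<double, 2>, 2>;

Vec2 dampedSolve(const Mat2& H, const Vec2& b, double lambda) {
  double a00 = H[0][0] * (1.0 + lambda);  // damping scales the diagonal
  double a11 = H[1][1] * (1.0 + lambda);
  double a01 = H[0][1], a10 = H[1][0];
  double det = a00 * a11 - a01 * a10;
  return { (b[0] * a11 - a01 * b[1]) / det,
           (a00 * b[1] - b[0] * a10) / det };
}
```

As λ grows, the step shrinks toward a scaled gradient-descent step; as λ goes to zero, it approaches the undamped Newton step.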
If the number of loop closures grows linearly with the number of variables, then the Cholesky factor density grows as O(n^2) and decomposition is O(n^3).

The H matrix is formed by adding four components for each measurement h(c_i, c_j):

    H_ii += J_i^T Λ_ij J_i,    H_ij += J_i^T Λ_ij J_j
    H_ji += J_j^T Λ_ij J_i,    H_jj += J_j^T Λ_ij J_j        (5)

Here we have slightly abused the notation, with J_i being the Jacobian of e_ij with respect to the variable c_i. The components are all 3x3 blocks. The right-hand side is formed by adding the 3x1 blocks J_i^T Λ_ij e_ij and J_j^T Λ_ij e_ij for each constraint. Solving the linear equation yields an increment Δx that can be added back in to the current value of x as follows:

    t_i = t_i + Δt_i,    θ_i = θ_i + Δθ_i        (6)

C. Error Jacobians

Jacobians of the measurement function h appear in the normal equations (4), and we list them here (with signs consistent with e_ij = z̄_ij − h(c_i, c_j)):

    J_i ≡ ∂e_ij/∂c_i = [ R_i^T    −(∂R_i^T/∂θ_i)(t_j − t_i) ]
                       [ 0  0      1                         ]        (7)

    J_j ≡ ∂e_ij/∂c_j = [ −R_i^T    0 ]
                       [  0  0    −1 ]

D. Sparsity

We are interested in large systems, where the number of poses |c| can be 10k or more (the largest real-world indoor dataset we have been able to find is about 3k poses, but we can generate synthetic datasets of any order). The number of system variables is 3|c|, and the H matrix is |c|^2, or over 10^8 elements. Manipulating such large matrices is expensive. Fortunately, for typical scenarios the number of constraints grows only linearly with the number of poses, so that H is very sparse. We can take advantage of the sparsity to solve the linear problem more efficiently.

For solving (4) in sparse format, we use the CSparse package [3]. This package has a highly optimized Cholesky decomposition solver for sparse linear systems. It employs several strategies to decompose H efficiently, including a logical ordering and an approximate minimum degree (AMD) algorithm to reorder variables when H is large.

E. Compressed Column Storage

Each iteration of the LM algorithm has three steps: setting up the linear system, decomposing H, and finding Δx by back-substitution. Setting up the system is linear in the number of constraints (and hence in the number of variables for most graph-based SLAM systems). In many situations it can be the more costly part of the linear solver. Here we outline an efficient method for setting up the sparse matrix form of H from the constraints generated by Equation (5).

CSparse uses compressed column storage (CCS) format for sparse matrices. The figure below shows the basic idea.

         | 1 0 4 0 |
    A =  | 0 5 0 2 |      col_ptr:  0 2 4 5 7
         | 0 0 0 1 |      row_ind:  0 3 1 3 0 1 2
         | 6 8 0 0 |      val:      1 6 5 8 4 2 1

Each nonzero entry in the array is placed in the val vector. Entries are ordered by column first, and then by row within the column. col_ptr has one entry for each column, plus a last entry which is the number of total nonzeros (nnz). The col_ptr entry for a column points to the start of the column in the row_ind and val variables. Finally, row_ind gives the row index of each entry within a column.

CCS format is storage-efficient but is difficult to create incrementally, since each new nonzero addition to a column causes a shift in all subsequent entries. The most efficient way would be to create the sparse matrix in column-wise order, which would require cycling through the constraints |c| times. Instead, we go through the constraints just once, and store each 3x3 block J_i^T Λ_ij J_j in a special block-oriented data structure that parallels the CCS format. The algorithm is given in Table I. In this algorithm, we make a pass through the constraints to store the 3x3 block matrices into C++ std::map data structures, one for each column. Maps are efficient at ordered insertion based on their keys, which is the row index. Once this data structure is created (step (2)), we use the ordered nature of the maps to create the sparse CCS format of H by looping over each map in the order of its keys, first to create the column and row indices, and then to put in the values.
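The CCS layout can be checked with a few lines of code. The matrix below is a small 4x4 example; the accessor scans one column's slice of row_ind. This is an illustrative sketch, not the CSparse API.

```cpp
#include <cassert>
#include <vector>

// Compressed column storage of the 4x4 example
//     | 1 0 4 0 |
//     | 0 5 0 2 |
//     | 0 0 0 1 |
//     | 6 8 0 0 |
// col_ptr[j]..col_ptr[j+1] delimits column j's slice of row_ind/val,
// and col_ptr.back() equals nnz.
struct Ccs {
  std::vector<int> col_ptr, row_ind;
  std::vector<double> val;
};

double at(const Ccs& m, int r, int c) {
  for (int k = m.col_ptr[c]; k < m.col_ptr[c + 1]; ++k)
    if (m.row_ind[k] == r) return m.val[k];
  return 0.0;  // structural zero
}
```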
The reason for separating the column/row creation from the value insertion is that the former only has to be done once for any set of iterations of the LM algorithm. In general, the complexity of decomposition will be O(n^3) in the number of variables; for sparse matrices, the complexity depends on the density of the Cholesky factor. Note that only the upper-triangular elements of H are stored, since the Cholesky solver in CSparse only looks at this part and assumes the matrix is symmetric.

TABLE I
ALGORITHM FOR SETTING UP THE SPARSE H MATRIX

H = CreateSparse(e, c_f)
Input: set of constraints e_ij, and a list of free nodes (variables) c_f
Output: sparse upper-triangular H matrix in CCS format
1) Initialize a vector of size |c_f| of C++ std::map's; each map is associated with the corresponding column of H. The key of the map is the row index, and the data is an empty 3x3 matrix. Let map[i, j] stand for the j'th entry of the i'th map.
2) For each constraint e_ij, assuming i < j:
   a) In the steps below, create the map entries if they do not exist.
   b) If c_i is free, map[i, i] += J_i^T Λ_ij J_i
   c) If c_j is free, map[j, j] += J_j^T Λ_ij J_j
   d) If c_i, c_j are free, map[j, i] += J_i^T Λ_ij J_j
3) Set up the sparse upper-triangular matrix H:
   a) In the steps below, ignore elements of the 3x3 map[i, i] entries that are below the diagonal.
   b) Go through map[] in column then row order, and set up col_ptr and row_ind by determining the number of elements in each column, and their row positions.
   c) Go through map[] again in column then row order, and insert entries into val sequentially.

TABLE II
CONTINUABLE LM ALGORITHM

ContinuableLM(c, e, λ)
Input: nodes c and constraints e, and diagonal increment λ
Output: updated c
1) If λ = 0, set λ to the stored value from the previous run.
2) Set up the sparse H matrix using CreateSparse(e, c), with the first node fixed.
3) Solve (H + λ diag H) Δx = J^T Λ e, using sparse Cholesky with AMD.
4) Update the variables c using Equation (6).
5) If the error e has decreased, divide λ by two and save it, and return the updated poses for c.
6) If the error e has increased, multiply λ by two and save it, and return the original poses for c.
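A scalar-entry sketch of the Table I idea: accumulate each column of H in a std::map keyed by row index, then emit the CCS arrays in one ordered pass. This uses doubles instead of 3x3 blocks and omits the upper-triangular filtering; it is not the SPA sources.

```cpp
#include <cassert>
#include <map>
#include <vector>

// Build CCS arrays from per-column std::map's, as in Table I but with
// scalar entries.  std::map iterates its keys (row indices) in order,
// so each column comes out sorted by row.
struct Ccs {
  std::vector<int> col_ptr, row_ind;
  std::vector<double> val;
};

Ccs toCcs(const std::vector<std::map<int, double>>& cols) {
  Ccs m;
  m.col_ptr.push_back(0);
  for (const auto& col : cols) {
    for (const auto& entry : col) {      // ordered by row index
      m.row_ind.push_back(entry.first);
      m.val.push_back(entry.second);
    }
    m.col_ptr.push_back(static_cast<int>(m.row_ind.size()));
  }
  return m;
}
```

Separating the structure pass from the value pass, as the text describes, means only val has to be refilled on later LM iterations.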
The method allows either a single scan or a set of aligned scans to be matched against another single scan or set of aligned scans. This method is used in SRI's mapping system Karto for both local matching of sequential scans and loop-closure matching of sets of scans, as in [12]. To generate the real-world datasets for the experiments, we ran Karto on 63 stored robot logs of various sizes, using its scan matching and optimizer to build a map and generate constraints, including loop closures. The graphs were saved and used as input to all methods in the experiments.

VI. EXPERIMENTS

In this section, we present experiments where we compare SPA with state-of-the-art approaches on 63 real-world datasets and on a large simulated dataset. We considered a broad variety of approaches, including the best of the state of the art:
- Information filter: DSIF [7]
- Stochastic gradient descent: TORO [10]
- Decomposed nonlinear system: Treemap [8]
- Sparse pose adjustment: SPA, with (a) a sparse direct Cholesky solver, and (b) iterative PCG [15]

F. Continuable LM System

The LM system algorithm is detailed in Table II. It does one step in the LM algorithm, for a set of nodes c with associated measurements. Running a single iteration allows for incremental operation of LM, so that more nodes can be added between iterations. The algorithm is continuable in that λ is saved between iterations, so that successive iterations can change λ based on their results. The idea is that adding a few nodes and measurements doesn't change the system that much, so the value of λ carries information about the state of gradient descent vs. Newton-Euler methods. When a loop closure occurs, the system can have trouble finding a good minimum, and λ will tend to rise over the next few iterations to start
the system down a good path.

There are many different ways of adjusting λ; we choose a simple one. The system starts with a small λ, 10^-4. If the updated system has a lower error than the original, λ is halved; if the error is the same or larger, λ is doubled. This works quite well in the case of incremental optimization. As long as the error decreases when adding nodes, λ decreases and the system stays in the Newton-Euler region. When a link is added that causes a large distortion that does not get corrected, λ can rise and the system goes back to the more robust gradient descent.

V. SCAN MATCHING

SPA requires precision (inverse covariance) estimates from matching of laser scans (or other sensors). Several scan-matching algorithms can provide this; for example, Gutmann et al. [11] use point matches to lines extracted in the reference scan and return a Gaussian estimate of error. More recently, the correlation method of Konolige and Chou [17], extended by Olson [22], provides an efficient method for finding the globally best match within a given range, while returning an accurate covariance.

We updated the PCG implementation to use the same "continued LM" method as SPA; the only difference is in the underlying linear solver. The preconditioner is an incomplete Cholesky method, and the conjugate gradient is implemented in sparse matrix format.

We also evaluated a dense Cholesky solver, but both the computational and the memory requirements were several orders of magnitude larger than for the other approaches. As an example, for a dataset with 1600 constraints and 800 nodes, one iteration using a dense Cholesky solver takes 2.1 seconds, while the other approaches require an average of a few milliseconds. All experiments are executed on an Intel Core i7-920 running at 2.67 GHz.

In the following, we report the cumulative analysis of the behavior of the approaches under different operating conditions; results for all datasets are available online at www.ros.org/research/2010/spa. We tested each method both in batch mode and online. In batch mode,
we provided the algorithm with the full graph, while in online mode we carried out a certain number of iterations whenever a new node was added to the graph. In the remainder of this section we first discuss the offline experiments, then we present the online experiments.

1. Information on Karto can be found at www.kartorobotics.com.

For each dataset and each optimizer we computed the initial guesses described above. Every optimizer was run for a minimum number of iterations or until a termination criterion was met. We measured the time required to converge and the χ² error for each approach. The results are summarized in Figure 4 for the Odometry and Spanning-Tree initializations. For these datasets there was no substantial difference in performance between the two types of initialization.

Fig. 4. Batch optimization on real-world datasets, using Odometry and Spanning-Tree initialization. Left: the final χ² error per constraint for all the approaches. Right: the time required to converge. Every point in the plots represents one dataset. The datasets are sorted according to the number of constraints in the graph, which effectively measures the complexity of the optimization. The curves show the interpolated behavior of each method, for better visualization. Note that PCG and SPA converge to the same error.

Fig. 3. Effect of the χ² reduction. This figure shows two maps generated from two graphs having a different χ² error. The error of the graph associated with the top map is 10 times bigger than that of the one on the bottom. Whereas the overall structure appears consistent in both cases, the details in the map with the lower χ² appear significantly more consistent.

In the error graph, PCG and SPA converged to almost the same error, differing only in the underlying linear solver. They both dominate TORO, which has more than 10 times the error for the larger graphs.
We attribute this to the inability of TORO to handle non-spherical covariances, and to its very slow convergence properties. SPA requires almost an order of magnitude less computational effort than PCG or TORO for almost all graphs. We conclude by analyzing all the methods on a large-scale simulated dataset.

A. Accuracy Measure

For these indoor datasets there is no ground truth. Instead, the measure of goodness for a pose-constraint system is the covariance-weighted squared error of the constraints, or χ² error. If the scan matcher is accurate, a lower χ² means that the scans align better. Figure 3 shows this effect on a real-world dataset.

B. Real-World Experiments: Off-Line Optimization

To optimize a dataset offline, we provide each optimizer with a full description of the problem. We leave DSIF and TreeMap out of the comparison, since they only operate incrementally (DSIF is equivalent to one iteration of SPA in batch mode). Since the success of the offline optimization strongly depends on the initial guess, we also investigated the two initialization strategies described below.

Odometry: the nodes in the graph are initialized with incremental constraints. This is a standard approach taken in almost all graph optimization algorithms.

Spanning-Tree: a spanning tree is constructed on the graph using a breadth-first visit. The root of the tree is the first node in the graph. The positions of the nodes are initialized according to a depth-first visit of the spanning tree. The position of a child is set to the position of the parent transformed according to the connecting constraint. In our experiments, this approach gives the best results.

TORO was designed to be robust to bad initializations, and to test this we also ran all methods with all nodes initialized to (0, 0, 0). In this case, SPA and PCG converged to non-global minima for all datasets, while TORO was able to reconstruct the correct topology.

C. Real-World Experiments: On-Line Optimization

For the online comparison, we incrementally augment the graph by adding one node and by connecting the newly added node to the previously existing graph. We invoke the optimizer after inserting each node, and in this way simulate its behavior when executed in conjunction with a SLAM frontend. The optimization is carried out for a maximum number of iterations or until the error does not decrease. The maximum number of iterations for SPA/PCG is 1; for TreeMap, 3; and for TORO, 100. Since PCG iteratively solves the linear subproblem, we limited it to 50 iterations there. These thresholds were selected to obtain the best performance in terms of error reduction. In Figure 5 we report the statistics on the execution time and on the error per constraint every time a new constraint was added.

In terms of convergence, SPA/PCG dominate the other methods. This is not surprising in the case of DSIF, which is an information filter and therefore subject to linearization errors when closing large loops. TORO has the closest performance to SPA, but suffers from very slow convergence per iteration, characteristic of gradient methods; it also does not handle non-circular covariances, which limits its ability to achieve a minimal χ².

Fig. 5. Online optimization on real-world datasets. Left: the average χ² error per constraint after adding a node during incremental operation. Right: the average time required to optimize the graph after inserting a node. Each data point represents one dataset; the x-axis shows the total number of constraints of that dataset. Note that the error for PCG and SPA is the same in the left graph.

Fig. 6. Large simulated dataset containing 100k nodes and 400k constraints used in our experiments. Left: initial guess computed from the odometry. Right: optimized graph. Our approach requires about 10 seconds to perform a full optimization of the graph when using the spanning tree as initial guess.
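The χ² accuracy measure is the total covariance-weighted squared error of Equation (2). A minimal sketch for one 3-vector residual (illustrative code, not the paper's evaluation scripts):

```cpp
#include <array>
#include <cassert>
#include <cmath>

// One term of chi^2 = sum_ij e_ij^T Lambda_ij e_ij, for a 3-vector
// residual e and precision matrix Lambda.  Illustrative sketch.
using Vec3 = std::array<double, 3>;
using Mat3 = std::array<std::array<double, 3>, 3>;

double chi2Term(const Vec3& e, const Mat3& lambda) {
  double s = 0.0;
  for (int r = 0; r < 3; ++r)
    for (int c = 0; c < 3; ++c)
      s += e[r] * lambda[r][c] * e[c];
  return s;
}
```

Summing chi2Term over all constraints and dividing by their number gives the per-constraint error plotted in the figures.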
Treemap is much harder to analyze, since it has a complicated strategy for setting up a tree structure for optimization. For these datasets, it appears to have a large tree with small leaves. The tree structure is optimized by fixing linearizations and deleting connections, which leads to fast computation but poor convergence, with a χ² almost 3 orders of magnitude worse than SPA's.

All the methods are approximately linear overall in the size of the constraint graph, which implies that the number of large loop closures grows slowly. Treemap has the best performance over all datasets, followed by SPA and DSIF. Note that SPA is extremely regular in its behavior: there is little deviation from the linear progression on any dataset; furthermore, the average and max times are the same (see the graph in Figure 8). Finally, TORO and PCG use more time per iteration, with PCG about four times that of SPA. Given SPA's fast convergence, we could achieve even lower computational effort by applying it only every n node additions. We emphasize that these graphs were the largest indoor datasets we could find, and they are not challenging for SPA.

D. Synthetic Dataset

To estimate the asymptotic behavior of the algorithms, we generated a large simulated dataset. The robot moves on a grid; each cell of the grid has a side of 5 meters, and we create a node every meter. The perception range of the robot is 1.5 meters. Both the motion of the robot and the measurements are corrupted by a zero-mean Gaussian noise with standard deviation σ = diag(0.01 m, 0.01 m, 0.5 deg). Whenever the robot is in the proximity of a position it has visited, we generate a new constraint. The simulated area spans 500 x 500 meters, and the overall trajectory is 100 km long. This results in frequent re-observations. The full graph is shown in Figure 6. This is an extremely challenging dataset, much worse than any real-world dataset. In the following we report the results of batch and online execution of all the approaches we compared.

a) Off-Line Optimization: Each batch approach was executed with the three initializations described in the previous section: odometry, spanning-tree, and zero. Only TORO is able to optimize the graph from a zero or odometry initialization; SPA/PCG essentially do not converge to a global minimum under odometry or zero starts. SPA/PCG converge globally from the spanning-tree initialization after 10 seconds or so, with SPA being significantly faster at the convergence point (see the magnified view in Figure 7). TORO has good initial convergence, but has a long tail because of gradient descent.

Fig. 7. Evolution of the χ² error during batch optimization of a large simulated dataset consisting of 100,000 nodes and 400,000 constraints, under odometry and spanning-tree initialization. The left plot shows the overall execution, while the right plot shows a magnified view of the χ² error of SPA and PCG close to the convergence point.

b) On-Line Optimization: We processed the dataset incrementally, as described in Section VI-C. In Figure 8 we report the evolution of the χ² error and the time per added node. Both SPA and TreeMap converge to a minimum χ² (see Figure 7 for the converged map). However, their computational behavior is very different: TreeMap can use up to 100 seconds per iteration, while SPA grows slowly with the size of the graph. Because of revisiting in the dataset, TreeMap has a small tree with very large leaves, and performs LM optimization at each leaf, leading to low error and high computation. The other methods have computation equivalent to SPA's, but do not converge as well. Again DSIF performs poorly and does not converge. TORO converges, but as usual has difficulty cleaning up small errors. PCG spikes because it does not fully solve the linear subproblem, eventually leading to higher overall error.
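The dataset-generation step above (motion and measurements corrupted by zero-mean Gaussian noise with σ = diag(0.01 m, 0.01 m, 0.5 deg)) can be sketched as follows. Only the noise parameters come from the text; the generator layout is our own.

```cpp
#include <cassert>
#include <cmath>
#include <random>

// Corrupt an ideal relative motion with the simulated dataset's noise:
// zero-mean Gaussian, sigma = (0.01 m, 0.01 m, 0.5 deg).  Hypothetical
// generator; only the sigma values are taken from the text.
struct Motion { double dx, dy, dth; };

Motion corrupt(const Motion& ideal, std::mt19937& rng) {
  const double kDegToRad = 3.14159265358979323846 / 180.0;
  std::normal_distribution<double> nx(0.0, 0.01);
  std::normal_distribution<double> ny(0.0, 0.01);
  std::normal_distribution<double> nth(0.0, 0.5 * kDegToRad);
  return { ideal.dx + nx(rng), ideal.dy + ny(rng), ideal.dth + nth(rng) };
}
```

Applying this to every simulated step, and to every re-observation constraint, yields the noisy odometry initial guess shown on the left of Figure 6.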
Results are shown in Figure 7 as a function of time.

VII. CONCLUSIONS

In this paper, we presented and experimentally validated a nonlinear optimization system called Sparse Pose Adjustment (SPA) for 2D pose graphs. SPA relies on efficient linear matrix construction and sparse non-iterative Cholesky decomposition to efficiently represent and solve large sparse pose graphs. None of the real-world datasets we could find were challenging, even in batch mode: the largest map takes sub-second time to get fully optimized. Online computation is in the range of 10 ms/node at worst; unlike EKF filters or other methods that have poor computational performance, we do not have to split the map into submaps [23] to get globally minimal error.

Compared to state-of-the-art methods, SPA is faster and converges better. The only exception is in poorly initialized maps, where only the stochastic gradient technique of TORO can converge; but by applying a spanning-tree initialization, SPA can solve even the difficult synthetic example better than TORO. When combined with a scan-matching frontend, SPA will enable online exploration and map construction. Because it is a pose-graph method, SPA allows incremental additions and deletions to the map, facilitating lifelong mapping [16].

All the relevant code for running SPA and the other methods we implemented is available online and as open source, along with the datasets and the simulation generator (www.ros.org/research/2010/spa). An accompanying video shows SPA in both online and offline mode on a large real-world dataset.

Fig. 8. Online optimization of the large simulated dataset. In the top graph, SPA and TreeMap have the same minimum χ² error over all constraints. DSIF does not converge to a global minimum, while TORO converges slowly and PCG spikes and has trouble with larger graphs. In the bottom figure, SPA is clearly superior to TreeMap in computation time.

REFERENCES

[1] M. Agrawal and K. Konolige. FrameSLAM: From bundle adjustment to real-time visual mapping. IEEE Transactions on Robotics, 24(5), October 2008.
[2] F. F. Campos and J. S. Rollett. Analysis of preconditioners for conjugate gradients through distribution of eigenvalues. Int. J. of Computer Mathematics, 58(3):135-158, 1995.
[3] T. A. Davis. Direct Methods for Sparse Linear Systems (Fundamentals of Algorithms 2). Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 2006.
[4] F. Dellaert. Square Root SAM. In Proc. of Robotics: Science and Systems (RSS), pages 177-184, Cambridge, MA, USA, 2005.
[5] J. Dongarra, A. Lumsdaine, R. Pozo, and K. Remington. A sparse matrix library in C++ for high performance architectures. In Object Oriented Numerics Conference, pages 214-218, 1994.
[6] T. Duckett, S. Marsland, and J. Shapiro. Fast, on-line learning of globally consistent maps. Journal of Autonomous Robots, 12(3):287-300, 2002.
[7] R. M. Eustice, H. Singh, and J. J. Leonard. Exactly sparse delayed-state filters for view-based SLAM. IEEE Trans. Robotics, 22(6), 2006.
[8] U. Frese. Treemap: An O(log n) algorithm for indoor simultaneous localization and mapping. Journal of Autonomous Robots, 21(2):103-122, 2006.
[9] U. Frese, P. Larsson, and T. Duckett. A multilevel relaxation algorithm for simultaneous localisation and mapping. IEEE Transactions on Robotics, 21(2):1-12, 2005.
[10] G. Grisetti, C. Stachniss, and W. Burgard. Nonlinear constraint network optimization for efficient map learning. IEEE Transactions on Intelligent Transportation Systems, 10:428-439, 2009.
[11] J.-S. Gutmann, M. Fukuchi, and K. Sabe. Environment identification by comparing maps of landmarks. In International Conference on Robotics and Automation, 2003.
[12] J.-S. Gutmann and K. Konolige. Incremental mapping of large cyclic environments. In Proc. of the IEEE Int. Symposium on Computational Intelligence in Robotics and Automation (CIRA), pages 318-325, Monterey, CA, USA, 1999.
[13] A. Howard, M. Mataric, and G. Sukhatme. Relaxation on a mesh: a formalism for generalized localization. In Proc. of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), pages 1055-1060, 2001.
[14] M. Kaess, A. Ranganathan, and F. Dellaert. iSAM: Fast incremental smoothing and mapping with efficient data association. In International Conference on Robotics and Automation, Rome, 2007.
[15] K. Konolige. Large-scale map-making. In Proceedings of the National Conference on AI (AAAI), 2004.
[16] K. Konolige and J. Bowman. Towards lifelong visual maps. In International Conference on Intelligent Robots and Systems, pages 1156-1163, 2009.
[17] K. Konolige and K. Chou. Markov localization using correlation. In Proc. of the Int. Conf. on Artificial Intelligence (IJCAI), 1999.
[18] F. Lu and E. Milios. Globally consistent range scan alignment for environment mapping. Journal of Autonomous Robots, 4:333-349, 1997.
[19] I. Mahon, S. Williams, O. Pizarro, and M. Johnson-Roberson. Efficient view-based SLAM using visual loop closures. IEEE Transactions on Robotics, 24(5):1002-1014, October 2008.
[20] M. Montemerlo and S. Thrun. Large-scale robotic 3-D mapping of urban structures. In ISER, 2004.
[21] E. Olson, J. Leonard, and S. Teller. Fast iterative optimization of pose graphs with poor initial estimates. In Proc. of the IEEE Int. Conf. on Robotics & Automation (ICRA), pages 2262-2269, 2006.
[22] E. B. Olson. Real-time correlative scan matching. In International Conference on Robotics and Automation, pages 4387-4393, 2009.
[23] L. Paz, J. Tardos, and J. Neira. Divide and conquer: EKF SLAM in O(n). IEEE Transactions on Robotics, 24(5), October 2008.
[24] S. Thrun and M. Montemerlo. The GraphSLAM algorithm with applications to large-scale mapping of urban structures. Int. J. Rob. Res., 25(5-6):403-429, 2006.
[25] B. Triggs, P. F. McLauchlan, R. I. Hartley, and A. W. Fitzgibbon. Bundle adjustment: a modern synthesis. In Vision Algorithms: Theory and Practice, LNCS, pages 298-375. Springer Verlag, 2000.
Konolige, K., Grisetti, G., Kümmerle, R., et al. Efficient sparse pose adjustment for 2D mapping. In 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2010.