Robust Visual Inertial Odometry Using a Direct EKF-Based Approach


Abstract— In this paper, we present a monocular visual-inertial odometry algorithm which, by directly using pixel intensity errors of image patches, achieves accurate tracking performance while exhibiting a very high level of robustness. After detection, the tracking of the multilevel patch features is closely coupled to the underlying extended Kalman filter by directly using the intensity errors as innovation terms during the update step.
marginalize them out using a nullspace decomposition, thus leading to a small filter state size.

In the present paper we propose a visual-inertial odometry framework which combines and extends several of the above mentioned approaches. While targeting a simple and consistent approach and avoiding ad-hoc solutions, we adapt the structure of the standard visual-inertial EKF-SLAM formulation [14], [12]. The following key points are integrated into the proposed framework:

- Point features are parametrized by a bearing vector and a distance parameter with respect to the current frame. A suitable ⊞-algebra is used for deriving the corresponding dynamics and performing the filtering operations.
- Multilevel patch features are directly tracked within the EKF, whereby the intensity errors are used as innovation terms during the update step.
- A QR-decomposition is employed in order to reduce the high-dimensional error terms and thus keep the Kalman update computationally tractable.
- A purely robocentric representation of the full filter state is employed. The camera extrinsics as well as the additive IMU biases are also co-estimated.

Together this yields a fully robocentric and direct monocular visual-inertial odometry framework which can be run in real-time on a single standard CPU core. In several experiments on real data we show its reliable and accurate tracking performance while exhibiting a high robustness against fast motions and various disturbances. The framework is implemented in C++ and is available as open-source software [2].

II. FILTER SETUP

A. Overall Filter Structure and State Parametrization

The overall structure of the filter is derived from the one employed in [14], [12]: the inertial measurements are used to propagate the state of the filter, while the visual information is taken into account during the filter update steps. As a fundamental difference, we make use of a fully robocentric representation of the filter state, which can be seen as an adaptation of [4] (which is vision-only). One advantage of this formulation is that problems with unobservable states can inherently be avoided and thus the consistency of the estimates can be improved. On the other hand, noise from the gyroscope will affect all states that need to be rotated during the state propagation (see section II-B). However, since the gyroscope noise is relatively small and because most states are observable, this does not represent a significant issue.

Three different coordinate frames are used throughout the paper: the inertial world coordinate frame, I, the IMU-fixed coordinate frame, B, as well as the camera-fixed coordinate frame, V. For tracking N visual features, we use the following filter state:

$x = (r, v, q, b_f, b_\omega, c, z, \mu_0, \ldots, \mu_N, \rho_0, \ldots, \rho_N)$,   (1)

with
- r: robocentric position of the IMU (expressed in B),
- v: robocentric velocity of the IMU (expressed in B),
- q: attitude of the IMU (map from B to I),
- b_f: additive bias on the accelerometer (expressed in B),
- b_ω: additive bias on the gyroscope (expressed in B),
- c: translational part of the IMU-camera extrinsics (expressed in B),
- z: rotational part of the IMU-camera extrinsics (map from B to V),
- μ_i: bearing vector to feature i (expressed in V),
- ρ_i: distance parameter of feature i.

The generic parametrization of the distance d_i of a feature i is given by the mapping d_i = d(ρ_i) (with derivative d'(ρ_i)). In the context of this work we mainly tested the inverse-distance parametrization, d(ρ_i) = 1/ρ_i. The investigation of further parametrizations will be part of future work.

Rotations (q, z ∈ SO(3)) and unit vectors (μ ∈ S²) are parametrized following the approach of Hertzberg et al. [10]. This is required in order to perform operations like computing differences or derivatives, as well as for representing the uncertainty of the state in a minimal manner. For parametrizing unit vectors we employ rotations as the underlying representation, whereby we define a ⊟-operator which returns the difference between two unit vectors within a 2D linear subspace. The advantage of this parametrization is that the tangent space can be easily computed (which is used for defining the ⊞-operator).
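To make the unit-vector ⊞/⊟-operators more concrete, the following C++ sketch shows one possible realization of the 2D tangent-space difference and retraction described above. The helper names, the use of Eigen, and the particular choice of tangent-space basis are illustrative assumptions and are not taken from the paper or its open-source release.

```cpp
// Illustrative sketch (not the paper's code): a minimal 2D tangent-space
// parametrization of unit vectors on S^2, assuming Eigen is available.
#include <Eigen/Dense>
#include <cmath>

// N(mu): 3x2 matrix whose columns span the tangent space at the bearing vector mu.
Eigen::Matrix<double, 3, 2> tangentBasis(const Eigen::Vector3d& mu) {
  // Pick any vector not parallel to mu to build an orthonormal basis.
  Eigen::Vector3d ref = (std::abs(mu.z()) < 0.9) ? Eigen::Vector3d::UnitZ()
                                                 : Eigen::Vector3d::UnitX();
  Eigen::Matrix<double, 3, 2> N;
  N.col(0) = mu.cross(ref).normalized();
  N.col(1) = mu.cross(N.col(0)).normalized();
  return N;
}

// boxplus: perturb a unit vector by a 2D increment expressed in its tangent space.
Eigen::Vector3d boxPlus(const Eigen::Vector3d& mu, const Eigen::Vector2d& delta) {
  const Eigen::Vector3d axis = tangentBasis(mu) * delta;   // rotation vector in 3D
  const double angle = axis.norm();
  if (angle < 1e-12) return mu;
  return Eigen::AngleAxisd(angle, axis / angle) * mu;      // rotate mu accordingly
}

// boxminus: 2D difference between two unit vectors, expressed in the tangent
// space of the second argument (the reference).
Eigen::Vector2d boxMinus(const Eigen::Vector3d& mu1, const Eigen::Vector3d& mu2) {
  const double c = mu2.dot(mu1);
  const Eigen::Vector3d axis = mu2.cross(mu1);   // rotation axis, lies in tangent plane of mu2
  const double s = axis.norm();
  if (s < 1e-12) return Eigen::Vector2d::Zero();
  const double angle = std::atan2(s, c);
  return tangentBasis(mu2).transpose() * (axis / s * angle);
}
```

With this construction, boxPlus(mu2, boxMinus(mu1, mu2)) recovers mu1, which is the defining property needed for computing differences, derivatives, and minimal covariances on the unit sphere.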
By using the combined bearing vector and distance parametrization, features can be initialized in an undelayed manner, i.e., the features are integrated into the filter at detection. The distance of a feature is initialized with a fixed value or, if sufficiently converged, with an estimate of the current average scene distance. The corresponding covariance is set to a very large value. In comparison to other parametrizations we do not over-parametrize the 3D feature location estimates, whereby each feature corresponds to 3 columns in the covariance matrix of the state (2 for the bearing vector and 1 for the distance parameter). This also avoids the need for re-parametrization [22].

B. State Propagation

Based on the proper acceleration measurement, f̃, and the rotational rate measurement, ω̃, the evaluation of the IMU-driven state propagation results in the following set of continuous differential equations (the superscript × denotes the skew-symmetric matrix of a vector):

$\dot{r} = -\hat{\omega}^{\times} r + v + w_r$,   (2)
$\dot{v} = -\hat{\omega}^{\times} v + \hat{f} + q^{-1}(g)$,   (3)
$\dot{q} = -q(\hat{\omega})$,   (4)
$\dot{b}_f = w_{bf}$,   (5)
$\dot{b}_\omega = w_{b\omega}$,   (6)
$\dot{c} = w_c$,   (7)
$\dot{z} = w_z$,   (8)
$\dot{\mu}_i = N^T(\mu_i)\,\omega_V - \frac{N^T(\mu_i)\,v_V}{d(\rho_i)} + w_{\mu,i}$,   (9)
$\dot{\rho}_i = -\frac{\mu_i^T v_V}{d'(\rho_i)} + w_{\rho,i}$,   (10)

where N(μ) linearly projects a 3D vector onto the 2D tangent space around the bearing vector μ, with the bias-corrected and noise-affected IMU measurements

$\hat{f} = \tilde{f} - b_f - w_f$,   (11)
$\hat{\omega} = \tilde{\omega} - b_\omega - w_\omega$,   (12)

and with the camera linear velocity and rotational rate

$v_V = z(v + \hat{\omega}^{\times} c)$,   (13)
$\omega_V = z(\hat{\omega})$.   (14)

Furthermore, g is the gravity vector expressed in the world coordinate frame, and the terms of the form w_* are white Gaussian noise processes. The corresponding covariance parameters can either be taken from the IMU specifications or have to be tuned manually. Using an appropriate Euler forward integration scheme, i.e., using the ⊞-operator where appropriate, the above time-continuous equations can be transformed into a set of discrete prediction equations which are used during the prediction step of the filter [10]. Please note that the derivatives of bearing vectors and rotations lie within 2D and 3D vector spaces, respectively. This is required for achieving a minimal and consistent representation of the filter state and covariance.
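To illustrate how the continuous-time model above can be discretized with a simple Euler-forward step and the ⊞-operator, consider the following sketch. It propagates only the mean of the robocentric position, velocity, and attitude (no covariance propagation, noise terms dropped); the quaternion convention, type names, and integration details are assumptions made for illustration and do not reproduce the paper's actual implementation.

```cpp
#include <Eigen/Dense>
#include <Eigen/Geometry>

struct ImuMeasurement {
  Eigen::Vector3d f_tilde;      // measured proper acceleration
  Eigen::Vector3d omega_tilde;  // measured rotational rate
};

struct FilterState {
  Eigen::Vector3d r;          // robocentric IMU position (in B)
  Eigen::Vector3d v;          // robocentric IMU velocity (in B)
  Eigen::Quaterniond q;       // attitude, assumed to map B to I
  Eigen::Vector3d b_f, b_w;   // accelerometer / gyroscope biases
};

// One Euler-forward prediction step over dt (mean only).
void predict(FilterState& x, const ImuMeasurement& m, double dt,
             const Eigen::Vector3d& g /* gravity expressed in I */) {
  const Eigen::Vector3d f_hat = m.f_tilde - x.b_f;       // eq. (11), noise-free
  const Eigen::Vector3d w_hat = m.omega_tilde - x.b_w;   // eq. (12), noise-free

  const Eigen::Vector3d r_dot = -w_hat.cross(x.r) + x.v;                        // eq. (2)
  const Eigen::Vector3d v_dot = -w_hat.cross(x.v) + f_hat + x.q.inverse() * g;  // eq. (3)
  x.r += dt * r_dot;
  x.v += dt * v_dot;

  // Attitude propagation (eq. (4)): apply the rotation increment via the
  // boxplus operator on SO(3). The sign follows from the assumed Hamilton
  // convention (q maps B to I, w_hat in B) and may differ from the paper's notation.
  const Eigen::Vector3d dtheta = w_hat * dt;
  const double angle = dtheta.norm();
  if (angle > 1e-12) {
    x.q = x.q * Eigen::Quaterniond(Eigen::AngleAxisd(angle, dtheta / angle));
  }
  x.q.normalize();
  // Biases, extrinsics, and feature states follow eqs. (5)-(10) analogously.
}
```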
C. Filter Update

For every captured image we perform a state update. We assume that we know the intrinsic calibration of the camera and can therefore compute the projection of a bearing vector μ to the corresponding pixel coordinates, p = π(μ). As will be described in section III-B, we derive a 2D linear constraint, b_i(π(μ_i)), for each feature i which is predicted to be visible in the current frame with bearing vector μ_i. This linear constraint encompasses the intensity errors associated with a specific feature and can be directly employed as innovation term within the Kalman update (affected by additive discrete Gaussian pixel intensity noise n_i):

$y_i = b_i(\pi(\mu_i)) + n_i$,   (15)

together with the Jacobian

$H_i = A_i(\pi(\mu_i))\,\frac{\mathrm{d}\pi}{\mathrm{d}\mu}(\mu_i)$.   (16)

By stacking the above terms for all visible features we can directly perform a standard EKF update. However, if the initial guess for a certain bearing vector μ_i has a large uncertainty, the update will potentially fail. This typically occurs if features get newly initialized and exhibit a large distance uncertainty. In order to avoid this issue we improve the initial guess for a bearing vector with large uncertainty by performing a patch-based search of the feature (section III-B). This essentially improves the linearization point of the EKF: the bearing vector obtained from the patch search, μ̂_i, is used for evaluating the terms in eqs. (15) and (16). Please note that the EKF update equations have to be slightly adapted in order to account for the altered linearization point. A similar alternative would be to directly employ an iterated EKF.

In order to account for moving objects or other disturbances, a simple Mahalanobis-based outlier detection is implemented within the update step. It compares the obtained innovation with the predicted innovation covariance and rejects the measurement whenever the weighted norm exceeds a certain threshold. This method inherently takes into account the covariance of the state and the measurements. For instance, it also considers the image gradients and thereby tends to reject gradient-less image patches more easily.
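A minimal sketch of such a Mahalanobis gate is given below, assuming a 2D innovation per feature as in eq. (15); the chi-square threshold and the Eigen-based interface are illustrative choices, not values prescribed by the paper.

```cpp
#include <Eigen/Dense>

// Mahalanobis-based outlier gate for a single feature's 2D innovation.
// y : innovation (reduced intensity-error term b_i, cf. eq. (15))
// S : predicted innovation covariance, S = H * P * H^T + R
// Returns true if the measurement should be rejected as an outlier.
bool isOutlier(const Eigen::Vector2d& y, const Eigen::Matrix2d& S,
               double threshold = 5.99 /* e.g. chi-square 95% quantile, 2 DoF */) {
  // Squared Mahalanobis distance, computed via a robust LDLT solve.
  const double d2 = y.dot(S.ldlt().solve(y));
  return d2 > threshold;
}
```

Because the innovation covariance S already contains the intensity-gradient Jacobians, a nearly gradient-less patch produces a small predicted covariance relative to its error and is therefore rejected more easily, exactly as described above.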
III. MULTILEVEL PATCH FEATURE HANDLING

Along the lines of other visual-inertial EKF approaches ([14], [12]), we fully integrate the visual features into the state of the Kalman filter (see also section II-A). Within the prediction step the new locations of the multilevel patch features are estimated by considering the IMU-driven motion model (eq. (9)). Especially if the calibration of the extrinsics and the feature distance parameters have converged, this yields high-quality predictions for the feature locations. Additionally, the covariance of the predicted pixel locations can be easily computed, and the computational effort of a possible pre-alignment strategy can be adapted accordingly. The subsequent update step computes an innovation term by evaluating the discrepancy between the projection of the multilevel patch into the image frame and the image itself. Considering the cross-correlations between the states, the EKF spreads the resulting corrections throughout the filter state. In the following, the different steps and algorithms involved in the feature handling are discussed in more detail. The overall workflow for a single feature is depicted in fig. 1.

Fig. 1. Overview of the workflow of a feature in the filter (prediction, optional pre-alignment, patch warping and extraction, update, quality check and deletion, detection of new features). The heuristics for adding and removing features are adapted to the total number of possible features.

A. Structure and Warping

For a given image pyramid (factor-2 down-sampling) and a given bearing vector μ, a multilevel patch is obtained by extracting constant-size (here 8×8 pixels) patches, P_l, for each image level l at the corresponding pixel coordinates p = π(μ). The advantage is that tracking such features is robust against bad initial guesses and image blur. Furthermore, such patch features allow a direct intensity-error feedback into the filter. In comparison to reprojection-error based algorithms, this allows to formulate a more accurate error model which inherently takes into account the texture of the tracked image patch. For instance, it also enables the use of edge features, whereby the gained information would be along the perpendicular to the edge.

By tracking two additional bearing vectors within the patch, we can compute an affine warping matrix W ∈ R^{2×2} in order to account for the local distortion of the patches between subsequent images. We assume that the distance of the feature is large w.r.t. the size of the patch and can thus choose the normal of the patches to point towards the center of the camera. Also, when a feature was successfully tracked within a frame, the multilevel patch is re-extracted in order to avoid the accumulation of errors.
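The following sketch illustrates the multilevel patch structure: an 8×8 patch is extracted at every pyramid level around the projected feature location, with coordinates scaled by the down-sampling factor per level. The OpenCV containers, the bilinear interpolation helper, and the omission of the affine warp and boundary checks are simplifications assumed here for brevity.

```cpp
#include <opencv2/core.hpp>
#include <array>
#include <cmath>

constexpr int kPatchSize = 8;   // 8x8 patch per level
constexpr int kNumLevels = 4;   // image pyramid levels (factor-2 down-sampling)

// Bilinear intensity lookup at a sub-pixel location (assumes CV_8U input,
// boundary checks omitted for brevity).
static float interpolate(const cv::Mat& img, float x, float y) {
  const int x0 = static_cast<int>(std::floor(x)), y0 = static_cast<int>(std::floor(y));
  const float ax = x - x0, ay = y - y0;
  return (1 - ax) * (1 - ay) * img.at<uchar>(y0, x0) +
         ax * (1 - ay) * img.at<uchar>(y0, x0 + 1) +
         (1 - ax) * ay * img.at<uchar>(y0 + 1, x0) +
         ax * ay * img.at<uchar>(y0 + 1, x0 + 1);
}

// Extract a multilevel patch around the pixel coordinates p = pi(mu) of a feature.
// pyramid[l] is the level-l image; coordinates are scaled by s_l = 0.5^l.
std::array<cv::Mat, kNumLevels> extractMultilevelPatch(
    const std::array<cv::Mat, kNumLevels>& pyramid, const cv::Point2f& p) {
  std::array<cv::Mat, kNumLevels> patches;
  for (int l = 0; l < kNumLevels; ++l) {
    const float s = 1.0f / static_cast<float>(1 << l);  // down-sampling factor
    const cv::Point2f c(p.x * s, p.y * s);              // feature location at level l
    patches[l] = cv::Mat(kPatchSize, kPatchSize, CV_32F);
    for (int v = 0; v < kPatchSize; ++v) {
      for (int u = 0; u < kPatchSize; ++u) {
        // Sample the patch symmetrically around the feature location.
        patches[l].at<float>(v, u) = interpolate(
            pyramid[l], c.x + u - kPatchSize / 2.0f + 0.5f,
                        c.y + v - kPatchSize / 2.0f + 0.5f);
      }
    }
  }
  return patches;
}
```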
B. Alignment Equations and QR-Decomposition

Throughout the framework we make use of intensity errors in order to pre-align features or to update the filter state. For a given image pyramid with images I_l and a given multilevel patch feature (with coordinates p and patches P_l), the following intensity errors can be evaluated for image level l and patch pixel p_j:

$e_{l,j} = P_l(p_j) - I_l(p\,s_l + W p_j) - m$,   (17)

where the scaling factor s_l = 0.5^l accounts for the down-sampling between the images of the image pyramid and W is the affine warping matrix from section III-A. Furthermore, by subtracting the mean intensity error m we can account for inter-frame illumination changes.

For regular patch alignment, the squared error terms of eq. (17) can be summed over all image levels and patch pixels and combined into a single Gauss-Newton optimization in order to find the optimal patch coordinates. However, the direct use of such a large number of error terms within an EKF would make it computationally intractable. In order to tackle this issue we apply a QR-decomposition to the linear equation system resulting from stacking all error terms of eq. (17) for given estimated coordinates p̂:

$b(\hat{p}) = A(\hat{p})\,\delta p$,   (18)

where A(p̂) can be computed based on the patch intensity gradients. Independent of the rank of the matrix A(p̂), a QR-decomposition can be used to obtain an equivalent reduced linear equation system

$\bar{b}(\hat{p}) = \bar{A}(\hat{p})\,\delta p$,   (19)

with Ā(p̂) ∈ R^{2×2} and b̄(p̂) ∈ R². Since we assume that the additive noise magnitude on the intensities is equal for every patch pixel, we can leave it out of the above derivations (it remains constant for every entry).

One interesting remark is that, due to the scaling factor s_l in eq. (17), error terms for higher image levels have a weaker corrective influence on the filter state or the patch alignment. On the other hand, their increased robustness w.r.t. image blur or bad initial alignment strongly increases the robustness of the overall alignment method for multilevel patch features.
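The reduction from the stacked system (18) to the two-dimensional system (19) can be sketched with a thin QR-decomposition as follows; the function signature and the way the stacked error terms are assembled are assumptions for illustration.

```cpp
#include <Eigen/Dense>

// Reduce a stacked linear system b = A * dp (A: n x 2, b: n x 1, with
// n = #levels * patch pixels) to an equivalent 2-dimensional system
// b_red = A_red * dp using a QR-decomposition, so that it can serve as a
// compact innovation term in the EKF update.
void reduceByQr(const Eigen::MatrixXd& A, const Eigen::VectorXd& b,
                Eigen::Matrix2d& A_red, Eigen::Vector2d& b_red) {
  // Thin QR: A = Q * R with Q (n x 2) having orthonormal columns.
  Eigen::HouseholderQR<Eigen::MatrixXd> qr(A);
  const Eigen::MatrixXd Q =
      qr.householderQ() * Eigen::MatrixXd::Identity(A.rows(), 2);
  const Eigen::Matrix2d R =
      qr.matrixQR().topRows(2).triangularView<Eigen::Upper>();

  // Projecting the stacked intensity errors onto the column space of A keeps
  // all information relevant for the 2D increment dp.
  A_red = R;
  b_red = Q.transpose() * b;
}
```

Because A has only two columns, this projection preserves all information about the 2D pixel increment regardless of the number of levels and patch pixels, and it works even when A is rank deficient (R then simply contains a zero row).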
C. Feature Detection and Removal

The detection of new features is based on a standard FAST corner detector, which provides a large amount of candidate feature locations. After removing candidates which are close to currently tracked features, we compute an adapted Shi-Tomasi score for selecting the new features which will be added to the state. The adapted Shi-Tomasi score considers the combined Hessian on multiple image levels instead of only a single level. It directly approximates the Hessian with the gradient matrix H = A^T(p) A(p) and extracts the minimal eigenvalue. The advantage is that a high score is directly correlated with the alignment accuracy of the corresponding multilevel patch feature. Instead of returning the minimal eigenvalue, the method can return other eigenvalue-based scores like the 1- or 2-norm. This could be useful in environments with scarce corner features, whereby the presented filter could be complemented by available edge-shaped features. Finally, the detection process is also coupled to a bucketing technique in order to achieve a good distribution of the features within the image frame.

Due to the fact that we can only track a limited number of features, we require a feature management system to ensure that only reliable landmarks are inserted and kept in the filter state. Here, we fall back to heuristic methods, where we compute quality scores in order to decide whether a feature should be kept or not. The overall idea is to evaluate a local (only the last few frames) and a global (how well was the feature tracked since it has been detected) quality score and to remove the features below a certain threshold. Using an adaptive threshold we can control the total amount of features which are currently in the frame.
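A possible realization of the adapted Shi-Tomasi score is sketched below: the 2×2 gradient matrix is accumulated over all pyramid levels of a candidate patch and its minimal eigenvalue is returned. The patch container and the central-difference gradient are illustrative assumptions rather than the paper's exact computation.

```cpp
#include <Eigen/Dense>
#include <opencv2/core.hpp>
#include <vector>

// Adapted Shi-Tomasi score: accumulate the 2x2 gradient matrix H = A^T A over
// all pyramid levels of a candidate patch and return its minimal eigenvalue.
double multilevelShiTomasi(const std::vector<cv::Mat>& patches /* CV_32F, 8x8 per level */) {
  Eigen::Matrix2d H = Eigen::Matrix2d::Zero();
  for (const cv::Mat& P : patches) {
    for (int v = 1; v + 1 < P.rows; ++v) {
      for (int u = 1; u + 1 < P.cols; ++u) {
        // Central differences for the intensity gradient at (u, v).
        const double gx = 0.5 * (P.at<float>(v, u + 1) - P.at<float>(v, u - 1));
        const double gy = 0.5 * (P.at<float>(v + 1, u) - P.at<float>(v - 1, u));
        const Eigen::Vector2d g(gx, gy);
        H += g * g.transpose();
      }
    }
  }
  // Minimal eigenvalue of the symmetric 2x2 matrix H.
  return Eigen::SelfAdjointEigenSolver<Eigen::Matrix2d>(H).eigenvalues().minCoeff();
}
```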
IV. RESULTS AND DISCUSSION

A. Experimental Setup

The data for the experiments were recorded with the VI-Sensor [20], equipped with two time-synchronized, global-shutter, wide-VGA 1/3-inch imagers in a fronto-parallel stereo configuration. The cameras are equipped with lenses with a diagonal field of view of 120 degrees and are factory-calibrated by the manufacturer for a standard pinhole projection model and a radial-tangential distortion model. The imagers are hardware time-synchronized to the IMU to ensure mid-exposure IMU triggering. In the context of this work only the image stream from one camera is required.

Ground truth for the pose of the sensor is provided through an external motion capture system. The rate of the IMU measurements is 200 Hz and the image frame rate is 20 Hz. The employed IMU is an industrial-grade ADIS16448, with an angular random walk of 0.66 deg/√hr and a velocity random walk of 0.11 m/s/√hr. The maximal number of features in the state is set to 50 and the algorithm is run using image pyramids with 4 levels. Whenever possible, covariance parameters are selected based on hardware specifications. Strong tuning was not necessary, and the framework works well for a large range of parameters. The initial IMU-camera extrinsics are only roughly guessed (the translation is set to zero), and the initial inverse-distance parameter for a feature is set to 0.5 m⁻¹ with a standard deviation of 1 m⁻¹. A screenshot of the running framework is depicted in fig. 2.

Fig. 2. Screenshot of the running visual-inertial odometry framework. The uncertainty ellipses of the predicted feature locations are shown in yellow, whereby only features which are newly initialized (stretched ellipses) and features which re-enter the frame have a significant uncertainty. Green points are the locations after the update step. Green numbers are the tracking counts (1 for newly initialized features). In the top left a virtual horizon is depicted.

B. Experiment with Slow Motions

An experiment with slow to medium-fast hand-held motions of about 1 min was carried out to evaluate the performance of the framework with different numbers of total features (from 10 to 50 in steps of 10). The performance was assessed by computing the relative position error w.r.t. the traveled distance [9]. Furthermore, we compared the obtained results to a batch optimization framework along the lines of [16]. Figure 3 depicts the extracted relative error values. The achieved performance tends to be similar to the one of the batch optimization framework and often achieves slightly higher accuracy. While these results depend on the specific dataset and parameter tuning, we also have to mention that the relatively high rotational motion (average of around 1.5 rad/s) favors approaches which can handle arbitrarily short feature tracks. Given the undelayed initialization of features within our approach, the resulting filter is able to extract visual information from a feature's second observation onwards.

Surprisingly, the performance was relatively independent of the total amount of tracked features. A significant drop in accuracy could only be observed for feature counts below 20. This observation can have different reasons. One could be the type of sensor motions with relatively high rotational rates, which can lead to more bad features or outliers. Another point is that our approach considers 256 = 4×8×8 intensity errors per tracked feature and thus cannot directly be compared to standard feature-tracking based visual odometry frameworks, which typically require much higher feature counts. A more in-depth evaluation of this effect will be part of future work. The timings of the proposed framework are listed in table I for a single core of an Intel i7-2760QM. The setup with 50 features uses an average processing time of 29.72 ms per processed image and can thus easily be run in real-time.

Fig. 3. Relative position errors over traveled distance. Gray lines are the relative errors of the presented approach, where the darkest line corresponds to 50 features and the brightest line to 10 features, respectively. The dashed green line represents the performance of the reference batch optimization framework.

TABLE I
TIMINGS OF THE PRESENTED APPROACH PER PROCESSED IMAGE

Tot. Features    10      20      30      40      50
Timing [ms]      6.65    10.50   14.87   21.48   29.72
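For illustration, the sketch below computes relative translation errors over a fixed traveled distance in the spirit of the evaluation of [9]; the trajectory representation is deliberately simplified (positions only, a single segment length, no per-segment frame alignment), so it should be read as an assumption-laden outline of the metric rather than the exact protocol.

```cpp
#include <Eigen/Dense>
#include <vector>

struct Pose { Eigen::Vector3d p; };  // position only, for brevity

// Cumulative distance traveled along the ground-truth trajectory.
static std::vector<double> cumulativeDistance(const std::vector<Pose>& gt) {
  std::vector<double> d(gt.size(), 0.0);
  for (size_t i = 1; i < gt.size(); ++i)
    d[i] = d[i - 1] + (gt[i].p - gt[i - 1].p).norm();
  return d;
}

// Relative translation errors for segments of the given length (in meters).
// Assumes est and gt are time-aligned and of equal length. A full evaluation
// would express both displacements in the respective segment start frames.
std::vector<double> relativeErrors(const std::vector<Pose>& est,
                                   const std::vector<Pose>& gt, double segment) {
  const std::vector<double> dist = cumulativeDistance(gt);
  std::vector<double> errors;
  for (size_t i = 0, j = 0; i < gt.size(); ++i) {
    while (j < gt.size() && dist[j] - dist[i] < segment) ++j;
    if (j >= gt.size()) break;
    const Eigen::Vector3d d_est = est[j].p - est[i].p;
    const Eigen::Vector3d d_gt = gt[j].p - gt[i].p;
    errors.push_back((d_est - d_gt).norm() / segment);  // error per traveled distance
  }
  return errors;
}
```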
C. Experiment with Fast Motions

Here, we evaluate the robustness of the proposed approach w.r.t. very fast motions. We recorded a hand-held dataset with a mean rotational rate of around 3.5 rad/s and with peaks of up to 8 rad/s. The motion capture system exhibited a relatively high number of bad tracking instances, which we filtered out as well as possible. We investigate the tracking performance of the attitude and of the robocentric velocities, where the corresponding estimates with 3σ-bounds are plotted in figs. 4 and 5, respectively. It can clearly be seen that the estimates nicely fit the ground truth data from the motion capture system. As known from previous work, the inclination angles and the robocentric velocities of visual-inertial setups are fully observable [17], and we can nicely observe the initial decrease of the corresponding covariance (especially when the system gets excited). On the other hand, the yaw angle is unobservable and drifts slowly with time.

Fig. 4. Inertial Euler angle estimates. Red: estimate, blue: motion capture, red dashed: 3σ-bound. Only the yaw angle is not observable and exhibits a growing covariance. The inclination angles (roll and pitch) exhibit a high-quality tracking accuracy.

Fig. 5. Velocity estimates. Red: estimate, blue: motion capture, red dashed: 3σ-bound. The robocentric velocity is fully observable and thus exhibits a bounded uncertainty. It very nicely tracks the reference from the motion capture system (and probably also exhibits a higher precision).

Figures 6 and 7 depict the estimation of the calibration parameters. Again, the estimates together with their 3σ-bounds are plotted. Depending on the excitation of the system, the estimated values converge relatively quickly. It can be observed that the translational term of the IMU-camera calibration requires a lot of rotational motion in order to converge appropriately. For the presented experiment, the accelerometer bias exhibits the worst convergence rate but is still within a reasonable range.

Fig. 6. Estimated IMU biases. Top: gyroscope bias (red: x, blue: y, green: z), bottom: accelerometer bias (red: x, blue: y, green: z). The gyroscope biases exhibit a better convergence than the accelerometer biases, probably due to the more direct link of rotational rates to the visual errors.

Fig. 7. Estimated IMU-camera extrinsics. Top: translation (red: x, blue: y, green: z), bottom: orientation (red: yaw, blue: pitch, green: roll). Especially when sufficiently excited, the estimates converge quickly. The reached values correspond approximately to the ones obtained from an offline calibration.

Furthermore, we also observed a divergence mode for the presented approach. It can occur when the velocity estimate diverges, e.g., due to missing motion or too many outliers. The problem is then that the filter attempts to minimize the effect of the erroneous velocity on the bearing vectors by setting the distance of the features to infinity. This in turn lowers any corrective effect on the diverging velocity, resulting in further divergence. All in all, this was very rarely observed during regular usage, especially if the system was properly excited at the start.

D. Flying Experiments

Implementing the framework on-board a UAV with a forward-oriented visual-inertial sensor, we also performed preliminary experiments on a real robot. The special aspect here is that the visual-inertial odometry framework was initialized on the ground without any previous calibration motions, i.e., the calibration parameters had to converge during take-off. The output of the filter was directly used for feedback control of the UAV. Figure 8 depicts the estimated position output of the framework during take-off, flying, and landing. Compared to the motion capture system, the filter exhibits a certain offset which can mainly be attributed to the online calibration of the filter.

Fig. 8. Estimated trajectory (red) on-board a UAV compared to ground truth (blue) from the motion capture system. During take-off, flying, and landing the output of the filter is used to stabilize and control the UAV. Calibration is performed online.
V. CONCLUSION

In this paper we presented a visual-inertial filtering framework which uses direct intensity errors as visual measurements within the extended Kalman filter update. By choosing a fully robocentric representation of the filter state together with a numerically minimal bearing/distance representation of the features, we avoid major consistency problems while exhibiting accurate tracking performance and high robustness. Especially in difficult situations with very fast motions or outliers, the presented approach manages to keep track of the state with only minor drift of the yaw and position estimates. The framework can be run on-board a UAV with a feature count of 50 at a framerate of 20 Hz and was used to stabilize the flight of a UAV from take-off to landing.

Future work will include a more extensive evaluation of the multilevel patch features in the context of intensity-error based visual-inertial odometry frameworks. Furthermore, we would also like to extend the online calibration in order to include the camera intrinsics. Also, the framework could relatively easily be adapted in order to handle multiple cameras. This could improve the filter performance, especially for cases with a lack of translational motion. Another option to avoid divergence would be to use heuristics-based methods in order to detect such modes and to add zero-velocity pseudo-measurements in order to stabilize the filter. A detailed observability analysis could also be performed, where the dependency of unobservable modes w.r.t. the sensor motions would be of high interest.

REFERENCES

[1] C. Audras, A. I. Comport, M. Meilland, and P. Rives, "Real-time dense appearance-based SLAM for RGB-D sensors," in Australasian Conf. on Robotics and Automation, 2011.
[2] M. Bloesch, S. Omari, and A. Jaeger, "ROVIO," 2015. [Online]. Available: https://github.com/ethz-asl/rovio
[3] J. A. Castellanos, J. Neira, and J. D. Tardos, "Limits to the consistency of EKF-based SLAM," in 5th IFAC Symp. on Intelligent Autonomous Vehicles, 2004.
[4] J. Civera, O. G. Grasa, A. J. Davison, and J. M. M. Montiel, "1-Point RANSAC for EKF-based Structure from Motion," in IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, 2009.
[5] A. J. Davison, "Real-Time Simultaneous Localisation and Mapping with a Single Camera," in IEEE Int. Conference on Computer Vision, 2003.
[6] J. Engel, T. Schöps, and D. Cremers, "LSD-SLAM: Large-Scale Direct Monocular SLAM," in European Conf. on Computer Vision, 2014.
[7] C. Forster, M. Pizzoli, and D. Scaramuzza, "SVO: Fast Semi-Direct Monocular Visual Odometry," in IEEE Int. Conf. on Robotics and Automation, 2014.
[8] S. Gehrig, F. Eberli, and T. Meyer, "A Real-Time Low-Power Stereo Vision Engine Using Semi-Global Matching," Computer Vision Systems, vol. 5815, pp. 134-143, 2009.
[9] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, "Vision meets robotics: The KITTI dataset," The International Journal of Robotics Research, vol. 32, pp. 1231-1237, 2013.
[10] C. Hertzberg, R. Wagner, U. Frese, and L. Schröder, "Integrating generic sensor fusion algorithms with sound state representations through encapsulation of manifolds," Information Fusion, vol. 14, no. 1, pp. 57-77, 2013.
[11] G. P. Huang, A. I. Mourikis, and S. I. Roumeliotis, "Analysis and improvement of the consistency of extended Kalman filter based SLAM," in IEEE Int. Conf. on Robotics and Automation, May 2008.
[12] E. S. Jones and S. Soatto, "Visual-inertial navigation, mapping and localization: A scalable real-time causal approach," The International Journal of Robotics Research, vol. 30, no. 4, pp. 407-430, 2011.
[13] S. J. Julier and J. K. Uhlmann, "A counter example to the theory of simultaneous localization and map building," in IEEE Int. Conf. on Robotics and Automation, May 2001.
[14] J. Kelly and G. S. Sukhatme, "Visual-Inertial Sensor Fusion: Localization, Mapping and Sensor-to-Sensor Self-calibration," Int. Journal of Robotics Research, vol. 30, no. 1, pp. 56-79, Nov. 2011.
[15] C. Kerl, J. Sturm, and D. Cremers, "Robust odometry estimation for RGB-D cameras," in IEEE Int. Conf. on Robotics and Automation, 2013.
[16] S. Leutenegger, P. Furgale, V. Rabaud, M. Chli, K. Konolige, and R. Siegwart, "Keyframe-Based Visual-Inertial SLAM using Nonlinear Optimization," in Proceedings of Robotics: Science and Systems, Berlin, Germany, June 2013.
[17] M. Li and A. I. Mourikis, "High-precision, consistent EKF-based visual-inertial odometry," The International Journal of Robotics Research, vol. 32, no. 6, pp. 690-711, 2013.
[18] J. Montiel, J. Civera, and A. Davison, "Unified Inverse Depth Parametrization for Monocular SLAM," in Proceedings of Robotics: Science and Systems, Philadelphia, USA, Aug. 2006.
[19] A. I. Mourikis and S. I. Roumeliotis, "A Multi-State Constraint Kalman Filter for Vision-aided Inertial Navigation," in IEEE Int. Conf. on Robotics and Automation, 2007.
[20] J. Nikolic, J. Rehder, M. Burri, P. Gohl, S. Leutenegger, P. T. Furgale, and R. Siegwart, "A Synchronized Visual-Inertial Sensor System with FPGA Pre-Processing for Accurate Real-Time SLAM," in IEEE Int. Conf. on Robotics and Automation, 2014.
[21] S. Shen, Y. Mulgaonkar, N. Michael, and V. Kumar, "Initialization-free monocular visual-inertial estimation with application to autonomous MAVs," in International Symposium on Experimental Robotics, 2014.
[22] J. Solà, T. Vidal-Calleja, J. Civera, and J. M. M. Montiel, "Impact of landmark parametrization on monocular EKF-SLAM with points and lines," International Journal of Computer Vision, vol. 97, pp. 339-368, 2012.
[23] A. Stelzer, H. Hirschmüller, and M. Görner, "Stereo-vision-based navigation of a six-legged walking robot in unknown rough terrain," The International Journal of Robotics Research, vol. 31, no. 4, pp. 381-402, Feb. 2012.
[24] S. Weiss, M. Achtelik, S. Lynen, L. Kneip, M. Chli, and R. Siegwart, "Monocular vision for long-term micro aerial vehicle state estimation: A compendium," Journal of Field Robotics, vol. 30, no. 5, pp. 803-831, 2013.
[25] D. Wooden, M. Malchano, K. Blankespoor, A. Howard, A. A. Rizzi, and M. Raibert, "Autonomous navigation for BigDog," in IEEE Int. Conf. on Robotics and Automation, 2010.
