Robust Visual Inertial Odometry Using a Direct EKF-Based Approach

Abstract— In this paper, we present a monocular visual-inertial odometry algorithm which, by directly using pixel intensity errors of image patches, achieves accurate tracking performance while exhibiting a very high level of robustness. After detection, the tracking of the multilevel patch features is closely coupled to the underlying extended Kalman filter (EKF) by directly using the intensity errors as innovation term during the update step.
I. INTRODUCTION

Some approaches do not keep the features in the filter state but marginalize them out using a nullspace decomposition, thus leading to a small filter state size.

In the present paper we propose a visual-inertial odometry framework which combines and extends several of the above mentioned approaches. While targeting a simple and consistent approach and avoiding ad-hoc solutions, we adapt the structure of the standard visual-inertial EKF-SLAM formulation [14], [12]. The following key points are integrated into the proposed framework:
- Point features are parametrized by a bearing vector and a distance parameter with respect to the current frame. A suitable ⊞-algebra is used for deriving the corresponding dynamics and performing the filtering operations.
- Multilevel patch features are directly tracked within the EKF, whereby the intensity errors are used as innovation terms during the update step.
- A QR-decomposition is employed in order to reduce the high dimensional error terms and thus keep the Kalman update computationally tractable.
- A purely robocentric representation of the full filter state is employed. The camera extrinsics as well as the additive IMU biases are also co-estimated.
Together this yields a fully robocentric and direct monocular visual-inertial odometry framework which can be run in real-time on a single standard CPU core. In several experiments on real data we show its reliable and accurate tracking performance while exhibiting a high robustness against fast motions and various disturbances. The framework is implemented in C++ and is available as open-source software [2].

II. FILTER SETUP

A. Overall Filter Structure and State Parametrization

The overall structure of the filter is derived from the one employed in [14], [12]: the inertial measurements are used to propagate the state of the filter, while the visual information is taken into account during the filter update steps.
As a fundamental difference, we make use of a fully robocentric representation of the filter state, which can be seen as an adaptation of [4] (which is vision-only). One advantage of this formulation is that problems with unobservable states can inherently be avoided and thus the consistency of the estimates can be improved. On the other hand, noise from the gyroscope will affect all states that need to be rotated during the state propagation (see section II-B). However, since the gyroscope noise is relatively small and because most states are observable, this does not represent a significant issue.

Three different coordinate frames are used throughout the paper: the inertial world coordinate frame, I, the IMU fixed coordinate frame, B, as well as the camera fixed coordinate frame, V. For tracking N visual features, we use the following filter state:

x = (r, v, q, b_f, b_ω, c, z, μ_0, ..., μ_N, ρ_0, ..., ρ_N),  (1)

with
r: robocentric position of the IMU (expressed in B),
v: robocentric velocity of the IMU (expressed in B),
q: attitude of the IMU (map from B to I),
b_f: additive bias on the accelerometer (expressed in B),
b_ω: additive bias on the gyroscope (expressed in B),
c: translational part of the IMU-camera extrinsics (expressed in B),
z: rotational part of the IMU-camera extrinsics (map from B to V),
μ_i: bearing vector to feature i (expressed in V),
ρ_i: distance parameter of feature i.

The generic parametrization for the distance d_i of a feature i is given by the mapping d_i = d(ρ_i) (with derivative d'(ρ_i)). In the context of this work we mainly tested the inverse distance parametrization, d(ρ) = 1/ρ. The investigation of further parametrizations will be part of future work.

Rotations (q, z ∈ SO(3)) and unit vectors (μ_i ∈ S²) are parametrized by following the approach of Hertzberg et al. [10]. This is required in order to perform operations like computing differences or derivatives, as well as for representing the uncertainty of the state in a minimal manner. For parametrizing unit vectors we employ rotations as underlying representation, whereby we define a ⊟-operator which returns the difference between two unit vectors within a 2D linear subspace. The advantage of this parametrization is that the tangent space can be easily computed (which is used for defining the ⊞-operator).

By using the combined bearing vector and distance parametrization, features can be initialized in an undelayed manner, i.e., the features are integrated into the filter at detection. The distance of a feature is initialized with a fixed value or, if sufficiently converged, with an estimate of the current average scene distance. The corresponding covariance is set to a very large value. In comparison to other parametrizations we do not over-parametrize the 3D feature location estimates, whereby each feature corresponds to 3 columns in the covariance matrix of the state (2 for the bearing vector and 1 for the distance parameter). This also avoids the need for re-parametrization [22].

B. State Propagation

Based on the proper acceleration measurement, f̃, and the rotational rate measurement, ω̃, the evaluation of the IMU driven state propagation results in the following set of continuous differential equations (the superscript × denotes the skew symmetric matrix of a vector):

ṙ = −ω̂^× r + v + w_r,  (2)
v̇ = −ω̂^× v + f̂ + q⁻¹(g),  (3)
q̇ = −q(ω̂),  (4)
ḃ_f = w_bf,  (5)
ḃ_ω = w_bω,  (6)
ċ = w_c,  (7)
ż = w_z,  (8)
μ̇_i = N^T(μ_i) (μ_i^× ω_V − v_V / d(ρ_i)) + w_μ,i,  (9)
ρ̇_i = −μ_i^T v_V / d'(ρ_i) + w_ρ,i,  (10)

where N^T(μ) linearly projects a 3D vector onto the 2D tangent space around the bearing vector μ, with the bias corrected and noise affected IMU measurements

f̂ = f̃ − b_f − w_f,  (11)
ω̂ = ω̃ − b_ω − w_ω,  (12)

and with the camera linear velocity and rotational rate

v_V = z(v + ω̂^× c),  (13)
ω_V = z(ω̂).  (14)

Furthermore, g is the gravity vector expressed in the world coordinate frame, and the terms of the form w_* are white Gaussian noise processes. The corresponding covariance parameters can either be taken from the IMU specifications or have to be tuned manually. Using an appropriate Euler forward integration scheme, i.e., using the ⊞-operator where appropriate, the above time continuous equations can be transformed into a set of discrete prediction equations which are used during the prediction of the filter state [10]. Please note that the derivatives of bearing vectors and rotations lie within 2D and 3D vector spaces, respectively. This is required for achieving a minimal and consistent representation of the filter state and covariance.
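To make the prediction step concrete, the following is a minimal NumPy sketch of one Euler-forward step of eqs. (2)-(4). It is not part of the released C++ implementation [2]: a rotation matrix R (map from B to I) stands in for the attitude q, the rotation-matrix exponential plays the role of the ⊞-operator on SO(3), the noise terms are dropped, and all function names are assumptions.

```python
import numpy as np

def skew(a):
    # skew-symmetric matrix such that skew(a) @ b == np.cross(a, b)
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

def so3_exp(phi):
    # Rodrigues formula: matrix exponential of skew(phi)
    angle = np.linalg.norm(phi)
    if angle < 1e-9:
        return np.eye(3) + skew(phi)
    axis = phi / angle
    K = skew(axis)
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def propagate(r, v, R, b_f, b_w, f_meas, w_meas, g, dt):
    """One Euler-forward step of the robocentric propagation, eqs. (2)-(4).
    r and v are expressed in the IMU frame B; R maps B to the world frame I."""
    f_hat = f_meas - b_f                                       # eq. (11), noise-free
    w_hat = w_meas - b_w                                       # eq. (12), noise-free
    r_new = r + dt * (-np.cross(w_hat, r) + v)                 # eq. (2)
    v_new = v + dt * (-np.cross(w_hat, v) + f_hat + R.T @ g)   # eq. (3), q^-1(g) = R^T g
    R_new = R @ so3_exp(dt * w_hat)                            # eq. (4) via the boxplus map
    return r_new, v_new, R_new
```

For a resting IMU whose accelerometer exactly cancels gravity, the state remains unchanged, which is a quick sanity check of the signs in eqs. (2) and (3).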
C. Filter Update

For every captured image we perform a state update. We assume that we know the intrinsic calibration of the camera and can therefore compute the projection of a bearing vector μ to the corresponding pixel coordinate, p = π(μ). As will be described in section III-B, we derive a 2D linear constraint, b̄_i(π(μ_i)), for each feature i which is predicted to be visible in the current frame with bearing vector μ_i. This linear constraint encompasses the intensity errors associated with a specific feature and can be directly employed as innovation term within the Kalman update (affected by additive discrete Gaussian pixel intensity noise n_i):

y_i = b̄_i(π(μ_i)) + n_i,  (15)

together with the Jacobian

H_i = Ā_i(π(μ_i)) · (dπ/dμ)(μ_i).  (16)

By stacking the above terms for all visible features we can directly perform a standard EKF update. However, if the initial guess for a certain bearing vector μ_i has a large uncertainty, the update will potentially fail. This typically occurs if features get newly initialized and exhibit a large distance uncertainty. In order to avoid this issue we improve the initial guess for a bearing vector with large uncertainty by performing a patch based search of the feature (section III-B). This basically improves the linearization point of the EKF by using the bearing vector obtained from the patch search for evaluating the terms in eqs. (15) and (16). Please note that the EKF update equations have to be slightly adapted in order to account for the altered linearization point. A similar alternative would be to directly employ an iterated EKF.

In order to account for moving objects or other disturbances, a simple Mahalanobis based outlier detection is implemented within the update step. It compares the obtained innovation with the predicted innovation covariance and rejects the measurement whenever the weighted norm exceeds a certain threshold. This method inherently takes into account the covariance of the state and measurements. For instance, it also considers the image gradients and thereby tends to reject gradient-less image patches more easily.

III. MULTILEVEL PATCH FEATURE HANDLING

Along the lines of other visual-inertial EKF approaches ([14], [12]), we fully integrate the visual features into the state of the Kalman filter (see also section II-A). Within the prediction step the new locations of the multilevel patch features are estimated by considering the IMU-driven motion model (eq. (9)). Especially if the calibration of the extrinsics and the feature distance parameters have converged, this yields high quality predictions for the feature locations. Additionally, the covariance of the predicted pixel locations can be easily computed, and the computational effort of a possible pre-alignment strategy can be adapted accordingly. The subsequent update step computes an innovation term by evaluating the discrepancy between the projection of the multilevel patch into the image frame and the image itself. Considering the cross-correlations between the states, the EKF spreads the resulting corrections throughout the filter state. In the following, the different steps and algorithms involved in the feature handling are discussed in more detail. The overall workflow for a single feature is depicted in fig. 1.

Fig. 1. Overview of the workflow of a feature in the filter (prediction, pre-alignment, update, patch extraction and warping, and feature deletion based on quality statistics). The heuristics for adding and removing features are adapted to the total number of possible features.

A. Structure and Warping

For a given image pyramid (factor 2 downsampling) and a given bearing vector μ, a multilevel patch is obtained by extracting constant size (here 8×8 pixels) patches, P_l, for each image level l at the corresponding pixel coordinate p = π(μ). The advantage is that tracking such features is robust against bad initial guesses and image blur. Furthermore, such patch features allow a direct intensity error feedback into the filter. In comparison to reprojection error based algorithms, this allows to formulate a more accurate error model which inherently takes into account the texture of the tracked image patch. For instance, it also enables the use of edge features, whereby the gained information would be along the perpendicular to the edge.

By tracking two additional bearing vectors within the patch, we can compute an affine warping matrix W ∈ R^{2×2} in order to account for the local distortion of the patches between subsequent images. We assume that the distance of the feature is large w.r.t. the size of the patch and can thus choose the normal of the patches to point towards the center of the camera. Also, when a feature was successfully tracked within a frame, the multilevel patch is re-extracted in order to avoid the accumulation of errors.
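As an illustration of the multilevel patch structure described above, the following NumPy sketch builds a factor-2 image pyramid and extracts constant-size 8×8 patches at every level. It is a simplification of the actual implementation (integer pixel coordinates instead of sub-pixel interpolation, and no affine warping), and all names are assumptions.

```python
import numpy as np

def build_pyramid(img, levels=4):
    # image pyramid with factor-2 downsampling (2x2 block averaging)
    pyr = [img.astype(float)]
    for _ in range(1, levels):
        a = pyr[-1]
        h, w = a.shape[0] // 2 * 2, a.shape[1] // 2 * 2
        pyr.append(0.25 * (a[0:h:2, 0:w:2] + a[1:h:2, 0:w:2]
                           + a[0:h:2, 1:w:2] + a[1:h:2, 1:w:2]))
    return pyr

def extract_multilevel_patch(pyr, p, size=8):
    """Extract a constant-size patch P_l at every level l around the pixel
    coordinate p (given at level 0); coordinates scale with s_l = 0.5**l."""
    half = size // 2
    patches = []
    for l, img in enumerate(pyr):
        x = int(round(p[0] * 0.5 ** l))
        y = int(round(p[1] * 0.5 ** l))
        patches.append(img[y - half:y + half, x - half:x + half].copy())
    return patches
```

Each level covers twice the image area of the level below it with the same number of pixels, which is what makes the coarse levels robust to blur and bad initial guesses.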
B. Alignment Equations and QR-Decomposition

Throughout the framework we make use of intensity errors in order to pre-align features or update the filter state. For a given image pyramid with images I_l and a given multilevel patch feature (with coordinates p and patches P_l), the following intensity errors can be evaluated for image level l and patch pixel j:

e_{l,j}(p) = P_l(p_j) − I_l(p s_l + p_j),  (17)

where the scalar s_l = 0.5^l accounts for the downsampling between the images of the image pyramid. Furthermore, by subtracting the mean intensity error m̄ we can account for inter-frame illumination changes.

For a regular patch alignment, the squared error terms of eq. (17) can be summed over all image levels and patch pixels and combined into a single Gauss-Newton optimization in order to find the optimal patch coordinates. However, the direct use of such a large number of error terms within an EKF would make it computationally intractable. In order to tackle this issue we apply a QR-decomposition on the linear equation system resulting from stacking all error terms in eq. (17) for given estimated coordinates p̂:

b(p̂) = A(p̂) δp,  (18)

where A(p̂) can be computed based on the patch intensity gradients. Independent of the rank of the matrix A(p̂), a QR-decomposition can be used to obtain an equivalent reduced linear equation system

b̄(p̂) = Ā(p̂) δp,  (19)

with Ā(p̂) ∈ R^{2×2} and b̄(p̂) ∈ R^2. Since we assume that the additive noise magnitude on the intensities is equal for every patch pixel, we can leave it out of the above derivations (it will remain constant for every entry).

One interesting remark is that, due to the scaling factor s_l in eq. (17), error terms for higher image levels will have a weaker corrective influence on the filter state or the patch alignment. On the other hand, their increased robustness w.r.t. image blur or bad initial alignment strongly increases the robustness of the overall alignment method for multilevel patch features.

C. Feature Detection and Removal

The detection of new features is based on a standard FAST corner detector which provides a large amount of candidate feature locations. After removing candidates which are close to currently tracked features, we compute an adapted Shi-Tomasi score for selecting the new features which will be added to the state. The adapted Shi-Tomasi score basically considers the combined Hessian on multiple image levels instead of only a single level. It directly approximates the Hessian with H = A^T(p) A(p), using the gradient matrix from above, and extracts the minimal eigenvalue. The advantage is that a high score is directly correlated with the alignment accuracy of the corresponding multilevel patch feature. Instead of returning the minimal eigenvalue, the method can return other eigenvalue based scores like the 1- or 2-norm. This could be useful in environments with scarce corner features, whereby the presented filter could be complemented by available edge-shaped features. Finally, the detection process is also coupled to a bucketing technique in order to achieve a good distribution of the features within the image frame.

Due to the fact that we can only track a limited number of features, we require a landmark management system to ensure that only reliable landmarks are inserted and kept in the filter state. Here, we fall back to heuristic methods, where we compute quality scores in order to decide whether a feature should be kept or not. The overall idea is to evaluate a local (only last few frames) and a global (how good was the feature tracked since it has been detected) quality score and remove the features below a certain threshold. Using an adaptive threshold we can control the total amount of features which are currently in the frame.
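The error-term reduction of section III-B can be sketched as a hypothetical NumPy routine that compresses the stacked intensity-error system of eq. (18) into the equivalent two-dimensional system of eq. (19) via a QR-decomposition. This is only a sketch under the equal-pixel-noise assumption stated above; the names are not from the actual implementation.

```python
import numpy as np

def reduce_errors(A, b):
    """Reduce a stacked n-dimensional intensity-error system b = A @ dp
    (n >> 2) to an equivalent 2-dimensional system, cf. eqs. (18)-(19)."""
    Q, R = np.linalg.qr(A)   # thin QR: A = Q @ R with Q: n x 2, R: 2 x 2
    A_red = R                # reduced Jacobian, 2 x 2
    b_red = Q.T @ b          # reduced error vector, length 2
    return A_red, b_red
```

Because the columns of Q are orthonormal, both systems yield the same least-squares increment δp, which is why the reduced pair (Ā, b̄) can stand in for the full stack of pixel errors in the Kalman update.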
The rate of the IMU the additive noise magnitude on the intensities is equal for measurements is 200 Hz and the image frame rate is 20Hz every patch pixel we can leave it out of the above derivations The employed IMU is an industrialgrade ADiS 16448, with (it will remain constant for every entry an angular random walk of 0.66 deg/VHz and a velocity One interesting remark is, that due to the scaling factor st random walk of 0.11 m/s/Hz. The maximal number of in eq.(17), error terms for higher image levels will a have features in the state is set to 50 and the algorithm is run using eaker corrective influence on the filter state or the patch image pyramids with 4 levels. Whenever possible, covariance alignment On the other hand, their increased robustness w.r. t. parameters are selected based on hardware specifications image blur or bad initial alignment strongly increases the Strong tuning was not necessary, and the franework works robustness of the overall alignment method for multilevel well for a large range of parameters. The initial TMUcamera patch features extrinsic are only roughly guessed(the translation is set to zero and the initial inverse distance parameter for a feature is set to 0.5 mI with a standard deviation of lm1.A Relative Error Plot 10 features screenshot of the running framework is depicted in fig. 2 atures 30 featuree 40 features reference batch framewor Traveled distarce] Fig. 3. Gray lincs arc thc rclativc crrors of the prcscntcd approach, whcrc the darkest lines corresponds to 50 features and the brightest line to 10 features respectively. The dashed green line represents the performance of Fig. 2. Screenshot of the running visualinertial odometry framework. The the reference batch optimization framework. 2 uncertainty ellipses of the predicted feature locations are in yellow, whereby only features which are newly initialized (stretched ellipses) and TABLE I fcaturcs which rccntcr thc framc havc a significant uncertainty. 
Grccn points TIMINGS OF PRESENTED APPROACH PER PROCESSED IMAGE are the locations after the update step. Green numbers are the tracking counts (I for newly initialized features ). In the top left a virtual horizon is depicted Tot. Fcaturcs 10 30 Timing[ms」66510.5014.8721.4829.72 B. Experiment with Slow Motions An experiment with slow to medium fast handheld mo tions of about l min was carried out to evaluate the per of an Intel 172760QM. The setup with 50 features uses an formance of the framework with different numbers of total average processing time of 29.72 ms per processed image features(from 10 to 50 in steps of 10). The performance and can thus easily be run was assessed by computing the relative position error w.r.t the traveled distance [9]. Furthermore we compared the C. Experiment with fast Motions obtained results to a batch optimization framework along Here, we evaluate the robustness of the proposed approach the lines of [16]. figure 3 depicts the extracted relative w r t very fast motions. We recorded a handheld dataset with error values. The achieved performance tends to be similar mean rotational rate of around 3.5 rad/s and with peaks of to the one of the batch optimization framework and often up to 8 rad/s. The motion capture system exhibited a relative achieves slightly higher accuracy. While these results de high number of bad tracking, whereby we filtered them out pend on the specific dataset and parameter tuning, we also as good as possible. We investigate the tracking performance have to mention that the relatively high rotational motion of the attitude and of the robocentric velocities, where (average of around 1. 5 rad/s)favors approaches which can the corresponding estimates with obounds are plotted in handle arbitrarily short feature tracks. Given the undelayed figs. 4 and 5 respectively. 
It can clearly be seen that the initialization of feature within our approach, the resulting estimates nicely fit the ground truth data from the motion filter is able to extract visual information from a feature's capture. As known from previous work the inclination angles second obseryation onwards and the robocentric velocities of visualinertial setups are Surprisingly, the performance was relatively independent fully observable [17], and we can nicely observe the initial of the total amount of tracked features. A significant drop decrease of the corresponding covariance(especially when in accuracy could only be observed with feature counts the system gets excited). On the other hand the yaw angle below 20. This observation can have different reasons. One is unobservable and drifts slowly with time could be the type of sensor motions with relatively high Figures 6 and 7 depict the estimation of the calibration rotational rates, which can lead to more bad features or parameters. Again, the estimates together with their 30 outliers. Another point is also that our approach considers bounds are plotted. Depending on the excitation of the 256=4x8x8 intensity errors per tracked features and system the estimated values converge relatively quickly. It hus we cannot directly compare to standard feature tracking can be observed, that the translational term of the IMU based visual odometry frameworks, which typically require camera calibration requires a lot of rotational motion in order nuch higher feature counts. More indepth evaluation of to converge appropriately. For the presented experiment, the this effect will be part of future work. The timings of the accelerometer bias exhibits the worse convergence rate but proposed framework are listed in table I for a single core is still within a reasonable range Furthermore, we also observed a divergence mode for the IMU Bias presented approach. It can occur when the velocity estimate diverge, e. 
g, due to missing motion or too many outliers. The 0D4 problem is then, that the filter attempts to minimize the effect of the erroneous velocity on the bearing vectors by setting the distance of the features to infinity. This again lowers any corrective effect on the diverging velocity resulting in further divergence. All in all this was very rarely observed for regular usage, especially if the system was properly excited 一AM at the start 日 RollPitchYaw 0.05Exxs红 20 Fig. 6. Estimatcd IMU biases. Top: gyroscope bias(red: x, blu celerometer bias (red: x, blue: green: z). The gyroscope biases exhibit a better convergence than the accelerometer biases, probably due to the more direct link of rotational rates to visual erro IMUCan 日 tlal Euler angle estimates. Red: estimate, blue: motion capture, red dashed: 30bound. Only the yaw angle is not observable and exhibits a growing covariance. The inclination angles (roll and pitch) exhibit a high quality tracking accuracy 时时 Fig. 7. Estimated IMUcamera extrinsic. Top: translation (red: X, blue: y, green:z), bottom: orientation(red: yaw, blue: pitch, green: roll). Especially when sufficiently excited, the estimates converge quickly. The reached g一减M个MM种侧 respond approximately to the ones obtained from an ofFline calibration reliminary experiments on a real robot. the special aspect here is that the visualinertial odometry framework g计体减WW减购 initialized on the ground without any previous calibration motions,i.e. the calibration parameters had to converge during takeoff. The output of the filter was directly used for feedback control of the uav fi d Igure 8 depl ts the estimated g. 5. Velocity estimates. Red: estimate, blue: motion capture, red dashed position output of the framework during takeoff, flying and 30bound. The robocentric velocity is fully observable and thus exhibits a landing. If compared to the motion capture system the filter bounded uncertainty. 
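One conceivable safeguard against the divergence mode described above is the zero-velocity pseudo-measurement mentioned later in the conclusion. The following hypothetical NumPy sketch shows such an update for a generic EKF state; it is not part of the presented framework, and the function name, state layout, and measurement noise level are all assumptions.

```python
import numpy as np

def zero_velocity_update(x, P, vel_idx, meas_std=0.1):
    """Hypothetical zero-velocity pseudo-measurement: a standard Kalman
    update with measurement z = 0 on the velocity sub-state, applied when a
    heuristic has detected the divergence mode (e.g. missing motion)."""
    H = np.zeros((3, len(x)))
    H[np.arange(3), vel_idx] = 1.0          # selects the velocity entries
    S = H @ P @ H.T + meas_std**2 * np.eye(3)   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
    innovation = -x[vel_idx]                    # z - H x with z = 0
    x_new = x + K @ innovation
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

Applied with a large enough measurement noise, the update merely damps a runaway velocity estimate instead of forcing it to exactly zero, which is in the spirit of a stabilizing heuristic rather than a hard constraint.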
D. Flying Experiments

Implementing the framework onboard a UAV with a forward oriented visual-inertial sensor, we also performed preliminary experiments on a real robot. The special aspect here is that the visual-inertial odometry framework was initialized on the ground without any previous calibration motions, i.e., the calibration parameters had to converge during takeoff. The output of the filter was directly used for feedback control of the UAV. Figure 8 depicts the estimated position output of the framework during takeoff, flying, and landing. If compared to the motion capture system, the filter exhibits a certain offset which can be mainly attributed to the online calibration of the filter.

Fig. 8. Estimated trajectory (red) onboard a UAV compared to ground truth (blue) from the motion capture system. During takeoff, flying, and landing the output of the filter is used to stabilize and control the UAV. Calibration is performed online.

V. CONCLUSION

In this paper we presented a visual-inertial filtering framework which uses direct intensity errors as visual measurements within the extended Kalman filter update. By choosing a fully robocentric representation of the filter state together with a numerically minimal bearing/distance representation of the features, we avoid major consistency problems while exhibiting accurate tracking performance and high robustness. Especially in difficult situations with very fast motions or outliers, the presented approach manages to keep track of the state with only minor drift of the yaw and position estimates. The framework can be run onboard a UAV with a feature count of 50 at a framerate of 20 Hz and was used to stabilize the flight of a UAV from takeoff to landing.

Future work will include a more extensive evaluation of the multilevel patch features in the context of intensity error based visual-inertial odometry frameworks. Furthermore, we would also like to try to extend the online calibration in order to include the camera intrinsics. Also, the framework could be relatively easily adapted in order to handle multiple cameras. This could improve the filter performance, especially for cases with a lack of translational motion. Another option to avoid divergence would be to use some heuristics based methods in order to detect such modes and to add zero-velocity pseudo-measurements in order to stabilize the filter. A detailed observability analysis could also be performed, where the dependency of unobservable modes w.r.t. the sensor motions would be of high interest.

REFERENCES

[1] C. Audras, A. I. Comport, M. Meilland, and P. Rives, "Real-time dense appearance-based SLAM for RGB-D sensors," in Australasian Conf. on Robotics and Automation, 2011.
[2] M. Bloesch, S. Omari, and A. Jaeger, "ROVIO," 2015. [Online]. Available: https://github.com/ethz-asl/rovio
[3] J. A. Castellanos, J. Neira, and J. D. Tardos, "Limits to the consistency of EKF-based SLAM," in 5th IFAC Symp. on Intelligent Autonomous Vehicles, 2004.
[4] J. Civera, O. G. Grasa, A. J. Davison, and J. M. M. Montiel, "1-Point RANSAC for EKF-based Structure from Motion," in IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, 2009.
[5] A. J. Davison, "Real-Time Simultaneous Localisation and Mapping with a Single Camera," in IEEE Int. Conference on Computer Vision, 2003.
[6] J. Engel, T. Schoeps, and D. Cremers, "LSD-SLAM: Large-Scale Direct Monocular SLAM," in European Conf. on Computer Vision, 2014.
[7] C. Forster, M. Pizzoli, and D. Scaramuzza, "SVO: Fast Semi-Direct Monocular Visual Odometry," in IEEE Int. Conf. on Robotics and Automation, 2014.
[8] S. Gehrig, F. Eberli, and T. Meyer, "A Real-Time Low-Power Stereo Vision Engine Using Semi-Global Matching," Computer Vision Systems, vol. 5815, pp. 134-143, 2009.
[9] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, "Vision meets Robotics: The KITTI Dataset," The International Journal of Robotics Research, vol. 32, no. 11, pp. 1231-1237, 2013.
[10] C. Hertzberg, R. Wagner, U. Frese, and L. Schroeder, "Integrating generic sensor fusion algorithms with sound state representations through encapsulation of manifolds," Information Fusion, vol. 14, no. 1, pp. 57-77, 2013.
[11] G. P. Huang, A. I. Mourikis, and S. I. Roumeliotis, "Analysis and improvement of the consistency of extended Kalman filter based SLAM," in IEEE Int. Conf. on Robotics and Automation, May 2008.
[12] E. S. Jones and S. Soatto, "Visual-inertial navigation, mapping and localization: A scalable real-time causal approach," The International Journal of Robotics Research, vol. 30, no. 4, pp. 407-430, 2011.
[13] S. J. Julier and J. K. Uhlmann, "A counter example to the theory of simultaneous localization and map building," in IEEE Int. Conf. on Robotics and Automation, May 2001.
[14] J. Kelly and G. S. Sukhatme, "Visual-Inertial Sensor Fusion: Localization, Mapping and Sensor-to-Sensor Self-calibration," Int. Journal of Robotics Research, vol. 30, no. 1, pp. 56-79, Nov. 2011.
[15] C. Kerl, J. Sturm, and D. Cremers, "Robust odometry estimation for RGB-D cameras," in IEEE Int. Conf. on Robotics and Automation, 2013.
[16] S. Leutenegger, P. Furgale, V. Rabaud, M. Chli, K. Konolige, and R. Siegwart, "Keyframe-Based Visual-Inertial SLAM using Nonlinear Optimization," in Proceedings of Robotics: Science and Systems, Berlin, Germany, June 2013.
[17] M. Li and A. I. Mourikis, "High-precision, consistent EKF-based visual-inertial odometry," The International Journal of Robotics Research, vol. 32, no. 6, pp. 690-711, 2013.
[18] J. Montiel, J. Civera, and A. Davison, "Unified Inverse Depth Parametrization for Monocular SLAM," in Proceedings of Robotics: Science and Systems, Philadelphia, USA, Aug. 2006.
[19] A. I. Mourikis and S. I. Roumeliotis, "A Multi-State Constraint Kalman Filter for Vision-aided Inertial Navigation," in IEEE Int. Conf. on Robotics and Automation, 2007.
[20] J. Nikolic, J. Rehder, M. Burri, P. Gohl, S. Leutenegger, P. T. Furgale, and R. Siegwart, "A Synchronized Visual-Inertial Sensor System with FPGA Pre-Processing for Accurate Real-Time SLAM," in IEEE Int. Conf. on Robotics and Automation, 2014.
[21] S. Shen, Y. Mulgaonkar, N. Michael, and V. Kumar, "Initialization-free monocular visual-inertial estimation with application to autonomous MAVs," in International Symposium on Experimental Robotics, 2014.
[22] J. Sola, T. Vidal-Calleja, J. Civera, and J. M. M. Montiel, "Impact of landmark parametrization on monocular EKF-SLAM with points and lines," International Journal of Computer Vision, vol. 97, pp. 339-368, 2012.
[23] A. Stelzer, H. Hirschmueller, and M. Goerner, "Stereo-vision-based navigation of a six-legged walking robot in unknown rough terrain," The International Journal of Robotics Research, vol. 31, no. 4, pp. 381-402, Feb. 2012.
[24] S. Weiss, M. Achtelik, S. Lynen, L. Kneip, M. Chli, and R. Siegwart, "Monocular Vision for Long-term Micro Aerial Vehicle State Estimation: A Compendium," Journal of Field Robotics, vol. 30, no. 5, pp. 803-831, 2013.
[25] D. Wooden, M. Malchano, K. Blankespoor, A. Howard, A. A. Rizzi, and M. Raibert, "Autonomous navigation for BigDog," in IEEE Int. Conf. on Robotics and Automation, 2010.