Keyframe-Based Visual-Inertial Odometry Using Nonlinear Optimization

Combining visual and inertial measurements has become popular in mobile robotics, since the two sensing modalities offer complementary characteristics that make them the ideal choice for accurate visual-inertial odometry or Simultaneous Localization and Mapping (SLAM). While historically the problem has been addressed with filtering, recent results suggest that nonlinear optimization can provide superior accuracy.
Our framework has been extended to be used with a monocular camera setup. We present the necessary adaptations concerning the estimation and bootstrapping parts. The monocular version was needed for a fair comparison with the reference implementation of the MSCKF algorithm, which is currently only published in monocular form. The result is a generic N-camera (N ≥ 1) visual-inertial odometry framework. In the stereo version, the performance gradually transforms into the monocular case as the ratio between camera baseline and distance to structure becomes small.

We present the formulation for online camera extrinsics estimation that may be applied after standard intrinsics calibration. Evaluation results demonstrate the applicability of this method when initializing with inaccurate camera pose estimates with respect to the IMU. We make an honest attempt to present our work at a level of detail that would allow the reader to re-implement it.

Various new datasets featuring individual characteristics in terms of motion, appearance, and scene depth were recorded with our new hardware iteration, ranging from handheld indoor motion to bicycle riding. The comprehensive evaluation demonstrates superior performance compared to our previously published results, owing to the better calibration and hardware synchronization available, as well as to algorithmic and software-level adaptations.

The remainder of this work is structured as follows: in Section II we provide a more detailed overview of how our work relates to existing literature and differentiates itself. Section III introduces the notation and definitions used throughout this article. The nonlinear error terms from camera and IMU measurements are described in depth in Section IV, which is then followed by an overview of front-end processing and initialization in Section V. As the last key element of the method, Section VI introduces how the keyframe concept is applied by marginalization.
Section VII describes the experimental setup and evaluation scheme, and presents extensive results on the different datasets.

II. RELATED WORK

The vision-only algorithms which form the foundation for today's VINS can be categorized into batch Structure-from-Motion (SfM) and filtering-based methods. Due to computational constraints, for a long time, vision-based real-time odometry or SLAM algorithms such as those presented in Davison (2003) were only possible using a filtering approach. Subsequent research (Strasdat et al., 2010), however, has shown that nonlinear-optimization-based approaches, as commonly used for offline SfM, can provide better accuracy for similar computational work when compared to filtering approaches, given that the structural sparsity of the problem is preserved. Henceforth, it has been popular to maintain a relatively sparse graph of keyframes and their associated landmarks subject to nonlinear optimization (Klein and Murray, 2007).

The earliest results in VINS originate from the work of Jung and Taylor (2001) for (spline-based) batch approaches and of Chai et al. (2002); Roumeliotis et al. (2002) for filtering-based approaches. Subsequently, a variety of filtering-based approaches have been published based on EKFs (Kim and Sukkarieh, 2007; Mourikis and Roumeliotis, 2007; Li and Mourikis, 2012a; Weiss et al., 2012; Lynen et al., 2013), Iterated EKFs (IEKFs) (Strelow and Singh, 2003, 2004), and Unscented Kalman Filters (UKFs) (Shin and El-Sheimy, 2004; Ebcin and Veth, 2007; Kelly and Sukhatme, 2011), to name a few, which over the years showed an impressive improvement in precision and a reduction in computational complexity. Today such 6-DoF visual-inertial estimation systems can be run online on consumer mobile devices (Li and Mourikis, 2012c; Li et al., 2013).

In order to limit computational complexity, many works follow the loosely coupled approach.
Konolige et al. (2011) integrate IMU measurements as independent inclinometer and relative yaw measurements into an optimization problem using stereo vision measurements. In contrast, Weiss et al. (2012) use vision-only pose estimates as updates to an EKF with indirect IMU propagation. Similar approaches can be followed for loosely coupled batch-based algorithms such as in Ranganathan et al. (2007) and Indelman et al. (2012), where relative stereo pose estimates are integrated into a factor graph with nonlinear optimization including inertial terms and absolute GPS measurements. It is well known that loosely coupled approaches are inherently suboptimal, since they disregard correlations amongst internal states of different sensors.

A notable contribution in the area of filtering-based VINS is the work of Mourikis and Roumeliotis (2007), who proposed an EKF-based real-time fusion using monocular vision, called the Multi-State Constraint Kalman Filter (MSCKF), which performs nonlinear triangulation of landmarks from a set of camera poses over time before using them in the EKF update. This contrasts with other works that only use visual constraints between pairwise camera poses (Bayard and Brugarolas, 2005). Mourikis and Roumeliotis (2007) also show how the correlations between the errors of the landmarks and the camera locations, which are introduced by using the estimated camera poses for triangulation, can be eliminated, resulting in an estimator which is consistent and optimal up to linearization errors. Another monocular visual-inertial filter was proposed by Jones and Soatto (2011), presenting results on a long outdoor trajectory including IMU-to-camera calibration and loop closure.
Li and Mourikis (2013) showed that further increases in the performance of the MSCKF are attainable by switching between the landmark processing model, as used in the MSCKF, and the full estimation of landmarks, as employed by EKF-SLAM. Further improvements and extensions to both loosely and tightly coupled filtering-based approaches include an alternative rotation parameterization (Li and Mourikis, 2012b), inclusion of rolling-shutter cameras (Jia and Evans, 2012; Li et al., 2013), and offline (Lobo and Dias, 2007; Mirzaei and Roumeliotis, 2007, 2008) and online (Weiss et al., 2012; Kelly and Sukhatme, 2011; Jones and Soatto, 2011; Dong-Si and Mourikis, 2012) calibration of the relative position and orientation of camera and IMU.

In order to benefit from the increased accuracy offered by relinearization in batch optimization, recent work has focused on approximating the batch problem in order to allow real-time operation. Approaches to keep the problem tractable for online estimation can be separated into three groups (Nerurkar et al., 2013). Firstly, incremental approaches, such as the factor-graph-based algorithms by Kaess et al. (2012); Bryson et al. (2009), apply incremental updates to the problem while factorizing the associated information matrix of the optimization problem or the measurement Jacobian into square-root form (Bryson et al., 2009; Indelman et al., 2012). Secondly, fixed-lag smoother or sliding-window filter approaches (Dong-Si and Mourikis, 2011; Sibley et al., 2010; Huang et al., 2011) consider only poses from a fixed time interval in the optimization. Poses and landmarks which fall outside the window are marginalized, with their corresponding measurements being dropped. Forming nonlinear constraints between different optimization parameters in the marginalization step, however, destroys the sparsity of the problem, such that the window size has to be kept fairly small for real-time performance. The smaller the window, however, the smaller the benefit of repeated relinearization.
Thirdly, keyframe-based approaches preserve sparsity by maintaining only a subset of camera poses and landmarks and discarding (rather than marginalizing) intermediate quantities. Nerurkar et al. (2013) present an efficient offline MAP algorithm which uses all information from non-keyframes and landmarks to form constraints between keyframes by marginalizing a set of frames and landmarks without impacting the sparsity of the problem. While this form of marginalization shows small errors when compared to the full batch MAP estimator, we target a version with a fixed window size suitable for online and real-time operation. In this article and our previous work (Leutenegger et al., 2013), we therefore drop measurements from non-keyframes and marginalize the respective states. When keyframes drop out of the window over time, we marginalize the respective states and some commonly observed landmarks to form a (linear) prior for the remaining subpart of the optimization problem. Our approximation scheme strictly keeps the sparsity of the original problem. This is in contrast to, e.g., Sibley et al. (2010), who accept some loss of sparsity due to marginalization. The latter sliding-window filter, in a visual-inertial variant, is used for comparison in Li and Mourikis (2012a): it proves to perform better than the original MSCKF, but, interestingly, an improved MSCKF variant using first-estimate Jacobians yields even better results. We aim at performing similar comparisons between an MSCKF implementation that includes the use of first-estimate Jacobians and our keyframe- as well as optimization-based algorithm.

Apart from the differentiation between batch and filtering approaches, it has been a major interest to increase the estimation accuracy by studying the observability properties of VINS.
There is substantial work on the observability properties given a particular combination of sensors or measurements (Martinelli, 2011; Weiss, 2012) or only using data from a reduced set of IMU axes (Martinelli, 2014). Global unobservability of yaw and position, as well as growing uncertainty with respect to an initial pose of reference, are intrinsic to the visual-inertial estimation problem (Hesch et al., 2012b; Huang et al., 2013; Hesch et al., 2013). This property is therefore of particular interest when comparing filtering approaches to batch algorithms: the representation of pose and its uncertainty in a global frame of reference usually becomes numerically problematic, as the uncertainty for parts of the state undergoes unbounded growth while remaining low for the observable subparts of the state. Our batch approach therefore uses a formulation of relative uncertainty of keyframes to avoid expressing global uncertainty.

Unobservability of the VINS problem poses a particular challenge to filtering approaches, where repeated linearization is typically not possible: Huang et al. (2009) have shown that these linearization errors may erroneously render parts of the estimated state numerically observable. Hesch et al. (2012a) and others (Huang et al., 2011; Kottas et al., 2012; Hesch et al., 2012b, 2013; Huang et al., 2013) derived formulations allowing to choose the linearization points of the VINS system such that the observability properties of the linearized and nonlinear systems are equal. In our proposed algorithm, we employ first-estimate Jacobians, i.e. whenever linearization of a variable is employed, we fix the linearization point for any subsequent linearization involving that particular variable.

III. NOTATION AND DEFINITIONS

A. Notation

We employ the following notation throughout this work: F_A denotes a reference frame A. A point P represented in frame F_A is written as position vector _A r_P, or as _A r̃_P when in homogeneous coordinates.
A transformation between frames is represented by a homogeneous transformation matrix T_AB that transforms the coordinate representation of homogeneous points from F_B to F_A. Its rotation matrix part is written as C_AB; the corresponding quaternion is written as q_AB = [ε^T, η]^T, with ε and η representing the imaginary and real parts. We adopt the notation introduced in Barfoot et al. (2011): concerning the quaternion multiplication q_AC = q_AB ⊗ q_BC, we introduce a left-hand side compound operator [·]^+ and a right-hand side operator [·]^⊕ that output matrices such that q_AC = [q_AB]^+ q_BC = [q_BC]^⊕ q_AB. Taking velocity as an example of a physical quantity represented in frame F_A that relates frames F_B and F_C, we write _A v_BC, i.e. the velocity of frame F_C with respect to F_B, expressed in F_A.

B. Frames

The performance of the proposed method is evaluated using an IMU and camera setup schematically depicted in Figure 2. It is used both in monocular and stereo mode, where we want to emphasize that our methodology is generic enough to handle an N-camera setup. Inside the tracked body that is represented relative to an inertial frame, F_W, we distinguish the camera frames, F_Ci (subscripted with i = 1, ..., N), and the IMU sensor frame, F_S.

Fig. 2. Coordinate frames involved in the hardware setup used: two cameras are placed as a stereo setup with respective frames F_Ci, i ∈ {1, 2}. IMU data is acquired in F_S. The algorithms estimate the position and orientation of F_S with respect to the world (inertial) frame F_W.

C. States

The variables to be estimated comprise the robot states at the image times (index k), x_R, and landmarks, x_L. x_R holds the robot position in the inertial frame, _W r_S, the body orientation quaternion, q_WS, the velocity expressed in the sensor frame, _S v_WS (written in short as _S v), as well as the biases of the gyroscopes, b_g, and the biases of the accelerometers, b_a.
Thus, x_R is written as:

x_R = [_W r_S^T, q_WS^T, _S v^T, b_g^T, b_a^T]^T ∈ R^3 × S^3 × R^9.  (1)

Furthermore, we use a partition into the pose states x_T := [_W r_S^T, q_WS^T]^T and the speed/bias states x_sb := [_S v^T, b_g^T, b_a^T]^T. The j-th landmark is represented in homogeneous (World) coordinates: x_L^j := _W l^j ∈ R^4. At this point, we set the fourth component to one.

Optionally, we may include camera extrinsics estimation as part of an online calibration process. The camera extrinsics, denoted x_C^i := [_S r_Ci^T, q_SCi^T]^T, can either be treated as constant entities to be calibrated or as time-varying states subjected to a first-order Gaussian process, allowing to track changes that may occur e.g. due to temperature-induced mechanical deformation of the sensor assembly.

In general, the states live on a manifold; therefore we use a perturbation δx in tangent space g and employ the group operator ⊞ (not commutative in general), the exponential exp, and the logarithm log. Now we can define the perturbation δx around the estimate x̄, i.e. x = x̄ ⊞ δx. We use a minimal coordinate representation δχ ∈ R^dim(g); a bijective mapping Φ transforms from minimal coordinates to tangent space. Thus we obtain the transformations from and to minimal coordinates:

δx = exp(Φ(δχ)),  (2)
δχ = Φ^(-1)(log(δx)).  (3)

Concretely, we use the minimal (3D) axis-angle perturbation of orientation, δα ∈ R^3, which can be converted into its quaternion equivalent δq via the exponential map:

δq := exp([δα^T/2, 0]^T) = [sinc(‖δα‖/2) δα^T/2, cos(‖δα‖/2)]^T.  (4)

Therefore, using the group operator, we write q_WS = δq ⊗ q̄_WS. Note that linearization of the exponential map around δα = 0 yields

δq ≈ [δα^T/2, 1]^T,  (5)

where [0^T, 1]^T denotes the identity quaternion. We obtain the minimal robot error state vector

δx_R = [δp^T, δα^T, δv^T, δb_g^T, δb_a^T]^T ∈ R^15.  (6)

Analogously to the robot state decomposition into x_T and x_sb, we use the pose error state δx_T := [δp^T, δα^T]^T and the speed/bias error state δx_sb := [δv^T, δb_g^T, δb_a^T]^T. As landmark perturbation, we use a simple Euclidean version δβ that is applied as δl := [δβ^T, 0]^T by addition.

IV. BATCH VISUAL SLAM WITH INERTIAL TERMS

In this section, we present our approach of incorporating inertial measurements into batch visual SLAM.
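As a concrete illustration of the orientation perturbation machinery above, the axis-angle exponential map and the multiplicative update q_WS = δq ⊗ q̄_WS can be sketched as follows (a minimal sketch; the function names and the [x, y, z, w] quaternion layout are our choices, not from the paper):

```python
import numpy as np

def delta_q(delta_alpha):
    """Quaternion equivalent of a minimal axis-angle perturbation
    delta_alpha, via the exponential map; layout is [x, y, z, w].
    np.sinc is the normalized sinc, so sinc(h/pi) = sin(h)/h."""
    half = np.linalg.norm(delta_alpha) / 2.0
    return np.concatenate([np.sinc(half / np.pi) * delta_alpha / 2.0,
                           [np.cos(half)]])

def quat_mul(p, q):
    """Hamilton product p ⊗ q, both in [x, y, z, w] convention."""
    px, py, pz, pw = p
    qx, qy, qz, qw = q
    return np.array([pw*qx + px*qw + py*qz - pz*qy,
                     pw*qy - px*qz + py*qw + pz*qx,
                     pw*qz + px*qy - py*qx + pz*qw,
                     pw*qw - px*qx - py*qy - pz*qz])

def boxplus(q_ws_bar, delta_alpha):
    """Apply the orientation perturbation: q_WS = δq ⊗ q̄_WS."""
    return quat_mul(delta_q(delta_alpha), q_ws_bar)
```

Note that delta_q(0) returns the identity quaternion and always has unit norm, consistent with the linearization δq ≈ [δα^T/2, 1]^T for small δα.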
In visual odometry and SLAM, a nonlinear optimization is formulated to find the camera poses and landmark positions by minimizing the reprojection error of landmarks observed in camera frames. Figure 3 shows the respective graph representation, inspired by Thrun and Montemerlo (2006): it displays measurements as edges with square boxes and estimated quantities as round nodes. As soon as inertial measurements are introduced, they not only create temporal constraints between successive poses, but also between successive speed and IMU bias estimates of both accelerometers and gyroscopes, by which the robot state vector is augmented.

Fig. 3. Graphs of the state variables and measurements involved in the visual SLAM problem (left) versus visual-inertial SLAM (right): incorporating inertial measurements introduces temporal constraints and necessitates a state augmentation by the robot speed as well as IMU biases.

We seek to formulate the visual-inertial localization and mapping problem as one joint optimization of a cost function J(x) containing both the (weighted) reprojection errors e_r and the (weighted) temporal error terms from the IMU, e_s:

J(x) := Σ_{i=1..N} Σ_{k=1..K} Σ_{j ∈ J(i,k)} e_r^{i,j,k T} W_r^{i,j,k} e_r^{i,j,k} + Σ_{k=1..K-1} e_s^{k T} W_s^k e_s^k,  (7)

where the first term is visual and the second inertial; i is the camera index of the assembly, k denotes the camera frame index, and j denotes the landmark index. The indices of landmarks visible in the k-th frame and the i-th camera are written as the set J(i, k). Furthermore, W_r^{i,j,k} represents the information matrix of the respective landmark measurement, and W_s^k the information of the k-th IMU error.

Throughout our work, we employ the Google Ceres optimizer (Agarwal et al., n.d.) integrated with our real-time capable C++ software infrastructure.

In the following, we will present the reprojection error formulation. Afterwards, an overview of IMU kinematics combined with bias term modeling is given, upon which we base the IMU error term.
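The joint cost J(x) above is simply a sum of squared error terms, each weighted by its information matrix. A minimal sketch of its evaluation (function and parameter names are ours; in practice this summation is performed inside the optimizer, not by hand):

```python
import numpy as np

def joint_cost(reproj_errors, reproj_infos, imu_errors, imu_infos):
    """Evaluate the joint cost J(x): a sum of squared reprojection
    errors and IMU errors, each weighted by its information matrix W
    (the inverse of the corresponding measurement covariance)."""
    J = 0.0
    for e, W in zip(reproj_errors, reproj_infos):   # visual terms
        J += e @ W @ e
    for e, W in zip(imu_errors, imu_infos):         # inertial terms
        J += e @ W @ e
    return J
```

With identity information matrices this reduces to the plain sum of squared residual norms; larger information (smaller measurement covariance) weights a residual more heavily.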
A. Reprojection Error Formulation

We use a rather standard formulation of the reprojection error, adapted with minor modifications from Furgale (2011):

e_r^{i,j,k} = z^{i,j,k} − h_i(T_CiS T_SW _W l^j),  (8)

where h_i(·) denotes the camera projection model (which may include distortion) and z^{i,j,k} stands for the measured image coordinates. We also provide the Jacobians here, since they are not only needed for efficient solving, but also play a central role in the marginalization step explained in Section VI. With respect to the homogeneous landmark, the Jacobian takes the form

∂e_r^{i,j,k} / ∂ _W l^j = −J_{r,i} T̄_CiS T̄_SW,  (9)

where J_{r,i} denotes the Jacobian matrix of the projection h_i(·) into the i-th camera (including distortion) with respect to a landmark in homogeneous coordinates; the Jacobian with respect to the minimal pose perturbation δx_T^k follows by the chain rule through T_SW. Variables with an overbar represent our current guess. Our framework currently supports radial-tangential as well as equidistant distortion models.

B. IMU Kinematics and Bias Model

Before being able to formulate the nonlinear IMU term, we overview the differential equations that describe IMU kinematics and bias evolution. The model is commonly used in estimation with IMUs, originating from Savage (1998), using similar simplifications for MEMS-IMUs as in Shin and El-Sheimy (2004).

1) Nonlinear Model: Under the assumption that the measured effects of the Earth's rotation are small compared to the gyroscope accuracy, we can write the IMU kinematics combined with simple dynamic bias models as

_W ṙ_S = C_WS _S v,
q̇_WS = (1/2) Ω(_S ω) q_WS,
_S v̇ = _S a + w_a − b_a + C_SW _W g − [_S ω]× _S v,  (10)-(12)
ḃ_g = w_bg,
ḃ_a = −(1/τ) b_a + w_ba,

where the elements of w := [w_a^T, w_g^T, w_bg^T, w_ba^T]^T are each uncorrelated zero-mean Gaussian white noise processes, _S a are the accelerometer measurements, and _W g represents the Earth's gravitational acceleration vector. The gyro bias is modeled as random walk; in contrast, the accelerometer bias is modeled as a bounded random walk with time constant τ > 0. The matrix Ω is formed from the estimated angular rate _S ω̂ = _S ω̃ − b_g, with gyro measurement _S ω̃.  (13)

2) Linearized Model of the Error States: The linearized version of the above equations around x̄_R will play a major role in the marginalization step.
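The noise-free mean propagation implied by the IMU kinematics above can be sketched with a single explicit Euler step (a minimal sketch, assuming a rotation-matrix parameterization of orientation and first-order updates; function names and argument conventions are ours, and the paper uses Runge-Kutta integration rather than Euler):

```python
import numpy as np

def skew(v):
    """Skew-symmetric cross-product matrix [v]x."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def propagate(r_WS, C_WS, v_S, b_g, b_a, omega_meas, a_meas, dt,
              g_W=np.array([0.0, 0.0, -9.81]), tau=3600.0):
    """One Euler step of the noise-free IMU kinematics: position,
    orientation (as rotation matrix C_WS), sensor-frame velocity,
    and the two bias models. tau is the accelerometer-bias time
    constant (an assumed value here)."""
    omega = omega_meas - b_g                       # bias-corrected rate
    a = a_meas - b_a                               # bias-corrected accel
    r_WS_new = r_WS + C_WS @ v_S * dt              # position
    v_S_new = v_S + (a + C_WS.T @ g_W - skew(omega) @ v_S) * dt  # velocity
    C_WS_new = C_WS @ (np.eye(3) + skew(omega) * dt)  # first-order rotation
    b_a_new = b_a - (1.0 / tau) * b_a * dt         # bounded random walk mean
    return r_WS_new, C_WS_new, v_S_new, b_g, b_a_new
```

For a stationary sensor with the accelerometer reading exactly cancelling gravity, the state remains unchanged, as expected.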
We therefore review it here briefly: the error dynamics take the form

δẋ_R ≈ F_c(x_R) δx_R + G(x_R) w,  (14)

where G is straightforward to derive, and F_c(x_R) is obtained by linearizing the kinematics (10)-(12) with respect to the error states (15). Here [·]× denotes the skew-symmetric cross-product matrix associated with a vector, and overbars generally stand for evaluation of the respective symbols with current estimates.

C. Formulation of the IMU Measurement Error Term

Figure 4 illustrates the difference in measurement rates, with camera measurements taken at time steps k and k+1, as well as faster IMU measurements that are not necessarily synchronized with the camera measurements.

Fig. 4. Different rates of IMU and camera: one IMU error term uses all accelerometer and gyro readings between successive camera measurements, along with the respective time increments Δt and a local time index p.

We need the IMU error term e_s^k(x_R^k, x_R^{k+1}, z_s^k) to be a function of the robot states at steps k and k+1 as well as of all the IMU measurements in between these time instances (comprising accelerometer and gyro readings), summarized as z_s^k. Hereby we have to assume an approximate normal conditional probability density f for given robot states at camera measurements k and k+1:

f(e_s^k | x_R^k, x_R^{k+1}) ≈ N(0, R_s^k).  (16)

We employ the propagation equations above to formulate a prediction x̂_R^{k+1}(x_R^k, z_s^k) with associated conditional covariance P(δx̂_R^{k+1} | x_R^k, z_s^k). The respective computation requires numeric integration; as is common in the related literature (Mourikis and Roumeliotis, 2007), we applied the classical Runge-Kutta method in order to obtain discrete-time nonlinear state transition equations f_d(x_R) and the error state transition matrix F_d(x̄_R).
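The discrete-time propagation between two camera frames can be sketched as follows: the error-state covariance is pushed through every IMU sample while the per-sample transition matrices F_d are accumulated into one product, which is the same product that appears in the chain-rule Jacobian of the IMU error term. This is a minimal sketch (function names are ours; the actual F_d and G_d blocks come from linearizing the kinematics):

```python
import numpy as np

def predict_between_frames(P0, F_blocks, G_blocks, Q):
    """Propagate the error-state covariance across all IMU samples
    between camera frames k and k+1:
        P <- F_d P F_d^T + G_d Q G_d^T   (per sample)
    while accumulating the total state transition matrix, i.e. the
    product of the per-sample F_d's."""
    P = P0
    F_total = np.eye(P0.shape[0])
    for F_d, G_d in zip(F_blocks, G_blocks):
        P = F_d @ P @ F_d.T + G_d @ Q @ G_d.T
        F_total = F_d @ F_total
    return P, F_total
```

With identity transitions and no noise coupling, the covariance is unchanged and the accumulated transition is the identity, as a sanity check.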
The latter is found by integrating δẋ_R = F_c(x̄_R) δx_R over Δt, keeping δx_R symbolic. Using the prediction, we can now formulate the IMU error term:

e_s^k(x_R^k, x_R^{k+1}, z_s^k) := x̂_R^{k+1} ⊟ x_R^{k+1} ∈ R^15.  (17)

This is simply the difference between the prediction based on the previous state and the actual state, except for orientation, where we use a simple multiplicative minimal error. Next, upon application of the error propagation law, the associated information matrix W_s^k is found as:

W_s^k = R_s^{k,-1} = ( ∂e_s^k/∂δx̂_R^{k+1}  P(δx̂_R^{k+1} | x_R^k, z_s^k)  ∂e_s^k/∂δx̂_R^{k+1,T} )^{-1}.  (18)

The Jacobian ∂e_s^k/∂δx̂_R^{k+1} is straightforward to obtain but non-trivial, since the orientation error will be non-zero in general (19). Finally, the Jacobians with respect to δx_R^k and δx_R^{k+1} will be needed for an efficient solution of the optimization problem. While differentiating with respect to δx_R^{k+1} is straightforward (but non-trivial), some attention is given to the other Jacobian. Recall that the IMU error term (17) is calculated by iteratively applying the prediction. Differentiation with respect to the state δx_R^k thus leads to application of the chain rule, yielding

∂e_s^k/∂δx_R^k = ∂e_s^k/∂δx̂_R^{k+1} F_d(x̄_R^p, Δt^p) F_d(x̄_R^{p-1}, Δt^{p-1}) ··· F_d(x̄_R^1, Δt^1) F_d(x̄_R^k, Δt^0),  (20)

i.e. the product of the discrete error state transition matrices over all IMU samples between the two camera measurements.

V. FRONT-END OVERVIEW

This section overviews the image processing steps and data association, along with outlier detection and initialization of landmarks and states.

A. Keypoint Detection, Matching, and Variable Initialization

Our processing pipeline employs a customized multi-scale SSE-optimized Harris corner detector (Harris and Stephens, 1988) followed by BRISK descriptor extraction (Leutenegger et al., 2011).
The detection scheme favors a uniform keypoint distribution in the image by gradually suppressing corners with weaker corner response close to a stronger corner. BRISK would allow automatic orientation detection; however, better matching results are obtained by extracting descriptors oriented along the gravity direction projected into the image. This direction is globally observable thanks to IMU fusion.

As a first step to initialization and matching, we propagate the last pose using the acquired IMU measurements in order to obtain a preliminary uncertain estimate of the states. Assume a set of past frames (including keyframes) as well as a local map consisting of landmarks with sufficiently well-known 3D position is available at this point (see V-B for details). As a first stage of establishing correspondences, we perform a 3D-2D matching step. Given the current pose prediction, all landmarks that should be visible are considered for brute-force descriptor matching. Outliers are only rejected afterwards. This scheme may seem illogical to the reader, who might intuitively want to apply the inverse order in the sense of a guided matching strategy; however, owing to the super-fast matching of binary descriptors, it would actually be more expensive to first look at image-space consistency. The outlier rejection consists of two steps: first, we use the uncertain pose predictions in order to perform a Mahalanobis test in image coordinates; second, an absolute pose RANSAC as provided in OpenGV (Kneip and Furgale, 2014) is applied.

Next, 2D-2D matching is performed in order to associate keypoints without 3D landmark correspondences. Again, we use brute-force matching first, followed by triangulation, in order to initialize landmark positions and as a first step to rejecting outlier pairings. Triangulation is performed both across stereo image pairs (in the non-mono case) and between the current frame and any previous frame available.
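The image-coordinate Mahalanobis test used in the first outlier-rejection step can be sketched as follows (a minimal sketch; the 2-DoF chi-square threshold and the function name are our choices, not stated in the paper):

```python
import numpy as np

def mahalanobis_gate(z_measured, z_predicted, S, threshold=5.991):
    """Accept a 3D-2D match if the squared Mahalanobis distance of
    the image-space residual, under the 2x2 innovation covariance S
    derived from the uncertain pose prediction, falls below a
    chi-square threshold (5.991 is the 95% quantile for 2 DoF)."""
    r = z_measured - z_predicted
    d2 = r @ np.linalg.solve(S, r)
    return d2 < threshold
```

A residual of half a pixel under unit covariance passes the gate, while a four-pixel residual in each coordinate is rejected.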
Only triangulations with sufficiently low depth uncertainty are labeled to be initialized; the rest will be treated as 2D measurements in subsequent matching. Finally, a relative RANSAC step (Kneip and Furgale, 2014) is performed between the current frame and the newest keyframe. The respective pose guess is furthermore used for bootstrapping in the very beginning. Figure 5 illustrates a typical detection and matching result in the stereo case. Note the challenging illumination, with the sky overexposed due to facing towards the sun.

Fig. 5. Visualization of typical data association on a bicycle dataset: current stereo image pair (bottom) with match lines to the newest keyframe (top). Green stands for a 3D-2D match, yellow for a 2D-2D match, blue for keypoints with left-right stereo match only, and red keypoints are left unmatched.

Fig. 6. Frames kept for matching and subsequent optimization in the stereo case: in this example, M = 3 keyframes and S = 4 most current frames (the temporal/IMU window) are used.

B. Keyframe Selection

For the subsequent optimization, a bounded set of camera frames is maintained, i.e. poses with associated image(s) taken at that time instant; all landmarks co-visible in these images are kept in the local map. As illustrated in Figure 6, we distinguish two kinds of frames: we introduce a temporal window of the S most recent frames, including the current frame; and we use a number of M keyframes that may have been taken far in the past. For keyframe selection, we use a