Monocular Real-time Hand Shape and Motion Capture using Multi-modal Data.pdf


- Monocular Real-time Hand Shape and Motion Capture using Multi-modal Data (PDF version of the paper)

- Monocular Real-time Hand Shape and Motion Capture using Multi-modal Data (2020-10-28). Contents: 1. Paper overview; 2. Module details (2.1 DetNet, 2.2 IKNet, 2.3 Dataset); 3. Drawbacks. Paper overview: the method recovers 2D keypoints, 3D keypoints, and a hand mesh from a single image; in terms of the amount of data... (a minimal pipeline sketch follows this entry)
- Monocular Real-time Hand Shape and Motion Capture using Multi-modal Data (2021-01-04). I have been reading this paper on and off for a few months and have long wanted to write up my understanding of it; today I finally started. A quick note up front: what follows is split into two parts, an explanation of the paper's method and the problems I ran into while running the code. Few people research hand pose estimation and there are very few Chinese blog posts about this paper, so I hope this write-up helps; corrections to any errors or omissions are very welcome. Friendly reminder: read it alongside the original paper. 1. Method: the paper claims accuracy reaching state-of-the-
- Human Pose 2020 (1): 4D Association Graph for Realtime Multi-person Motion Capture Using Multiple Video Cameras (2020-07-09). Paper: 4D Association Graph for Realtime Multi-person Motion Capture Using Multiple Video Cameras; published at CVPR 2020; code: will be available soon. Abstract: this paper proposes a real-time multi-person motion capture algorithm based on multi-view video input; because of severe self-occlusion and close interaction between subjects, joints must be associated across multiple views and multiple time frames...
3.49MB
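The headline paper and the related posts above describe a two-stage architecture: DetNet regresses 2D/3D keypoints from a single RGB image, and IKNet converts those keypoints into joint rotations that drive a MANO-style hand mesh. The following is a minimal sketch of that data flow only; the layer choices, tensor shapes, and use of PyTorch are my own illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch of the DetNet -> IKNet hand-capture pipeline.
# Shapes and layer choices are illustrative assumptions, not the paper's exact design.
import torch
import torch.nn as nn

class DetNet(nn.Module):
    """Predicts 21 hand keypoints (3D positions) from a single RGB crop."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(          # stand-in for the paper's CNN backbone
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, 21 * 3)        # 21 joints x (x, y, z)

    def forward(self, image):                    # image: (B, 3, H, W)
        return self.head(self.backbone(image)).view(-1, 21, 3)

class IKNet(nn.Module):
    """Maps 3D keypoints to joint rotations (quaternions) usable by a MANO-like hand model."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(21 * 3, 256), nn.ReLU(),
            nn.Linear(256, 16 * 4),              # 16 joints x quaternion
        )

    def forward(self, joints_3d):                # joints_3d: (B, 21, 3)
        quats = self.mlp(joints_3d.flatten(1)).view(-1, 16, 4)
        return quats / quats.norm(dim=-1, keepdim=True)  # normalize to unit quaternions

# usage sketch
image = torch.randn(1, 3, 128, 128)
joints = DetNet()(image)        # 3D keypoints from a single image
rotations = IKNet()(joints)     # joint rotations that drive the hand mesh
```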
On-Manifold Preintegration for Real-Time Visual-Inertial Odometry.pdf
2020-04-06 Abstract: Current approaches for visual-inertial odometry (VIO) are able to attain highly accurate state estimation via nonlinear optimization. However, real-time optimization quickly becomes infeasible as the trajectory grows over time; this problem is further emphasized by the fact that inerti
2.7MB
CNN-SLAM_ Real-Time Dense Monocular SLAM With Learned Depth Prediction.pdf
2020-04-28 A paper that combines a CNN with SLAM: several SLAM modules are replaced by CNN components, and the resulting numbers are better than purely geometric SLAM. Worth a read; I have posted my translation on my blog, for reference only.
5.38MB
Monocular Visual-Inertial State Estimation With Online Initialization and Camera-IMU Extrinsic Calibration
2019-11-25 Monocular Visual-Inertial State Estimation With Online Initialization and Camera-IMU Extrinsic Calibration
8.17MB
CNN-SLAM Real-time dense monocular SLAM with learned depth prediction
2019-05-04 A SLAM-related paper that incorporates deep learning: a CNN predicts depth from a single frame, which helps resolve scale ambiguity, pure rotation, and low-texture regions in monocular SLAM (a generic scale-alignment sketch follows this entry).
2.23MB
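Learned single-frame depth gives monocular SLAM access to absolute scale, as the note above mentions. The snippet below is a minimal, generic illustration of one way to use that: aligning up-to-scale SLAM depths to CNN-predicted depths with a robust median ratio. This is an assumption for illustration only, not CNN-SLAM's actual mechanism (the paper fuses learned depth into keyframe depth maps and refines them over time).

```python
# Hedged illustration: recover a global scale for up-to-scale monocular SLAM depths
# by aligning them to CNN-predicted metric depths (not CNN-SLAM's exact pipeline).
import numpy as np

def align_scale(slam_depth: np.ndarray, cnn_depth: np.ndarray) -> float:
    """Return a scale factor s such that s * slam_depth best matches cnn_depth."""
    valid = (slam_depth > 0) & (cnn_depth > 0)          # ignore invalid pixels
    ratios = cnn_depth[valid] / slam_depth[valid]
    return float(np.median(ratios))                     # median is robust to outliers

# usage sketch with random stand-in depth maps
slam_depth = np.random.uniform(0.1, 1.0, (480, 640))    # arbitrary-scale SLAM depth
cnn_depth = 3.2 * slam_depth + np.random.normal(0, 0.05, slam_depth.shape)
print(align_scale(slam_depth, cnn_depth))               # ~3.2
```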
Tightly-Coupled Monocular Visual-Inertial Fusion for Autonomous Flight
2019-03-05 Tightly-Coupled Monocular Visual-Inertial Fusion for Autonomous Flight of Rotorcraft MAVs
5.29MB
Keyframe-Based Visual-Inertial Odometry Using Nonlinear Optimization.pdf
2017-11-06 Combining visual and inertial measurements has become popular in mobile robotics, since the two sensing modalities offer complementary characteristics that make them the ideal choice for accurate Visual-Inertial Odometry or Simultaneous Localization and Mapping (SLAM). While historically the problem has been addressed with filtering, advancements in visual estimation suggest that non-linear optimization offers superior accuracy, while still tractable in complexity thanks to the sparsity of the underlying problem. Taking inspiration from these findings, we formulate a rigorously probabilistic cost function that combines reprojection errors of landmarks and inertial terms. The problem is kept tractable, thus ensuring real-time operation, by limiting the optimization to a bounded window of keyframes through marginalization. Keyframes may be spaced in time by arbitrary intervals, while still related by linearized inertial terms. We present evaluation results on complementary datasets recorded with our custom-built stereo visual-inertial hardware that accurately synchronizes accelerometer and gyroscope measurements with imagery. A comparison of both a stereo and monocular version of our algorithm with and without online extrinsics estimation is shown with respect to ground truth. Furthermore, we compare the performance to an implementation of a state-of-the-art stochastic cloning sliding-window filter. This competitive reference implementation performs tightly-coupled filtering-based visual-inertial odometry. While our approach declaredly demands more computation, we show its superior performance in terms of accuracy.
583KB
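The keyframe-based optimization described in the abstract above minimizes a single cost that mixes visual and inertial residuals over a bounded window. A schematic form of such a cost (notation assumed here for illustration, not copied from the paper) is:

```latex
J(\mathbf{x}) =
\underbrace{\sum_{k}\sum_{j \in \mathcal{J}(k)}
\mathbf{e}_{r}^{k,j\,\top}\,\mathbf{W}_{r}^{k,j}\,\mathbf{e}_{r}^{k,j}}_{\text{landmark reprojection errors}}
\;+\;
\underbrace{\sum_{k}
\mathbf{e}_{s}^{k\,\top}\,\mathbf{W}_{s}^{k}\,\mathbf{e}_{s}^{k}}_{\text{inertial error terms}}
```

Here $\mathbf{e}_{r}^{k,j}$ is the reprojection error of landmark $j$ in keyframe $k$, $\mathbf{e}_{s}^{k}$ is the IMU error term linking consecutive keyframes, and the $\mathbf{W}$ matrices are the corresponding information (inverse covariance) weights; marginalizing old keyframes keeps the window, and hence the cost, bounded.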
Visual-Inertial Monocular SLAM with Map Reuse.pdf
2017-11-06 Abstract— In recent years there have been excellent results in Visual-Inertial Odometry techniques, which aim to compute the incremental motion of the sensor with high accuracy and robustness. However, these approaches lack the capability to close loops, and trajectory estimation accumulates drift even if the sensor is continually revisiting the same place. In this work we present a novel tightly-coupled Visual-Inertial Simultaneous Localization and Mapping system that is able to close loops and reuse its map to achieve zero-drift localization in already mapped areas. While our approach can be applied to any camera configuration, we address here the most general problem of a monocular camera, with its well-known scale ambiguity. We also propose a novel IMU initialization method, which computes the scale, the gravity direction, the velocity, and gyroscope and accelerometer biases, in a few seconds with high accuracy. We test our system in the 11 sequences of a recent micro-aerial vehicle public dataset achieving a typical scale factor error of 1% and centimeter precision. We compare to the state-of-the-art in visual-inertial odometry in sequences with revisiting, proving the better accuracy of our method due to map reuse and no drift accumulation.
1.92MB
Fast_and_Robust_Monocular_Visua-Inertial_Odometry_.pdf
2020-08-25 Fast_and_Robust_Monocular_Visua-Inertial_Odometry_.pdf
3.97MB
ORB-SLAM_ a Versatile and Accurate Monocular SLAM System.pdf
2019-05-14 ORB-SLAM: a Versatile and Accurate Monocular SLAM System, original PDF
1.37MB
ICRA 2017 PL-SLAM paper
2018-09-05 PL-SLAM: Real-Time Monocular Visual SLAM with Points and Lines (ICRA 2017)
1.43MB
Robust Visual Inertial Odometry Using a Direct EKF-Based Approach.pdf
2017-11-06 Abstract— In this paper, we present a monocular visual-inertial odometry algorithm which, by directly using pixel intensity errors of image patches, achieves accurate tracking performance while exhibiting a very high level of robustness. After detection, the tracking of the multilevel patch features is closely coupled to the underlying extended Kalman filter (EKF) by directly using the intensity errors as the innovation term during the update step. We follow a purely robocentric approach where the locations of 3D landmarks are always estimated with respect to the current camera pose. Furthermore, we decompose landmark positions into a bearing vector and a distance parametrization whereby we employ a minimal representation of differences on a corresponding σ-Algebra in order to achieve better consistency and to improve the computational performance. Due to the robocentric, inverse-distance landmark parametrization, the framework does not require any initialization procedure, leading to a truly power-up-and-go state estimation system. The presented approach is successfully evaluated in a set of highly dynamic hand-held experiments as well as directly employed in the control loop of a multirotor unmanned aerial vehicle (UAV).
1.17MB
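The abstract above couples multilevel patch features to the EKF by using raw intensity errors as the innovation during the update. Below is a minimal sketch of that idea in its most generic form, with a numerically approximated measurement Jacobian; the function names, state layout, and noise value are assumptions for illustration and do not reproduce ROVIO's actual robocentric filter.

```python
# Hedged sketch: EKF update driven by photometric (intensity) innovation.
# State x, its covariance P, a patch rendered from the state, and the
# measured patch from the current image. All names/sizes are illustrative.
import numpy as np

def photometric_ekf_update(x, P, render_patch, measured_patch, pixel_noise=4.0):
    """One EKF update where the innovation is the stacked intensity error."""
    predicted = render_patch(x)                       # (N,) predicted intensities
    y = measured_patch - predicted                    # innovation: raw intensity error
    # Numerical Jacobian of the predicted intensities w.r.t. the state.
    eps = 1e-5
    H = np.zeros((predicted.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x); dx[i] = eps
        H[:, i] = (render_patch(x + dx) - predicted) / eps
    R = pixel_noise * np.eye(predicted.size)          # measurement noise
    S = H @ P @ H.T + R                               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                    # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(x.size) - K @ H) @ P
    return x_new, P_new

# usage sketch with a toy 1-pixel "patch" whose brightness depends linearly on the state
x = np.array([0.5]); P = np.eye(1)
render = lambda s: np.array([10.0 * s[0]])            # toy rendering function
x, P = photometric_ekf_update(x, P, render, measured_patch=np.array([7.0]))
print(x)  # moves toward 0.7
```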
Visual-Inertial-Aided Navigation for High-Dynamic Motion in Built Environments
2020-04-06 In this paper, we present a novel method to fuse observations from an inertial measurement unit (IMU) and visual sensors, such that initial conditions of the inertial integration, including gravity estimation, can be recovered quickly and in a linear manner, thus removing any need for special initialization procedures. The algorithm is implemented using a graphical simultaneous localization and mapping (SLAM)-like approach that guarantees constant-time output. This paper discusses the technical aspects of the work, including observability and the ability of the system to estimate scale in real time. Results are presented of the system estimating the platform's position, velocity, and attitude, as well as the gravity vector and sensor alignment and calibration online in a built environment. This paper discusses the system setup, describing the real-time integration of the IMU data with either stereo or monocular vision data. We focus on human motion for the purposes of emulating high-dynamic motion, as well as to provide a localization system for future human–robot interaction.
688KB
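The paper above stresses that the initial conditions of inertial integration, including gravity, can be recovered in a linear manner. One generic way to see this (my assumption, not necessarily the paper's exact formulation): with camera-derived positions and IMU double-integrated specific force, the unknown initial velocity and gravity enter the position equation linearly, so stacking constraints gives an ordinary least-squares problem.

```python
# Hedged sketch: recover initial velocity v0 and gravity g linearly from
# camera-derived positions and IMU double-integrated accelerations.
# (A generic formulation, not necessarily the paper's exact linear system.)
import numpy as np

def init_velocity_gravity(times, cam_positions, imu_double_integrals):
    """Solve p_k - p_0 - alpha_k = v0*t_k + 0.5*g*t_k^2 for x = [v0; g] (6 unknowns)."""
    A, b = [], []
    for t, p, alpha in zip(times[1:], cam_positions[1:], imu_double_integrals[1:]):
        A.append(np.hstack([t * np.eye(3), 0.5 * t**2 * np.eye(3)]))   # (3, 6) block
        b.append(p - cam_positions[0] - alpha)
    x, *_ = np.linalg.lstsq(np.vstack(A), np.hstack(b), rcond=None)
    return x[:3], x[3:]            # v0, g

# usage sketch with synthetic data: known v0 and g, zero measured specific force
v0_true, g_true = np.array([1.0, 0.0, 0.2]), np.array([0.0, 0.0, -9.81])
times = np.linspace(0.0, 1.0, 6)
positions = [v0_true * t + 0.5 * g_true * t**2 for t in times]
alphas = [np.zeros(3) for _ in times]          # accelerometer contribution set to zero
v0, g = init_velocity_gravity(times, positions, alphas)
print(v0, g)   # ~[1, 0, 0.2], ~[0, 0, -9.81]
```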
Visual Perception Using a Monocular Camera (Visual-Perception-Using-Monocular-Camera)
2018-11-06 Chinese documentation for the camera/ADAS module of the MATLAB Automated Driving Toolbox: Visual Perception Using a Monocular Camera (Visual-Perception-Using-Monocular-Camera)
337KB
Virtual reality data glove
2013-06-14 The data glove is a widely used interface device in virtual reality that measures hand posture (finger joint angles) and inputs it into a computer. However, it has not spread into home use because the interface is expensive. On the other hand, research on recognizing hand posture from photo/video images has been carried out in recent years; such approaches are a kind of hand posture measurement system called the Vision Based Data Glove (VBDG). In this paper, we propose a new VBDG system. It estimates hand motion using fingertip positions detected with a monocular camera and inverse kinematics. However, it cannot estimate hand motion when a fingertip is undetectable due to self-occlusion, so our system first estimates the hidden finger motion and then estimates the hand motion. Our experimental results show that the proposed method can estimate it with sufficient accuracy in real time. Since a camera-based system is inexpensive, it is suitable for personal use.
2.24MB
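The data-glove entry above estimates hand motion from fingertip positions detected by a monocular camera via inverse kinematics. Below is a minimal sketch of that idea for a toy planar two-segment finger, solved with SciPy's least-squares optimizer; the segment lengths, planar simplification, and initial guess are illustrative assumptions, not the paper's hand model.

```python
# Hedged sketch: recover finger joint angles from a detected fingertip position
# via inverse kinematics on a toy planar two-segment finger.
import numpy as np
from scipy.optimize import least_squares

L1, L2 = 1.0, 0.8           # assumed segment lengths (arbitrary units)

def fingertip(theta):
    """Forward kinematics: fingertip (x, y) for joint angles theta = [q1, q2]."""
    q1, q2 = theta
    return np.array([L1 * np.cos(q1) + L2 * np.cos(q1 + q2),
                     L1 * np.sin(q1) + L2 * np.sin(q1 + q2)])

def solve_ik(target, theta0=(0.3, 0.5)):
    """Find joint angles whose fingertip matches the detected target position."""
    result = least_squares(lambda th: fingertip(th) - target, x0=theta0)
    return result.x

target = np.array([1.2, 0.5])        # fingertip position detected in the image (toy value)
theta = solve_ik(target)
print(theta, fingertip(theta))       # fingertip ends up at the target
```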
CubemapSLAM: A Piecewise-Pinhole Monocular Fisheye SLAM System
2020-07-21 We present a real-time feature-based SLAM (Simultaneous Localization and Mapping) system for fisheye cameras featured by a large field-of-view (FoV). Large FoV cameras are beneficial for large-scale outdoor SLAM applications, because they increase visual overlap between consecutive frames and capture more pixels belonging to the static parts of the environment. However, current feature-based SLAM systems such as PTAM and ORB-SLAM limit their camera model to pinhole only. To fill this gap, we propose a novel SLAM system with the cubemap model that utilizes the full FoV without introducing distortion from the fisheye lens, which greatly benefits the feature matching pipeline. In the initialization and point triangulation stages, we adopt a unified vector-based representation to efficiently handle matches across multiple faces, and based on this representation we propose and analyze a novel inlier checking metric. In the optimization stage, we design and test a novel multi-pinhole reprojection error metric that outperforms other metrics by a large margin. We evaluate our system comprehensively on a public dataset as well as a self-collected dataset that contains real-world challenging sequences. The results suggest that our system is more robust and accurate than other feature-based fisheye SLAM approaches. The CubemapSLAM system has been released into the public domain.
3.95MB
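The CubemapSLAM entry above maps the fisheye field of view onto cube faces so that each face behaves like an ordinary pinhole camera. Below is a minimal sketch of the core geometric step, projecting a 3D ray onto one cube face; the face ordering, sign conventions, and face resolution are my own assumptions, not the paper's.

```python
# Hedged sketch: project a 3D ray onto one face of a cubemap, treating each
# face as a pinhole camera with a 90-degree field of view.
import numpy as np

def cubemap_project(ray, face_size=512):
    """Return (face_index, u, v) pixel coordinates for a 3D ray (any norm)."""
    x, y, z = ray
    ax, ay, az = abs(x), abs(y), abs(z)
    # Pick the dominant axis: that decides which cube face the ray hits.
    if az >= ax and az >= ay:            # front (+z) or back (-z) face
        face, a, b, major = (0 if z > 0 else 1), x, y, az
    elif ax >= ay:                       # right (+x) or left (-x) face
        face, a, b, major = (2 if x > 0 else 3), -z if x > 0 else z, y, ax
    else:                                # top (+y) or bottom (-y) face
        face, a, b, major = (4 if y > 0 else 5), x, -z if y > 0 else z, ay
    # Perspective divide within the face, then map [-1, 1] to pixel coordinates.
    u = (a / major * 0.5 + 0.5) * (face_size - 1)
    v = (b / major * 0.5 + 0.5) * (face_size - 1)
    return face, u, v

# usage sketch: a ray 60 degrees off the optical axis still lands on a valid face
ray = np.array([np.sin(np.radians(60.0)), 0.0, np.cos(np.radians(60.0))])
print(cubemap_project(ray))   # falls on the +x (right) face under this convention
```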
ORB-SLAM a Versatile and Accurate Monocular SLAM System
2032-01-02 ORB-SLAM is among the better-performing approaches in current monocular SLAM research. This article introduces the framework of the ORB-SLAM system and compares its performance experimentally with PTAM, LSD-SLAM, and others.
53.57MB
VINS-Mono-master.zip
2018-12-20 VINS-Mono is a real-time SLAM framework for Monocular Visual-Inertial Systems. It uses an optimization-based sliding window formulation for providing high-accuracy visual-inertial odometry. It features efficient IMU pre-integration with bias correction, automatic estimator initialization, online extrinsic calibration...
10.53MB
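The VINS-Mono entry above (and the on-manifold preintegration paper earlier in this list) summarizes many high-rate IMU samples between two keyframes into a single relative-motion factor. The sketch below shows a simplified Euler-integration version of that preintegration with constant biases subtracted; it omits covariance propagation and bias Jacobians, so it illustrates the idea rather than either paper's full formulation.

```python
# Hedged sketch: Euler-integration IMU preintegration between two keyframes.
# Accumulates relative rotation, velocity, and position deltas in the frame of
# the first keyframe, with constant gyro/accel biases subtracted.
import numpy as np

def skew(w):
    return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])

def so3_exp(phi):
    """Rotation matrix for a rotation vector phi (Rodrigues formula)."""
    angle = np.linalg.norm(phi)
    if angle < 1e-9:
        return np.eye(3) + skew(phi)
    axis = phi / angle
    K = skew(axis)
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def preintegrate(gyro, accel, dt, bg=np.zeros(3), ba=np.zeros(3)):
    """Return (dR, dv, dp): preintegrated deltas over the IMU samples."""
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for w, a in zip(gyro, accel):
        a_corr = a - ba                      # bias-corrected specific force
        dp = dp + dv * dt + 0.5 * (dR @ a_corr) * dt**2
        dv = dv + (dR @ a_corr) * dt
        dR = dR @ so3_exp((w - bg) * dt)     # bias-corrected angular increment
    return dR, dv, dp

# usage sketch: 200 samples at 200 Hz of slow yaw rotation with 1 m/s^2 forward accel
gyro = [np.array([0.0, 0.0, 0.1])] * 200
accel = [np.array([1.0, 0.0, 0.0])] * 200
dR, dv, dp = preintegrate(gyro, accel, dt=1.0 / 200.0)
print(np.degrees(np.arccos((np.trace(dR) - 1) / 2)), dv)  # ~5.7 deg, ~[1, 0.05, 0] m/s
```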
LSD-SLAM Large-Scale Direct Monocular SLAM.pdf
2019-09-11 The original English paper of LSD-SLAM; worth reading if you are comfortable with English. This paper is a representative work of the direct method.
3.99MB
ORB-SLAM2: an Open-Source SLAM System for Monocular, Stereo and RGB-D Cameras
2018-02-02 A classic paper in visual SLAM. Master SLAM and your prospects are boundless, so study it well.