<img src="assets/magicleap.png" width="240">
### Research @ Magic Leap
# SuperPoint Weights File and Demo Script
## Introduction
This repo contains the pretrained SuperPoint network, as implemented by the originating authors. SuperPoint is a research project at Magic Leap. The SuperPoint network is a fully convolutional deep neural network trained to detect interest points and compute their accompanying descriptors. The detected points and descriptors can thus be used for various image-to-image matching tasks. For more details please see:
* Full paper PDF: [SuperPoint: Self-Supervised Interest Point Detection and Description](https://arxiv.org/abs/1712.07629)
* Presentation PDF: [Talk at CVPR Deep Learning for Visual SLAM Workshop 2018](assets/DL4VSLAM_talk.pdf)
* Authors: *Daniel DeTone, Tomasz Malisiewicz, Andrew Rabinovich*
This demo showcases a simple sparse optical flow point tracker that uses SuperPoint to detect points and match them across video sequences. The repo contains two core files: (1) a PyTorch weights file and (2) a Python deployment script that defines the network, loads images, and runs the weights file on them, creating a sparse optical flow visualization. Here are videos of the demo running on various publicly available datasets:
Freiburg RGBD:
<img src="assets/processed_freiburg.gif" width="240">
KITTI:
<img src="assets/processed_kitti.gif" width="480">
Microsoft 7 Scenes:
<img src="assets/processed_ms7.gif" width="240">
MonoVO:
<img src="assets/processed_monovo.gif" width="240">
## Dependencies
* [OpenCV](https://opencv.org/) python >= 3.4
* [PyTorch](https://pytorch.org/) >= 0.4
This repo depends on a few standard Python modules, plus OpenCV and PyTorch. These commands (tested on Mac and Ubuntu) usually work for installing the two libraries:
```sh
pip install opencv-python
pip install torch
```
## Running the Demo
This demo will run the SuperPoint network on an image sequence and compute points and descriptors from the images, using a helper class called `SuperPointFrontend`. The tracks are formed by the `PointTracker` class which finds sequential pair-wise nearest neighbors using two-way matching of the points' descriptors. The demo script uses a helper class called `VideoStreamer` which can process inputs from three different input streams:
1. A directory of images, such as .png or .jpg
2. A video file, such as .mp4 or .avi
3. A USB Webcam
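The two-way matching performed by `PointTracker` can be sketched as a mutual nearest-neighbor search over L2-normalized descriptors. The function below is an illustrative standalone version of that idea, not the repo's actual API; the function name and exact conventions are assumptions:

```python
import numpy as np

def two_way_match(desc_a, desc_b, nn_thresh=0.7):
    """Mutual nearest-neighbor matching of unit-norm descriptors.

    desc_a: (D, N) array, desc_b: (D, M) array, columns are L2-normalized
    descriptors. Returns a (3, K) array of rows [idx_a, idx_b, distance],
    keeping only matches that agree in both directions and fall below
    the distance threshold.
    """
    if desc_a.shape[1] == 0 or desc_b.shape[1] == 0:
        return np.zeros((3, 0))
    # For unit vectors, the L2 distance is sqrt(2 - 2 * dot_product).
    dmat = np.sqrt(np.clip(2 - 2 * (desc_a.T @ desc_b), 0, 4))
    idx_b = np.argmin(dmat, axis=1)   # nearest neighbor in B of each point in A
    idx_a = np.argmin(dmat, axis=0)   # nearest neighbor in A of each point in B
    scores = dmat[np.arange(dmat.shape[0]), idx_b]
    # Keep matches where A -> B and B -> A agree, and which pass the threshold.
    keep = (idx_a[idx_b] == np.arange(len(idx_b))) & (scores < nn_thresh)
    m_a = np.arange(len(idx_b))[keep]
    return np.stack([m_a, idx_b[keep], scores[keep]])
```

The two-way (mutual) check discards one-sided matches, which is a cheap way to suppress ambiguous correspondences before forming tracks.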
### Run the demo on provided directory of images in CPU-mode:
```sh
./demo_superpoint.py assets/icl_snippet/
```
You should see the following output from the ICL-NUIM sequence snippet:
<img src="assets/processed_icl.gif" width="160">
### Run the demo on provided .mp4 file in GPU-mode:
```sh
./demo_superpoint.py assets/nyu_snippet.mp4 --cuda
```
You should see the following output from the NYU sequence snippet:
<img src="assets/processed_nyu.gif" width="160">
### Run a live demo via webcam (id #1) in CPU-mode:
```sh
./demo_superpoint.py camera --camid=1
```
### Run the demo on a remote GPU (no display) on 640x480 images and write the output to `myoutput/`:
```sh
./demo_superpoint.py assets/icl_snippet/ --W=640 --H=480 --no_display --write --write_dir=myoutput/
```
### Additional useful command line parameters
* Use `--H` to change the input image height (default: 120).
* Use `--W` to change the input image width (default: 160).
* Use `--display_scale` to scale the output visualization image height and width (default: 2).
* Use `--cuda` flag to enable the GPU.
* Use `--img_glob` to change the image file extension (default: *.png).
* Use `--min_length` to change the minimum track length (default: 2).
* Use `--max_length` to change the maximum track length (default: 5).
* Use `--conf_thresh` to change the point confidence threshold (default: 0.015).
* Use `--nn_thresh` to change the descriptor matching distance threshold (default: 0.7).
* Use `--show_extra` to show more computer vision outputs.
* Press the `q` key to quit.
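To illustrate what `--conf_thresh` controls, here is a minimal standalone sketch of thresholding a detector confidence heatmap into a 3xN point array. This is a simplified approximation for illustration only; the repo's `SuperPointFrontend` does more than plain thresholding, and the function name and exact output format here are assumptions:

```python
import numpy as np

def select_points(heatmap, conf_thresh=0.015):
    """Keep pixels whose detector confidence meets conf_thresh.

    heatmap: (H, W) array of per-pixel interest-point confidences.
    Returns a (3, N) array with rows [x, y, confidence].
    """
    ys, xs = np.where(heatmap >= conf_thresh)
    return np.stack([xs, ys, heatmap[ys, xs]])
```

Raising `--conf_thresh` keeps only the strongest detections (fewer, more reliable points); lowering it yields denser but noisier output.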
## BibTeX Citation
```txt
@inproceedings{detone18superpoint,
author = {Daniel DeTone and
Tomasz Malisiewicz and
Andrew Rabinovich},
title = {SuperPoint: Self-Supervised Interest Point Detection and Description},
booktitle = {CVPR Deep Learning for Visual SLAM Workshop},
year = {2018},
url = {http://arxiv.org/abs/1712.07629}
}
```
## Additional Notes
* We do not intend to release the SuperPoint training or evaluation code; please do not email us to ask for it.
* We do not intend to release the Synthetic Shapes dataset used to bootstrap the SuperPoint training; please do not email us to ask for it.
* We use bi-linear interpolation rather than the bi-cubic interpolation described in the paper to sample the descriptor, as it is faster and gave us similar results.
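The bi-linear descriptor sampling mentioned above can be sketched in NumPy. This is a minimal standalone illustration assuming a dense `(D, H, W)` descriptor map; the repo's implementation differs in details:

```python
import numpy as np

def bilinear_sample(desc_map, x, y):
    """Bilinearly sample a dense descriptor map at a sub-pixel point.

    desc_map: (D, H, W) array of descriptors; (x, y) are continuous
    pixel coordinates. Returns the interpolated (D,) descriptor.
    """
    H, W = desc_map.shape[1], desc_map.shape[2]
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)  # clamp at the border
    wx, wy = x - x0, y - y0
    # Weighted sum of the four surrounding grid descriptors.
    return ((1 - wx) * (1 - wy) * desc_map[:, y0, x0]
            + wx * (1 - wy) * desc_map[:, y0, x1]
            + (1 - wx) * wy * desc_map[:, y1, x0]
            + wx * wy * desc_map[:, y1, x1])
```

Since SuperPoint descriptors are unit-norm, a real pipeline would re-normalize the interpolated vector before matching.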
## Legal Disclaimer
Magic Leap is proud to provide its latest samples, toolkits, and research projects on Github to foster development and gather feedback from the spatial computing community. Use of the resources within this repo is subject to (a) the license(s) included herein, or (b) if no license is included, Magic Leap's [Developer Agreement](https://id.magicleap.com/terms/developer), which is available on our [Developer Portal](https://developer.magicleap.com/).
If you need more, just ask on the [forums](https://forum.magicleap.com/hc/en-us/community/topics)!
We're thrilled to be part of a well-meaning, friendly and welcoming community of millions.