[![Python 3.7](https://img.shields.io/badge/python-3.7-blue.svg)](https://www.python.org/downloads/release/python-370/)
[![Python 3.8](https://img.shields.io/badge/python-3.8-blue.svg)](https://www.python.org/downloads/release/python-380/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](./LICENSE)
![Linux CI](https://github.com/argoai/argoverse-api/workflows/Python%20CI/badge.svg)
> ### [Argoverse 2 API](https://github.com/argoai/av2-api) has been released! Check out our [NeurIPS 2021 Datasets and Benchmarks publication](https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/hash/4734ba6f3de83d861c3176a6273cac6d-Abstract-round2.html) to learn more about the datasets.
---
# Argoverse API
> Official GitHub repository for [Argoverse dataset](https://www.argoverse.org)
---
## Table of Contents
> If you have any questions, feel free to open a [GitHub issue](https://github.com/argoai/argoverse-api/issues) describing the problem.
- [Installation](#installation)
- [Usage](#usage)
- [Demo](#demo)
- [Baselines](#baselines)
- [Contributing](#contributing)
- [Disclaimer](#disclaimer)
- [License](#license)
- [Open-Source Libraries Using Argoverse](#other-repos)
---
## Installation
- Supported platforms: Linux and macOS
### 1) Clone
- Clone this repo to your local machine using:
```git clone https://github.com/argoai/argoverse-api.git```
### 2) Download HD map data
- Download `hd_maps.tar.gz` from [our website](https://www.argoverse.org/av1.html#download-link) and extract into the root directory of the repo. Your directory structure should look something like this:
```
argodataset
└── argoverse
    └── data_loading
    └── evaluation
    └── map_representation
    └── utils
    └── visualization
└── map_files
└── license
...
```
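To reproduce the layout above, here is a minimal extraction sketch (the clone path is an assumption; adjust it to where you checked out the repo):
```
# assumed: hd_maps.tar.gz was downloaded into the repo root
cd /path/to/argoverse-api   # hypothetical clone location
tar -xzf hd_maps.tar.gz     # unpacks map_files/ next to the argoverse/ package
```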
### 3) Download Argoverse-Tracking and Argoverse-Forecasting
We provide both the full dataset and a sample version for testing purposes. Head to [our website](https://www.argoverse.org/data.html#download-link) to see the download options.
* **Argoverse-Tracking** provides track annotations, egovehicle poses, and *undistorted*, raw data from cameras (@30hz) and lidar sensors (@10hz), as well as two stereo cameras (@5hz). We've released a total of 113 scenes/logs, split into 65 logs for training, 24 logs for validation, and 24 logs for testing. We've separated the training data into smaller files to make it easier to download, but you should extract them all into one folder.
We also provide sample data (1 log) in `tracking_sample.tar.gz`.
* **Argoverse-Forecasting** contains 327,790 sequences of interesting scenarios. Each sequence follows the trajectory of the main agent for 5 seconds, while keeping track of all other actors (e.g., cars, pedestrians). We've split them into 208,272 training sequences, 40,127 validation sequences, and 79,391 test sequences.
We also provide sample data (5 sequences) in `forecasting_sample.tar.gz`.
Note that you need to download the HD map data (and extract it into the project root folder) for the API to function properly. You can selectively download **Argoverse-Tracking**, **Argoverse-Forecasting**, or both, depending on what type of data you need. The data can be extracted to any location on your local machine.
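For reference, a minimal sketch of unpacking the sample archives named above (the destination is your choice; the extracted folder names may differ slightly between releases):
```
# assumed: the sample archives were downloaded into the current directory
tar -xzf tracking_sample.tar.gz      # sample tracking log
tar -xzf forecasting_sample.tar.gz   # 5 sample forecasting sequences
```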
### 4) Install argoverse package
* `argoverse` can be installed as a Python package with `pip install -e /path_to_root_directory_of_the_repo/`
* Make sure that you can run `python -c "import argoverse"`, and you are good to go!
### (optional) Install mayavi
* Some visualizations may require `mayavi`. See instructions on how to install Mayavi [here](https://docs.enthought.com/mayavi/mayavi/installation.html).
### (optional) Install ffmpeg
* Some visualizations may require `ffmpeg`. See instructions on how to install ffmpeg [here](https://ffmpeg.org/download.html).
### (optional) Stereo tutorial dependencies
* You will need to install three dependencies to run the [stereo tutorial](https://github.com/argoai/argoverse-api/blob/master/demo_usage/competition_stereo_tutorial.ipynb); a combined install command is sketched after this list:
* **Open3D**: See instructions on how to install [here](https://github.com/intel-isl/Open3D).
* **OpenCV contrib**: See instructions on how to install [here](https://pypi.org/project/opencv-contrib-python).
* **Plotly**: See instructions on how to install [here](https://github.com/plotly/plotly.py).
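A possible combined install for these three dependencies (package names taken from the links above; check each project's guide for platform-specific notes, e.g. GPU or headless setups):
```
pip install open3d opencv-contrib-python plotly
```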
### (optional) Remake the object-oriented label folders
* The `track_labels_amodal` folders contain object-oriented labels (in contrast to the per-frame labels in the `per_sweep_annotations_amodal` folders). Run the following scripts to remake the `track_labels_amodal` folders and fix existing issues:
  * `python3 argoverse/utils/make_track_label_folders.py argoverse-tracking/train/`
  * `python3 argoverse/utils/make_track_label_folders.py argoverse-tracking/val/`
---
## Usage
The Argoverse API provides useful functionality to interact with the three main components of our dataset: the HD Map, the Argoverse Tracking Dataset, and the Argoverse Forecasting Dataset.
```python
from argoverse.map_representation.map_api import ArgoverseMap
from argoverse.data_loading.argoverse_tracking_loader import ArgoverseTrackingLoader
from argoverse.data_loading.argoverse_forecasting_loader import ArgoverseForecastingLoader
avm = ArgoverseMap()
argoverse_tracker_loader = ArgoverseTrackingLoader('argoverse-tracking/')    # simply change this to your local path to the data
argoverse_forecasting_loader = ArgoverseForecastingLoader('argoverse-forecasting/')    # simply change this to your local path to the data
```
API documentation is available [here](https://argoai.github.io/argoverse-api/). We recommend you get started by working through the demo tutorials below.
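As a small illustration of what these loaders expose, here is a hedged sketch (attribute names such as `log_list`, `lidar_count`, and `agent_traj` are taken from the loader implementations and may change between versions; the paths assume the sample data was extracted into the repo root):
```python
from argoverse.data_loading.argoverse_tracking_loader import ArgoverseTrackingLoader
from argoverse.data_loading.argoverse_forecasting_loader import ArgoverseForecastingLoader

# Point the loaders at wherever the (sample) data was extracted.
tracking_loader = ArgoverseTrackingLoader('argoverse-tracking/')
forecasting_loader = ArgoverseForecastingLoader('argoverse-forecasting/')

# Tracking: list the available logs and inspect the first one.
print(tracking_loader.log_list)              # log ids found under the root
log = tracking_loader.get(tracking_loader.log_list[0])
print(log.lidar_count)                       # number of lidar sweeps in this log

# Forecasting: each item is one 5-second sequence.
seq = forecasting_loader[0]
print(seq.agent_traj.shape)                  # (N, 2) xy trajectory of the tracked agent
```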
---
## Demo
To make it easier to use our API, we provide demo tutorials in the form of Jupyter Notebooks.
To run them, you'll need to first install Jupyter Notebook with `pip install jupyter`. Then navigate to the repo directory and start a server with `jupyter notebook`. Running the command should open your browser automatically; if you lose the page, you can click on the link in your terminal to re-open the Jupyter notebook.
Once it's running, just navigate to the `demo_usage` folder and open any tutorial! Note that to use the tracking and forecasting tutorials, you'll need to download the tracking and forecasting sample data from [our website](https://www.argoverse.org/data.html#download-link) and extract the folders into the root of the repo.
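Put together, the steps above look roughly like this (the clone directory name is an assumption):
```
pip install jupyter
cd argoverse-api     # repo root, with the sample data extracted into it
jupyter notebook     # then open any notebook under demo_usage/
```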
### **Argoverse Map Tutorial**
[![](images/map_tutorial.png)](./demo_usage/argoverse_map_tutorial.ipynb)
### **Argoverse-Tracking Tutorial**
[![](images/tracking_tutorial.png)](./demo_usage/argoverse_tracking_tutorial.ipynb)
### **Argoverse-Forecasting Tutorial**
[![](images/forecasting_tutorial.png)](./demo_usage/argoverse_forecasting_tutorial.ipynb)
### Rendering birds-eye-view
Run the following script to render cuboids from a birds-eye-view on the map.
```
$ python visualize_30hz_benchmark_data_on_map.py --dataset_dir <path/to/logs> --log_id <id of the specific log> --experiment_prefix <prefix of the output directory>
```
For example, the path to the logs might be `argoverse-tracking/train4` and the log id might be `2bc6a872-9979-3493-82eb-fb55407473c9`. This script will write to `<experiment prefix>_per_log_viz/<log id>` in the current working directory with images that look like the following: ![](images/MIA_cb762bb1-7ce1-3ba5-b53d-13c159b532c8_315967327020035000.png)
It will also generate a video visualization at `<experiment prefix>_per_log_viz/<log id>_lidar_roi_nonground.mp4`.
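For instance, with the example paths above (the output prefix `viz_demo` is chosen arbitrarily here):
```
$ python visualize_30hz_benchmark_data_on_map.py --dataset_dir argoverse-tracking/train4 --log_id 2bc6a872-9979-3493-82eb-fb55407473c9 --experiment_prefix viz_demo
```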
### Rendering cuboids on images
Run the following script to render cuboids on images.
```
$ python cuboids_to_bboxes.py --dataset-dir <path/to/logs> --log-ids <id of specific log> --experiment-prefix <prefix for output directory>
```
This script can process multiple logs if desired. They can be passed as a comma-separated list to `--log-ids`. Images will be written to the output directory named by `--experiment-prefix`.
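For example, reusing the log id from above alongside a second, placeholder id (the second id and the output prefix are hypothetical):
```
$ python cuboids_to_bboxes.py --dataset-dir argoverse-tracking/train4 --log-ids 2bc6a872-9979-3493-82eb-fb55407473c9,<another-log-id> --experiment-prefix bbox_demo
```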