# The Cityscapes Dataset
This repository contains scripts for the inspection, preparation, and evaluation of the Cityscapes dataset. This large-scale dataset contains a diverse set of stereo video sequences recorded in street scenes from 50 different cities, with high-quality pixel-level annotations of 5,000 frames, in addition to a larger set of 20,000 weakly annotated frames.
Details and download are available at: www.cityscapes-dataset.net
## Dataset Structure
The folder structure of the Cityscapes dataset is as follows:
```
{root}/{type}{video}/{split}/{city}/{city}_{seq:0>6}_{frame:0>6}_{type}{ext}
```
The meaning of the individual elements is:
- `root` the root folder of the Cityscapes dataset. Many of our scripts check if an environment variable `CITYSCAPES_DATASET` pointing to this folder exists and use this as the default choice.
- `type` the type/modality of data, e.g. `gtFine` for fine ground truth, or `leftImg8bit` for left 8-bit images.
- `split` the split, i.e. train/val/test/train_extra/demoVideo. Note that not all kinds of data exist for all splits. Thus, do not be surprised to occasionally find empty folders.
- `city` the city in which this part of the dataset was recorded.
- `seq` the sequence number using 6 digits.
- `frame` the frame number using 6 digits. Note that in some cities only a few, albeit very long, sequences were recorded, while in other cities many short sequences were recorded, of which only the 19th frame is annotated.
- `ext` the extension of the file and optionally a suffix, e.g. `_polygons.json` for ground truth files.
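As an illustration, the naming scheme above can be assembled with plain Python string formatting; the root path, city, sequence, and frame values below are made up for the example:

```python
import os

def cityscapes_path(root, data_type, split, city, seq, frame, ext, video=""):
    """Build a file path following the Cityscapes naming scheme.

    The `{seq:0>6}` and `{frame:0>6}` parts of the pattern denote
    zero-padded six-digit numbers, which str.format reproduces directly.
    """
    filename = "{city}_{seq:0>6}_{frame:0>6}_{type}{ext}".format(
        city=city, seq=seq, frame=frame, type=data_type, ext=ext)
    return os.path.join(root, data_type + video, split, city, filename)

# Hypothetical example values:
path = cityscapes_path("/data/cityscapes", "gtFine", "train",
                       "aachen", 0, 19, "_polygons.json")
# → /data/cityscapes/gtFine/train/aachen/aachen_000000_000019_gtFine_polygons.json
```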
Possible values of `type`:
- `gtFine` the fine annotations, 2975 training, 500 validation, and 1525 testing. This type of annotations is used for validation, testing, and optionally for training. Annotations are encoded using `json` files containing the individual polygons. Additionally, we provide `png` images, where pixel values encode labels. Please refer to `helpers/labels.py` and the scripts in `preparation` for details.
- `gtCoarse` the coarse annotations, available for all training and validation images and for another set of 19998 training images (`train_extra`). These annotations can be used for training, either together with gtFine or alone in a weakly supervised setup.
- `gtBboxCityPersons` pedestrian bounding box annotations, available for all training and validation images. Please refer to `helpers/labels_cityPersons.py` as well as the [`CityPersons` publication (Zhang et al., CVPR '17)](https://bitbucket.org/shanshanzhang/citypersons) for more details.
- `leftImg8bit` the left images in 8-bit LDR format. These are the standard annotated images.
- `leftImg16bit` the left images in 16-bit HDR format. These images offer 16 bits per pixel of color depth and contain more information, especially in very dark or bright parts of the scene. Warning: The images are stored as 16-bit pngs, which is non-standard and not supported by all libraries.
- `rightImg8bit` the right stereo views in 8-bit LDR format.
- `rightImg16bit` the right stereo views in 16-bit HDR format.
- `timestamp` the time of recording in ns. The first frame of each sequence always has a timestamp of 0.
- `disparity` precomputed disparity depth maps. To obtain the disparity values, compute for each pixel p with p > 0: d = ( float(p) - 1. ) / 256., while a value p = 0 is an invalid measurement. Warning: the images are stored as 16-bit pngs, which is non-standard and not supported by all libraries.
- `camera` internal and external camera calibration. For details, please refer to [csCalibration.pdf](docs/csCalibration.pdf)
- `vehicle` vehicle odometry, GPS coordinates, and outside temperature. For details, please refer to [csCalibration.pdf](docs/csCalibration.pdf)
More types might be added over time and also not all types are initially available. Please let us know if you need any other meta-data to run your approach.
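The disparity decoding rule above can be sketched with NumPy; reading the 16-bit png itself is omitted here, and the raw array is a stand-in for real data:

```python
import numpy as np

def decode_disparity(p):
    """Decode raw 16-bit disparity png values into float disparities.

    Valid pixels (p > 0) map to d = (p - 1) / 256; p == 0 marks an
    invalid measurement and is returned as NaN in this sketch.
    """
    p = np.asarray(p, dtype=np.float64)
    d = (p - 1.0) / 256.0
    return np.where(p > 0, d, np.nan)

raw = np.array([0, 1, 257, 513])   # made-up raw png values
decoded = decode_disparity(raw)    # disparities: NaN, 0.0, 1.0, 2.0
```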
Possible values of `split`:
- `train` usually used for training, contains 2975 images with fine and coarse annotations
- `val` should be used for validation of hyper-parameters, contains 500 images with fine and coarse annotations. Can also be used for training.
- `test` used for testing on our evaluation server. The annotations are not public, but we include annotations of ego-vehicle and rectification border for convenience.
- `train_extra` can be optionally used for training, contains 19998 images with coarse annotations
- `demoVideo` video sequences that could be used for qualitative evaluation, no annotations are available for these videos
## Scripts
There are several scripts included with the dataset in a folder named `scripts`:
- `helpers` helper files that are included by other scripts
- `viewer` view the images and the annotations
- `preparation` convert the ground truth annotations into a format suitable for your approach
- `evaluation` validate your approach
- `annotation` the annotation tool used for labeling the dataset
Note that all files contain a short documentation block at the top. The most important files are:
- `helpers/labels.py` central file defining the IDs of all semantic classes and providing mapping between various class properties.
- `helpers/labels_cityPersons.py` file defining the IDs of all CityPersons pedestrian classes and providing mapping between various class properties.
- `viewer/cityscapesViewer.py` view the images and overlay the annotations.
- `preparation/createTrainIdLabelImgs.py` convert annotations in polygonal format to png images with label IDs, where pixels encode "train IDs" that you can define in `labels.py`.
- `preparation/createTrainIdInstanceImgs.py` convert annotations in polygonal format to png images with instance IDs, where pixels encode instance IDs composed of "train IDs".
- `evaluation/evalPixelLevelSemanticLabeling.py` script to evaluate pixel-level semantic labeling results on the validation set. This script is also used to evaluate the results on the test set.
- `evaluation/evalInstanceLevelSemanticLabeling.py` script to evaluate instance-level semantic labeling results on the validation set. This script is also used to evaluate the results on the test set.
- `setup.py` run `setup.py build_ext --inplace` to enable the cython plugin for faster evaluation. Only tested on Ubuntu.
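To illustrate what `helpers/labels.py` provides, here is a minimal, self-contained sketch of such a mapping table. The real file defines many more classes and properties (category, color, ignore flags, etc.); the entries below are only a tiny subset used for illustration:

```python
from collections import namedtuple

# A reduced stand-in for the Label tuples in helpers/labels.py
# (trainId 255 conventionally means "ignore during training").
Label = namedtuple("Label", ["name", "id", "trainId"])

labels = [
    Label("unlabeled", 0, 255),
    Label("road",      7,   0),
    Label("sidewalk",  8,   1),
    Label("car",      26,  13),
]

# Derived lookup tables, analogous to name2label / id2label in labels.py.
name2label = {l.name: l for l in labels}
id2label = {l.id: l for l in labels}

car_id = name2label["car"].id        # 26
road_trainid = id2label[7].trainId   # 0
```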
The scripts can be installed via pip, i.e. run from within the `scripts` folder:
`sudo pip install .`
This installs the scripts as a python module named `cityscapesscripts` and exposes the following tools, see above for descriptions:
- `csViewer`
- `csLabelTool`
- `csEvalPixelLevelSemanticLabeling`
- `csEvalInstanceLevelSemanticLabeling`
- `csCreateTrainIdLabelImgs`
- `csCreateTrainIdInstanceImgs`
Note that for the graphical tools you additionally need to install:
`sudo apt install python-tk python-qt4`
## Evaluation
To test your method on the test set, run your approach on the provided test images and submit your results at:
www.cityscapes-dataset.net/submit/
For semantic labeling, we require the result format to match the format of our label images named `labelIDs`.
Thus, your code should produce images where each pixel's value corresponds to a class ID as defined in `labels.py`.
Note that our evaluation scripts are included in the scripts folder and can be used to test your approach on the validation set.
For further details regarding the submission process, please consult our website.
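If your network predicts "train IDs" internally, a common final step is mapping them back to the full label IDs before submission. A hedged sketch with NumPy; the trainId-to-labelId table below covers only a few classes for illustration, and real code should build it from `helpers/labels.py`:

```python
import numpy as np

# Illustrative trainId -> labelId lookup (road: 0 -> 7,
# sidewalk: 1 -> 8, car: 13 -> 26); unknown trainIds map to 0 here.
trainid_to_labelid = np.zeros(256, dtype=np.uint8)
trainid_to_labelid[0] = 7
trainid_to_labelid[1] = 8
trainid_to_labelid[13] = 26

pred_trainids = np.array([[0, 1], [13, 0]], dtype=np.uint8)  # fake prediction
pred_labelids = trainid_to_labelid[pred_trainids]
# pred_labelids is what you would save as a png and submit:
# [[ 7  8]
#  [26  7]]
```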
## Contact
Please feel free to contact us with any questions, suggestions or comments:
* Marius Cordts, Mohamed Omran
* mail@cityscapes-dataset.net
* www.cityscapes-dataset.net