![Image](./common/dannce_logo.png)
Repository Contributors: Timothy Dunn, Jesse Marshall, Diego Aldarondo, William Wang, Kyle Severson
DANNCE (3-Dimensional Aligned Neural Network for Computational Ethology) is a convolutional neural network (CNN) that calculates the 3D positions of user-defined anatomical landmarks on behaving animals from videos taken at multiple angles. The key innovation of DANNCE compared to existing approaches for 2D keypoint detection in animals (e.g. [LEAP](https://github.com/talmo/leap), [DeepLabCut](https://github.com/DeepLabCut/DeepLabCut)) is that the network is fully 3D, so that it can learn about 3D image features and how cameras and landmarks relate to one another in 3D space. We also pre-trained DANNCE using a large dataset of rat motion capture and synchronized video, so the standard network has extensive prior knowledge of rodent motions and poses. DANNCE's ability to track landmarks transfers well to mice and other mammals, and works across different camera views, camera types, and illumination conditions.
![Image](./common/Figure1.png)
## Example Results
#### Mouse
![Image](./common/fig3.gif)
#### Rat
![Image](./common/rat_JDM52.gif)
## DANNCE Installation
`DANNCE` requires a CUDA-enabled GPU and appropriate drivers. We have tested DANNCE on NVIDIA GPUs including the Titan V, Titan X Pascal, Titan RTX, V100, and Quadro P5000. On an NVIDIA Titan V, DANNCE can make predictions at ~10.5 frames per second from 6 camera views. DANNCE is also embarrassingly parallel over multiple GPUs. The following combinations of operating systems and software have been tested:
| OS | Python | TensorFlow | CUDA | cuDNN | PyTorch |
|:---------------------:|:------:|:----------:|:----:|:-----:|:-------:|
| Ubuntu 16.04 or 18.04 | 3.7.x | 2.2.0 - 2.3.0 | 10.1 | 7.6 | 1.5.0 - 1.7.0 |
| Windows 10 | 3.7.x | 2.2.0 - 2.3.0 | 10.1 | 7.6 | 1.5.0 - 1.7.0 |
We recommend installing DANNCE using the following steps:
1. Clone the github repository
```
git clone --recursive https://github.com/spoonsso/dannce
cd dannce
```
2. If you do not already have it, install [Anaconda](https://www.anaconda.com/products/individual).
3. Set up a new Anaconda environment with the following configuration: \
`conda create -n dannce python=3.7 cudatoolkit=10.1 cudnn ffmpeg`
4. Activate the new Anaconda environment: \
`conda activate dannce`
5. Install PyTorch: \
`conda install pytorch=1.7 -c pytorch`
6. Update setuptools: \
`pip install -U setuptools`
7. Install DANNCE with the included setup script from within the base repository directory: \
`pip install -e .`
Then you should be ready to try the quickstart demo! \
These installation steps were tested with Anaconda releases 4.7.12 and 2020.02, although we expect it to work for most conda installations. After installing Anaconda, and presuming there are no issues with GPU drivers, the installation should take less than 5 minutes.
A note on the PyTorch requirement.
PyTorch is not required, but 3D volume generation is significantly faster when using PyTorch than with TensorFlow or NumPy. To use TensorFlow only, without having to install the PyTorch package, simply toggle the `predict_mode` field in the DANNCE configuration files to `tf`. To use NumPy volume generation (slowest), change `predict_mode` to `None`.
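As a sketch, the relevant field in a configuration file such as `configs/dannce_mouse_config.yaml` would look like the following (all other fields in the file are unchanged; the quoting style shown here is illustrative):

```yaml
# Backend for 3D volume generation:
#   "torch" = PyTorch (fastest), "tf" = TensorFlow only, "None" = NumPy (slowest)
predict_mode: "tf"
```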
## Quickstart Demo
To test your DANNCE installation and familiarize yourself with DANNCE file and configuration formatting, run DANNCE predictions for `markerless_mouse_1`. Because the videos and network weight files are too large to host on GitHub, use the links in the following text files to download the necessary files and place each one in its associated location:
```
demo/markerless_mouse_1/DANNCE/train_results/link_to_weights.txt
demo/markerless_mouse_1/DANNCE/train_results/AVG/link_to_weights.txt
demo/markerless_mouse_1/videos/link_to_videos.txt
demo/markerless_mouse_2/videos/link_to_videos.txt
```
Alternatively, on Linux you can run the following commands from the base `dannce` repository directory.
For markerless_mouse_1:
```
wget -O vids.zip https://tinyurl.com/DANNCEmm1vids;
unzip vids.zip -d vids;
mv vids/* demo/markerless_mouse_1/videos/;
rm -r vids vids.zip;
wget -O demo/markerless_mouse_1/DANNCE/train_results/weights.12000-0.00014.hdf5 https://tinyurl.com/DANNCEmm1weightsBASE;
wget -O demo/markerless_mouse_1/DANNCE/train_results/AVG/weights.1200-12.77642.hdf5 https://tinyurl.com/DANNCEmm1weightsAVG
```
For markerless_mouse_2:
```
wget -O vids2.zip https://tinyurl.com/DANNCEmm2vids;
unzip vids2.zip -d vids2;
mv vids2/* demo/markerless_mouse_2/videos/;
rm -r vids2 vids2.zip
```
Once the files are downloaded and placed, run:
```
cd demo/markerless_mouse_1/;
dannce-predict ../../configs/dannce_mouse_config.yaml
```
This demo will run the `AVG` version of DANNCE over 1000 frames of mouse data and save the results to: \
`demo/markerless_mouse_1/DANNCE/predict_results/save_data_AVG.mat`
The demo should take less than 2 minutes to run on an NVIDIA Titan X, Titan V, Titan RTX, or V100. Run times may be slightly longer on less powerful GPUs. The demo has not been tested using a CPU only.
Please see the *Wiki* for more details on running DANNCE and customizing configuration files.
## Using DANNCE on your data
### Camera Calibration
To use DANNCE, acquisition cameras must be calibrated. Ideally, the acquired data will also be compressed. Synchronization is best done with a frametime trigger and a supplementary readout of frame times. Calibration is the process of determining the distortion introduced into an image by the camera lens (camera intrinsics) and the position and orientation of the cameras relative to one another in space (camera extrinsics). When acquiring our data, we typically calibrated cameras in a two-step process. We first used a checkerboard to find the camera intrinsics. We then used an 'L-frame' to determine the camera extrinsics. The L-frame is a calibrated grid of four or more points that are labeled in each camera. A checkerboard can also be used for both procedures. We have included two examples of calibration using MATLAB (in `Calibration/`).
Some tips:
1. Try to sample the whole volume of the arena with the checkerboard to fully map the distortion of the lenses.
2. If you are using a confined arena (e.g. a plexiglass cylinder) that is hard to wand, it often works to compute the calibration without the cylinder present.
3. More complicated L-Frames can be used, and can help, for computing the extrinsics. Sometimes using only a four point co-planar L-frame can result in a 'flipped' camera, so be sure to check camera poses after calibration.
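The quantities that calibration produces can be summarized with the pinhole camera model: the intrinsics form a matrix K (focal lengths and principal point, plus lens distortion coefficients), and the extrinsics are a rotation R and translation t per camera. A minimal NumPy sketch of projecting a 3D world point into pixel coordinates, ignoring lens distortion (the variable names and numbers here are illustrative, not part of DANNCE's API):

```python
import numpy as np

# Intrinsics: focal lengths (fx, fy) and principal point (cx, cy), in pixels.
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 512.0],
              [   0.0,    0.0,   1.0]])

# Extrinsics: rotation R and translation t map world -> camera coordinates.
R = np.eye(3)
t = np.array([0.0, 0.0, 500.0])  # camera 500 units in front of the world origin

def project(X_world, K, R, t):
    """Project a 3D world point to 2D pixel coordinates (no lens distortion)."""
    X_cam = R @ X_world + t   # world frame -> camera frame
    uvw = K @ X_cam           # camera frame -> homogeneous pixel coordinates
    return uvw[:2] / uvw[2]   # perspective divide

# A point on the optical axis lands exactly at the principal point.
print(project(np.array([0.0, 0.0, 0.0]), K, R, t))  # -> [640. 512.]
```

A 'flipped' camera from a co-planar L-frame (tip 3 above) shows up here as an R that points the optical axis away from the animal, which is why checking recovered camera poses after calibration is worthwhile.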
It is often helpful to compress videos as they are acquired to reduce the disk space needed for streaming long recordings from multiple cameras. This can be done using ffmpeg or x264, and we have included two example scripts in `Compression/`. One, `campy.py`, was written by Kyle Severson and runs ffmpeg compression on a GPU for streaming multiple Basler cameras. A second, CameraCapture, was originally written by Raj Poddar and uses x264 on the CPU to stream older Point Grey/FLIR cameras (e.g. Grasshopper, Flea3). We have included both a compiled version of the program and the original F# code, which can be edited in Visual Studio.
*Mirrors.* Mirrors are a handy way to create new views, but there are some important details when using them with DANNCE. The easiest way to get it all to work with the dannce pipeline is to create multiple videos from the video with mirrors, with all but one sub-field of view (FOV) blacked out in each video. This plays well with the center-of-mass finding network, which currently expects to find only one animal in a given frame.
When calibrating the mirror setup, we have used one intrinsic parameter calibration over the entire FOV of the camera, typically by moving the experimental setup away from the camera (moving the camera itself could cause changes in the intrinsics).
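The blacking-out step described above can be sketched in a few lines of NumPy, assuming you know the pixel bounds of each mirror sub-FOV (the rectangle coordinates and frame size below are made up for illustration):

```python
import numpy as np

def keep_subfov(frame, y0, y1, x0, x1):
    """Return a copy of the frame with everything outside one sub-field of
    view set to black, so the COM network sees only one animal per frame."""
    out = np.zeros_like(frame)
    out[y0:y1, x0:x1] = frame[y0:y1, x0:x1]
    return out

# Example: a fake 8-bit grayscale frame; keep only the top-left mirror view.
frame = np.full((1024, 1280), 128, dtype=np.uint8)
masked = keep_subfov(frame, 0, 512, 0, 640)
print(masked[:512, :640].min(), masked[512:, :].max())  # -> 128 0
```

Running this per sub-FOV over every frame (e.g. while re-encoding with ffmpeg) yields one video per virtual camera view, each compatible with the single-animal assumption of the COM-finding network.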