[![GitHub](https://img.shields.io/github/license/Marsrocky/Awesome-WiFi-CSI-Sensing?color=blue)](https://github.com/Marsrocky/Awesome-WiFi-CSI-Sensing/blob/main/LICENSE)
[![Maintenance](https://img.shields.io/badge/Maintained%3F-YES-green.svg)](https://github.com/Marsrocky/Awesome-WiFi-CSI-Sensing/graphs/commit-activity)
![Ask Me Anything !](https://img.shields.io/badge/Ask%20me-anything-1abc9c.svg)
[![DOI](https://zenodo.org/badge/511110383.svg)](https://zenodo.org/badge/latestdoi/511110383)
# SenseFi: A Benchmark for WiFi CSI Sensing
## Introduction
SenseFi is the first open-source benchmark and library for WiFi CSI human sensing, implemented in PyTorch. State-of-the-art networks, including MLP, CNN, RNN, and Transformer architectures, are evaluated on four public datasets collected on different WiFi CSI platforms. The details are presented in our paper [*Deep Learning and Its Applications to WiFi Human Sensing: A Benchmark and A Tutorial*](https://arxiv.org/abs/2207.07859).
```
@article{yang2022benchmark,
title={Deep Learning and Its Applications to WiFi Human Sensing: A Benchmark and A Tutorial},
author={Yang, Jianfei and Chen, Xinyan and Wang, Dazhuo and Zou, Han and Lu, Chris Xiaoxuan and Sun, Sumei and Xie, Lihua},
journal={arXiv preprint arXiv:2207.07859},
year={2022}
}
```
## Requirements
1. Install `pytorch` and `torchvision` (we use `pytorch==1.12.0` and `torchvision==0.13.0`).
2. `pip install -r requirements.txt`
## Run
### Download Processed Data
Please download and organize the [processed datasets](https://drive.google.com/drive/folders/1R0R8SlVbLI1iUFQCzh_mH90H_4CW2iwt?usp=sharing) in this structure:
```
Benchmark
├── Data
├── NTU-Fi_HAR
│ ├── test_amp
│ ├── train_amp
├── NTU-Fi-HumanID
│ ├── test_amp
│ ├── train_amp
├── UT_HAR
│ ├── data
│ ├── label
├── Widardata
│ ├── test
│ ├── train
```
We also offer [pre-trained weights](https://drive.google.com/drive/folders/1NBVe9za8ntFnkE9B1vhv4gD6eM88P1KI?usp=sharing) for all models.
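Before training, it can be useful to confirm the downloaded data matches the tree above. The following is a hypothetical helper (not part of the repository) that checks for the expected sub-directories:

```python
# Hypothetical helper to verify the dataset layout described above.
# Folder names follow the tree in this README; adjust the root as needed.
import os

EXPECTED = {
    "NTU-Fi_HAR": ["train_amp", "test_amp"],
    "NTU-Fi-HumanID": ["train_amp", "test_amp"],
    "UT_HAR": ["data", "label"],
    "Widardata": ["train", "test"],
}

def check_layout(root):
    """Return a list of missing sub-directories under root/Data."""
    missing = []
    for dataset, subdirs in EXPECTED.items():
        for sub in subdirs:
            path = os.path.join(root, "Data", dataset, sub)
            if not os.path.isdir(path):
                missing.append(path)
    return missing
```

Running `check_layout("Benchmark")` should return an empty list if everything is in place.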
### Supervised Learning
To run models with supervised learning (train & test):
Run: `python run.py --model [model name] --dataset [dataset name]`
You can choose [model name] from the model list below:
- MLP
- LeNet
- ResNet18
- ResNet50
- ResNet101
- RNN
- GRU
- LSTM
- BiLSTM
- CNN+GRU
- ViT
You can choose [dataset name] from the dataset list below:
- UT_HAR_data
- NTU-Fi-HumanID
- NTU-Fi_HAR
- Widar
*Example: `python run.py --model ResNet18 --dataset NTU-Fi_HAR`*
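The command-line interface above can be sketched with `argparse`; note that the real `run.py` in the repository defines its own flags and defaults, so this is only an illustration of the two required arguments:

```python
# Minimal sketch of the CLI shown above. The actual run.py in the
# repository defines its own arguments; this only mirrors the two
# required flags and their valid values from this README.
import argparse

MODELS = ["MLP", "LeNet", "ResNet18", "ResNet50", "ResNet101",
          "RNN", "GRU", "LSTM", "BiLSTM", "CNN+GRU", "ViT"]
DATASETS = ["UT_HAR_data", "NTU-Fi-HumanID", "NTU-Fi_HAR", "Widar"]

def parse_args(argv=None):
    parser = argparse.ArgumentParser(description="SenseFi supervised run")
    parser.add_argument("--model", choices=MODELS, required=True)
    parser.add_argument("--dataset", choices=DATASETS, required=True)
    return parser.parse_args(argv)
```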
### Unsupervised Learning
To run models with unsupervised (self-supervised) learning (train on **NTU-Fi HAR** & test on **NTU-Fi HumanID**):
Run: `python self_supervised.py --model [model name]`
You can choose [model name] from the model list below:
- MLP
- LeNet
- ResNet18
- ResNet50
- ResNet101
- RNN
- GRU
- LSTM
- BiLSTM
- CNN+GRU
- ViT
*Example: `python self_supervised.py --model MLP`*
Method: [*AutoFi: Towards Automatic WiFi Human Sensing via Geometric Self-Supervised Learning*](https://doi.org/10.48550/arXiv.2205.01629)
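A highly simplified sketch of the idea behind such self-supervised pretraining: two randomly perturbed views of the same unlabeled CSI batch should map to nearby embeddings. The actual AutoFi geometric objective and augmentations are more involved (see the paper above); the encoder, augmentation, and loss here are illustrative assumptions, with shapes following NTU-Fi (3 x 114 x 500):

```python
# Illustrative consistency objective for self-supervised pretraining.
# The encoder, noise augmentation, and MSE loss are assumptions, not
# the AutoFi method itself; see the linked paper for the real objective.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 114 * 500, 128),
                        nn.ReLU(), nn.Linear(128, 64))

def consistency_loss(x):
    view1 = x + 0.01 * torch.randn_like(x)   # illustrative augmentation
    view2 = x + 0.01 * torch.randn_like(x)
    z1, z2 = encoder(view1), encoder(view2)
    return nn.functional.mse_loss(z1, z2)

x = torch.randn(8, 3, 114, 500)              # dummy unlabeled batch
loss = consistency_loss(x)
```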
## Model Zoo
### MLP
- It consists of 3 fully-connected layers followed by activation functions
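The MLP described above can be sketched as follows; the hidden widths are assumptions rather than the repository's exact values, and the input is a flattened UT-HAR frame (1 x 250 x 90):

```python
# Sketch of the 3-layer MLP; hidden widths (1024, 128) are assumptions.
import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self, in_dim=250 * 90, num_classes=7):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(in_dim, 1024), nn.ReLU(),
            nn.Linear(1024, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        # Flatten everything except the batch dimension before the FC stack.
        return self.fc(x.flatten(start_dim=1))

out = MLP()(torch.randn(4, 1, 250, 90))
```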
### LeNet
- **self.encoder** : It consists of 3 convolutional layers followed by activation functions and max-pooling layers to learn features
- **self.fc** : It consists of 2 fully-connected layers followed by activation functions for classification
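A minimal sketch of this encoder + classifier structure, assuming UT-HAR input (1 x 250 x 90); the channel counts, kernel sizes, and pooled feature size are assumptions, not the repository's exact configuration:

```python
# Sketch of the LeNet-style model: a 3-conv encoder with max-pooling,
# then a 2-layer FC head. Layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class LeNetLike(nn.Module):
    def __init__(self, num_classes=7):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),   # fixes the flattened size
        )
        self.fc = nn.Sequential(
            nn.Linear(32 * 4 * 4, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.fc(self.encoder(x).flatten(start_dim=1))

out = LeNetLike()(torch.randn(2, 1, 250, 90))
```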
### ResNet
- ***class*** **Bottleneck** : Each bottleneck consists of 3 convolutional layers, each followed by batch normalization and an activation function; a residual connection is added within the bottleneck
- ***class*** **Block** : Each block consists of 2 convolutional layers, each followed by batch normalization and an activation function; a residual connection is added within the block
- **self.reshape** : Reshapes the input into a 3 x 32 x 32 tensor
- **self.fc** : It consists of a fully-connected layer
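The two-convolution residual **Block** above can be sketched as follows (conv, batch norm, activation twice, plus a skip connection); the channel count is illustrative, and the reshape to 3 x 32 x 32 happens upstream:

```python
# Sketch of the 2-conv residual Block: conv -> BN -> ReLU twice,
# with the input added back before the final activation.
import torch
import torch.nn as nn

class Block(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)          # residual connection

out = Block(16)(torch.randn(2, 16, 32, 32))
```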
### RNN
- **self.rnn** : A one-layer RNN structure with a hidden dimension of 64
- **self.fc** : It consists of a fully-connected layer
### GRU
- **self.gru** : A one-layer GRU structure with a hidden dimension of 64
- **self.fc** : It consists of a fully-connected layer
### LSTM
- **self.lstm** : A one-layer LSTM structure with a hidden dimension of 64
- **self.fc** : It consists of a fully-connected layer
### BiLSTM
- **self.lstm** : A one-layer bidirectional LSTM structure with a hidden dimension of 64
- **self.fc** : It consists of a fully-connected layer
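The four recurrent models above (RNN, GRU, LSTM, BiLSTM) share one skeleton: a single recurrent layer with hidden size 64 feeding a fully-connected classifier. A sketch with GRU, where the (time steps x features) input reshaping is an assumption:

```python
# Common skeleton of the recurrent models: one recurrent layer
# (hidden size 64) whose final hidden state feeds a linear classifier.
import torch
import torch.nn as nn

class GRUClassifier(nn.Module):
    def __init__(self, in_features=90, num_classes=7, hidden=64):
        super().__init__()
        self.gru = nn.GRU(in_features, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, x):                  # x: (batch, time, features)
        _, h = self.gru(x)                 # h: (num_layers, batch, hidden)
        return self.fc(h[-1])              # classify from the last layer's state

out = GRUClassifier()(torch.randn(4, 250, 90))
```

Swapping `nn.GRU` for `nn.RNN` or `nn.LSTM` (or setting `bidirectional=True` and doubling the classifier input) gives the other variants.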
### CNN+GRU
- **self.encoder** : It consists of 3 convolutional layers followed by activation functions
- **self.gru** : A one-layer GRU structure with a hidden dimension of 64
- **self.classifier** : It consists of a dropout layer followed by a fully-connected layer and an activation function
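A sketch combining the three pieces above; the use of 1-D convolutions over the feature axis, and all layer sizes, are assumptions:

```python
# Sketch of CNN+GRU: a 3-layer convolutional encoder per time step,
# a one-layer GRU (hidden 64), and a dropout + FC + activation head.
# Layer sizes and the 1-D conv choice are illustrative assumptions.
import torch
import torch.nn as nn

class CNNGRU(nn.Module):
    def __init__(self, in_features=90, num_classes=7):
        super().__init__()
        self.encoder = nn.Sequential(      # convs over the feature axis
            nn.Conv1d(in_features, 64, 3, padding=1), nn.ReLU(),
            nn.Conv1d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv1d(64, 64, 3, padding=1), nn.ReLU(),
        )
        self.gru = nn.GRU(64, 64, batch_first=True)
        self.classifier = nn.Sequential(
            nn.Dropout(0.5), nn.Linear(64, num_classes), nn.ReLU())

    def forward(self, x):                  # x: (batch, time, features)
        feats = self.encoder(x.transpose(1, 2)).transpose(1, 2)
        _, h = self.gru(feats)
        return self.classifier(h[-1])

out = CNNGRU()(torch.randn(2, 250, 90))
```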
### ViT (Transformers)
- ***class*** **PatchEmbedding** : Divides the 2D input into equal-sized patches, prepends a cls_token to the patch sequence, and adds positional encodings
- ***class*** **ClassificationHead** : It consists of a layer-normalization layer followed by a fully-connected layer
- ***class*** **TransformerEncoderBlock** : It consists of a multi-head attention block, residual connections, and a feed-forward block. The structure is shown below:
<img src="./img/transformer_block.jpg" width="200"/>
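The patch-embedding step can be sketched as a strided convolution followed by token concatenation; the patch size, embedding dimension, and input shape here are assumptions:

```python
# Sketch of PatchEmbedding: split the 2D input into equal patches with
# a strided conv, prepend a learnable cls_token, add positional
# embeddings. Patch/embedding sizes are illustrative assumptions.
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    def __init__(self, in_ch=1, patch=10, dim=64, img_hw=(250, 90)):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)
        n_patches = (img_hw[0] // patch) * (img_hw[1] // patch)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos = nn.Parameter(torch.zeros(1, n_patches + 1, dim))

    def forward(self, x):
        tokens = self.proj(x).flatten(2).transpose(1, 2)   # (B, N, dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        return torch.cat([cls, tokens], dim=1) + self.pos

out = PatchEmbedding()(torch.randn(2, 1, 250, 90))
```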
## Dataset
#### UT-HAR
[*A Survey on Behavior Recognition Using WiFi Channel State Information*](https://ieeexplore.ieee.org/document/8067693) [[Github]](https://github.com/ermongroup/Wifi_Activity_Recognition)
- **CSI size** : 1 x 250 x 90
- **number of classes** : 7
- **classes** : lie down, fall, walk, pickup, run, sit down, stand up
- **train number** : 3977
- **test number** : 996
#### NTU-HAR
[*Efficientfi: Towards Large-Scale Lightweight Wifi Sensing via CSI Compression*](https://ieeexplore.ieee.org/document/9667414)
- **CSI size** : 3 x 114 x 500
- **number of classes** : 6
- **classes** : box, circle, clean, fall, run, walk
- **train number** : 936
- **test number** : 264
#### NTU-HumanID
[*CAUTION: A Robust WiFi-based Human Authentication System via Few-shot Open-set Gait Recognition*](https://ieeexplore.ieee.org/abstract/document/9726794)
- **CSI size** : 3 x 114 x 500
- **number of classes** : 14
- **classes** : gaits of 14 subjects
- **train number** : 546
- **test number** : 294
*Examples of NTU-Fi data*
<img src="./img/CSI_samples.jpg" width="1000"/>
#### Widar
[*Widar3.0: Zero-Effort Cross-Domain Gesture Recognition with Wi-Fi*](https://ieeexplore.ieee.org/document/9516988) [[Project]](http://tns.thss.tsinghua.edu.cn/widar3.0/)
- **BVP size** : 22 x 20 x 20
- **number of classes** : 22
- **classes** :
Push&Pull, Sweep, Clap, Slide, Draw-N(H), Draw-O(H), Draw-Rectangle(H),
Draw-Triangle(H), Draw-Zigzag(H), Draw-Zigzag(V), Draw-N(V), Draw-O(V), Draw-1,
Draw-2, Draw-3, Draw-4, Draw-5, Draw-6, Draw-7, Draw-8, Draw-9, Draw-10
- **train number** : 34926
- **test number** : 8726
*Classes of Widar data*
<img src="./img/Widar_classes.jpg" width="800"/>
#### Notice
- Please download and unzip all the datasets on a Linux system to avoid decoding errors.
- For UT-HAR, the data file is in CSV format and can be loaded via our code. It is not readable in Excel because of the encoding format inherited from the original dataset.
## Datasets Reference
```
@article{yousefi2017survey,
title={A survey on behavior recognition using WiFi channel state information},
author={Yousefi, Siamak and Narui, Hirokazu and Dayal, Sankalp and Ermon, Stefano and Valaee, Shahrokh},
journal={IEEE Communications Magazine},
volume={55},
number={10},
pages={98--104},
year={2017},
publisher={IEEE}
}
@article{yang2022autofi,
title={AutoFi: Towards Automatic WiFi Human Sensing via Geometric Self-Supervised Learning},
author={Yang, Jianfei and Chen, Xinyan and Zou, Han and Wang, Dazhuo and Xie, Lihua},
journal={arXiv preprint arXiv:2205.01629},
year={2022}
}
```