# Neural Granger Causality
The `Neural-GC` repository contains code for a deep learning-based approach to discovering Granger causality networks in multivariate time series. The methods implemented here are described in [this paper](https://arxiv.org/abs/1802.05842).
## Installation
To install the code, please clone the repository. All you need is `Python 3`, `PyTorch (>= 0.4.0)`, `numpy` and `scipy`.
## Usage
See examples of how to apply our approach in the notebooks `cmlp_lagged_var_demo.ipynb`, `clstm_lorenz_demo.ipynb`, and `crnn_lorenz_demo.ipynb`.
## How it works
The models implemented in this repository, called the cMLP, cLSTM and cRNN, are neural networks that model multivariate time series by forecasting each series with its own network. During training, a sparse penalty on each network's input-layer weight matrix drives groups of parameters to zero; when the weight group for input series *j* is zeroed out in the network predicting series *i*, series *j* is estimated to be Granger non-causal for series *i*.
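The resulting sparsity pattern can be read off directly from the input-layer weights. A minimal sketch of that readout, assuming a hypothetical layout in which each target series has a weight matrix whose columns correspond to input series (the repo's model classes expose this via their own accessor methods):

```python
import numpy as np

def granger_matrix(first_layer_weights, tol=1e-8):
    """Read an estimated Granger causality matrix from input-layer weights.

    first_layer_weights: list of p arrays, one per target series i, each of
    shape (hidden, p) -- column j holds the weights connecting input series j
    to the network that predicts series i. (Hypothetical layout for
    illustration only.)
    """
    p = len(first_layer_weights)
    GC = np.zeros((p, p), dtype=int)
    for i, W in enumerate(first_layer_weights):
        # series j Granger-causes series i iff its column group is nonzero
        GC[i] = (np.linalg.norm(W, axis=0) > tol).astype(int)
    return GC
```

Entry `GC[i, j] = 1` then indicates that series *j* Granger-causes series *i*.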
The cMLP model can be trained with three different penalties: group lasso, group sparse group lasso, and hierarchical. The cLSTM and cRNN models both use a group lasso penalty, and they differ from one another only in the type of RNN cell they use.
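All of these penalties are handled in training through their proximal operators. As one concrete instance, a sketch of the group lasso proximal operator (group soft-thresholding), which shrinks each input-series column group and zeroes it entirely when its norm falls below the threshold; here `lam` is assumed to already fold in the step size:

```python
import numpy as np

def prox_group_lasso(W, lam):
    """Group soft-thresholding over the columns of W.

    Each column is one input-series group: its norm is shrunk by lam, and
    the whole column is set to zero when its norm is below lam.
    """
    norms = np.linalg.norm(W, axis=0, keepdims=True)
    scale = np.clip(1.0 - lam / np.maximum(norms, 1e-12), 0.0, None)
    return W * scale
```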
Training models with non-convex loss functions and non-smooth penalties requires a specialized optimization strategy, and we use a proximal gradient descent approach (ISTA). Our paper finds that ISTA provides comparable performance to two other approaches: proximal gradient descent with a line search (GISTA), which guarantees convergence to a local minimum, and Adam, which converges faster (although it requires an additional thresholding parameter).
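The ISTA recipe alternates a gradient step on the smooth forecasting loss with a proximal step on the non-smooth penalty. A minimal sketch on a penalized least-squares problem, standing in for the network training loop (the proximal step here is elementwise soft-thresholding, the lasso special case of the group operators above; names and hyperparameters are illustrative):

```python
import numpy as np

def ista(X, y, lam, lr=0.01, steps=500):
    """Proximal gradient descent (ISTA) for lasso-penalized least squares:
    a gradient step on the smooth loss, then a proximal (shrinkage) step
    on the penalty, repeated to convergence."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / n              # gradient of smooth loss
        w = w - lr * grad                         # gradient step
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # prox step
    return w
```

Because the proximal step sets small coefficients exactly to zero, the iterates are sparse throughout training, which is what makes the zero pattern interpretable.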
## Other information
- Selecting the right regularization strength can be difficult and time consuming. To get results for many regularization strengths, you may want to run parallel training jobs or use a warm start strategy.
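A warm-start sweep can be sketched as follows: fit at the strongest penalty first, then initialize each subsequent fit from the previous solution. The `fit` callback and its signature are hypothetical placeholders for whatever training routine you use:

```python
def reg_path(fit, lams, w0):
    """Warm-started sweep over a regularization path.

    fit: callable (lam, init) -> solution, a hypothetical training routine.
    lams: iterable of regularization strengths.
    w0: initialization for the first (strongest-penalty) fit.
    """
    results = {}
    w = w0
    for lam in sorted(lams, reverse=True):  # strongest penalty first
        w = fit(lam, w)                     # warm start from previous fit
        results[lam] = w
    return results
```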
- Pretraining (training without regularization) followed by ISTA can lead to a different result than training directly with ISTA. Given the non-convex objective function, this is unsurprising, because the initialization from pretraining is very different than a random initialization. You may need to experiment to find what works best for you.
- If you want to train a debiased model with the learned sparsity pattern, use the `cMLPSparse`, `cLSTMSparse`, and `cRNNSparse` classes.
## Authors
- Ian Covert (<icovert@cs.washington.edu>)
- Alex Tank
- Nicholas Foti
- Ali Shojaie
- Emily Fox
## References
- Alex Tank, Ian Covert, Nicholas Foti, Ali Shojaie, Emily Fox. "Neural Granger Causality." *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 2021.