# DeepKoopman
Neural networks to learn Koopman eigenfunctions
Code for the paper ["Deep learning for universal linear embeddings of nonlinear dynamics"](https://www.nature.com/articles/s41467-018-07210-0) by Bethany Lusch, J. Nathan Kutz, and Steven L. Brunton.
To run the code:
1. Clone this repository.
2. In the data directory, recreate the desired dataset(s) by running DiscreteSpectrumExample, Pendulum, FluidFlowOnAttractor, and/or FluidFlowBox in Matlab (or download the datasets from Box [here](https://anl.box.com/s/9s29juzu892dfkhgxa1n1q4mj63nxabn)).
3. Back in the main directory, run the desired experiment(s) with Python.
Notes on running the Python experiments:
- A GPU is recommended but not required. The code can be run on a GPU or CPU without any changes.
- The paper reports results on four datasets. These were the best results from scripts that perform a random hyperparameter search (DiscreteSpectrumExampleExperiment.py, PendulumExperiment.py, FluidFlowOnAttractorExperiment.py, and FluidFlowBoxExperiment.py).
- To train networks using the specific parameters that produced the results in the paper instead of doing a parameter search, run DiscreteSpectrumExampleExperimentBestParams.py, PendulumExperimentBestParams.py, FluidFlowOnAttractorExperimentBestParams.py, and FluidFlowBoxExperimentBestParams.py.
- The experiment scripts loop over 200 random experiments (random parameters and random initializations of the weights). You'll probably want to kill the script well before all 200 finish! A sketch of this loop appears after this list.
- Each random experiment can run for up to params['max_time'] (4 or 6 hours in these experiments) but may be terminated automatically sooner if the error is not decreasing enough. If one experiment is not doing well, the script moves on to another random experiment.
- If the code decides to end an experiment, it saves the current results. It also saves every hour.
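For orientation, here is a minimal sketch of what such a loop looks like. The hyperparameter names, values, and early-termination rule below are illustrative assumptions, not the repo's actual code (see the *Experiment.py scripts and training.py for the real logic).

```python
import random
import time

def train_one_round(params):
    """Stand-in for one round of training; the real scripts train the
    autoencoder defined in networkarch.py and track its errors."""
    return random.random()

# Illustrative only: draw random hyperparameters, train under a wall-clock
# budget, and stop early when the error is no longer decreasing enough.
for experiment in range(200):
    params = {
        'hidden_width': random.choice([64, 128, 256]),  # assumed names, for illustration
        'learning_rate': 10 ** random.uniform(-5, -3),
        'max_time': 4 * 60 * 60,                        # seconds (4 hours)
    }
    start = time.time()
    best_val_error = float('inf')
    while time.time() - start < params['max_time']:
        val_error = train_one_round(params)
        if val_error > 0.95 * best_val_error:  # not improving enough:
            break                              # abandon this run, draw new parameters
        best_val_error = val_error
```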
Postprocessing:
- You might want to use something like ./postprocessing/InvestigateResultsExample.ipynb to check out your results. Which of your models has the best validation error so far? How does validation error compare to your hyperparameter choices? (A minimal sketch of this comparison appears after this list.)
- To see what I did to dive into a particular trained deep learning model on a dataset, see the notebooks ./postprocessing/BestModel-DiscreteSpectrumExample.ipynb, ./postprocessing/BestModel-Pendulum.ipynb, etc. These notebooks also show how I calculated numbers and created figures for the paper.
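If you'd rather start from a script than a notebook, a comparison like the one in InvestigateResultsExample.ipynb could begin with a sketch like this. The results directory, filename pattern, and which column holds the validation error are assumptions; adjust them to however your runs saved their error histories.

```python
import glob

import numpy as np

# Assumed layout: each run saved an error history as <name>_error.csv in ./results/.
best_model, best_val = None, np.inf
for path in glob.glob('./results/*_error.csv'):
    errors = np.loadtxt(path, delimiter=',', ndmin=2)
    val_error = errors[:, 1].min()   # assumption: column 1 is validation error
    if val_error < best_val:
        best_model, best_val = path, val_error
print(f'best validation error so far: {best_val:.2e} ({best_model})')
```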
New to deep learning? Here is some context:
- It is currently normal in deep learning to try a range of hyperparameters (a "hyperparameter search"). For example: how many layers should your network have? How wide should each layer be? You try some options and pick the best result. (See the next bullet point.) Further, the random initialization of the weights matters, so unless you fix the seed of your random number generator, you can re-run your training multiple times with fixed hyperparameters and still get different models with different errors. I didn't fix my seeds, so re-running my code can produce different models and errors.
- It is standard to split your data into three sets: training, validation, and testing. You fit your neural network model to your training data. You use the validation data only to compare different models and choose the best one; the error on your validation data estimates how well your model will generalize to new data. You set the testing data aside even further: you only calculate the error on the test data at the very end, after you've committed to a particular model. This should give a better estimate of how well your model will generalize, since you may have already relied heavily on your validation data when choosing a model. (A minimal sketch of such a split follows this list.)
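As a toy illustration of that three-way split (generic NumPy, independent of this repo's data pipeline):

```python
import numpy as np

rng = np.random.default_rng(seed=0)           # fixing the seed makes the split reproducible
data = rng.standard_normal((1000, 2))         # toy dataset: 1000 samples, 2 features
indices = rng.permutation(len(data))
n_train, n_val = 700, 150                     # 70% train, 15% validation, 15% test
train = data[indices[:n_train]]               # fit the model on this
val = data[indices[n_train:n_train + n_val]]  # compare models / pick hyperparameters
test = data[indices[n_train + n_val:]]        # touch only once, after committing to a model
```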
## Citation
```
@article{lusch2018deep,
  title={Deep learning for universal linear embeddings of nonlinear dynamics},
  author={Lusch, Bethany and Kutz, J Nathan and Brunton, Steven L},
  journal={Nature Communications},
  volume={9},
  number={1},
  pages={4950},
  year={2018},
  publisher={Nature Publishing Group},
  doi={10.1038/s41467-018-07210-0}
}
```