<div align="center">
<img src="https://github.com/eserie/wax-ml/blob/main/docs/_static/wax_logo.png" alt="logo" width="40%"></img>
</div>
# WAX-ML: A Python library for machine-learning and feedback loops on streaming data
![Continuous integration](https://github.com/eserie/wax-ml/actions/workflows/tests.yml/badge.svg)
[![Documentation Status](https://readthedocs.org/projects/wax-ml/badge/?version=latest)](https://wax-ml.readthedocs.io/en/latest/)
[![PyPI version](https://badge.fury.io/py/wax-ml.svg)](https://badge.fury.io/py/wax-ml)
[![Codecov](https://codecov.io/gh/eserie/wax-ml/branch/main/graph/badge.svg)](https://codecov.io/gh/eserie/wax-ml)
[![Black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/ambv/black)
[**Quickstart**](#quickstart-colab-in-the-cloud)
| [**Install guide**](#installation)
| [**Change logs**](https://wax-ml.readthedocs.io/en/latest/changelog.html)
| [**Reference docs**](https://wax-ml.readthedocs.io/en/latest/)
## Introduction
Wax is what you put on a surfboard to avoid slipping. It is an essential tool to go
surfing ...
WAX-ML is a research-oriented [Python](https://www.python.org/) library
providing tools to design powerful machine learning algorithms and feedback loops
working on streaming data.
It strives to complement [JAX](https://jax.readthedocs.io/en/latest/)
with tools dedicated to time series.
WAX-ML makes JAX-based programs easy to use for end-users working
with
[pandas](https://pandas.pydata.org/) and [xarray](http://xarray.pydata.org/en/stable/)
for data manipulation.
WAX-ML provides a simple mechanism for implementing feedback loops, allows the implementation of
reinforcement learning algorithms with functions, and makes them easy to integrate for
end-users working with the object-oriented reinforcement learning framework of the
[Gym](https://gym.openai.com/) library.
To learn more, you can read our [article on ArXiv](http://arxiv.org/abs/2106.06524)
or simply access the code in this repository.
## WAX-ML Goal
WAX-ML's goal is to expose "traditional" algorithms that are often difficult to find in the standard
Python ecosystem and that are related to time series and, more generally, to streaming data.
It aims to make it easy to work with algorithms from a wide variety of computational domains such as
machine learning, online learning, reinforcement learning, optimal control, time-series analysis,
optimization, and statistical modeling.
For now, WAX-ML focuses on **time-series** algorithms, as this is one of the areas of machine learning
that most lacks dedicated tools. Working with time series is notoriously difficult
and often requires very specific algorithms (statistical modeling, filtering, optimal control).
Even though some of the modern machine learning methods such as RNN, LSTM, or reinforcement learning
can do an excellent job on some specific time-series problems, most of the problems require using
more traditional algorithms such as linear and non-linear filters, FFT,
the eigendecomposition of matrices (e.g. [[7]](#references)),
principal component analysis (PCA) (e.g. [[8]](#references)), Riccati solvers for
optimal control and filtering, ...
By adopting a functional approach, inherited from JAX, WAX-ML aims to be an efficient tool to
combine modern machine learning approaches with more traditional ones.
Some work has been done in this direction in [[2] in References](#references) where transformer encoder
architectures are massively accelerated, with limited accuracy costs, by replacing the
self-attention sublayers with a standard, non-parameterized Fast Fourier Transform (FFT).
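To make this concrete, here is a minimal JAX sketch of such a Fourier mixing sublayer in the spirit of [[2]](#references) (a toy illustration, not code from WAX-ML): the parameterized self-attention sublayer is replaced by a 2D discrete Fourier transform over the sequence and hidden dimensions, keeping only the real part.

```python
import jax.numpy as jnp
from jax import jit


@jit
def fourier_mixing(x):
    """FNet-style mixing sublayer: apply an FFT along the hidden dimension,
    then along the sequence dimension, and keep the real part.
    There are no learned parameters, which is what makes it so cheap."""
    return jnp.real(jnp.fft.fft(jnp.fft.fft(x, axis=-1), axis=-2))


x = jnp.ones((16, 8))  # toy activations of shape (seq_len, d_model)
y = fourier_mixing(x)
assert y.shape == x.shape
```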
WAX-ML may also be useful for developing research ideas in areas such as online machine learning
(see [[1] in References](#references)) and development of control, reinforcement learning,
and online optimization methods.
## What does WAX-ML do?
In building WAX-ML, we have some pretty ambitious design and implementation goals.
To do things right, we decided to start small and to adopt an open-source design from the beginning.
For now, WAX-ML contains:
- transformation tools that we call "unroll" transformations, allowing us to
apply any transformation, possibly stateful, to sequential data. This generalizes the RNN
architecture to any stateful transformation, allowing the implementation of any kind of "filter".
- a "stream" module, described in [ð Streaming Data ð](#-streaming-data-), permitting us to
synchronize data streams with different time resolutions.
- some general pandas and xarray "accessors" permitting the application of any
JAX function to pandas and xarray data containers:
`DataFrame`, `Series`, `Dataset`, and `DataArray`.
- a ready-to-use exponential moving average filter that we expose with two APIs (see the sketch
after the figure below):
  - one for JAX users: as Haiku modules (`EWMA`, ... see the complete list in our
    [API documentation](https://wax-ml.readthedocs.io/en/latest/wax.modules.html)).
  - a second one for pandas and xarray users: as a drop-in replacement for the pandas
    `ewm` accessor.
- a simple module `OnlineSupervisedLearner` to implement online learning algorithms
for supervised machine learning problems.
- building blocks for designing feedback loops in reinforcement learning. We provide a module
called `GymFeedback` that allows the implementation of a feedback loop as introduced in the
[Gym](https://gym.openai.com/) library, illustrated in this figure:
<div align="center">
<img src="docs/tikz/gymfeedback.png" alt="logo" width="60%"></img>
</div>
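As a minimal usage sketch of these two APIs (the `register_wax_accessors` entry point and the `.wax.ewm(...).mean()` accessor are taken from the WAX-ML documentation; please refer to the API documentation for exact signatures and options), the EWMA filter can be applied to a pandas `DataFrame` as a drop-in replacement for the `ewm` accessor:

```python
import pandas as pd

from wax.accessors import register_wax_accessors

# Register the `.wax` accessors on pandas and xarray containers.
register_wax_accessors()

df = pd.DataFrame({"x": range(10)}, dtype="float64")

pandas_result = df.ewm(alpha=1 / 10).mean()   # pandas implementation
wax_result = df.wax.ewm(alpha=1 / 10).mean()  # WAX-ML drop-in replacement, backed by JAX

print(pandas_result.tail(1))
print(wax_result.tail(1))
```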
### What is JAX?
JAX is a research-oriented computational system implemented in Python that leverages the
XLA optimization framework for machine learning computations. It makes XLA usable with
the NumPy API and some functional primitives for just-in-time compilation,
differentiation, vectorization, and parallelization. It allows building higher-level
transformations or "programs" in a functional programming approach.
See [JAX's page](https://github.com/google/jax) for more details.
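For readers new to JAX, the following self-contained snippet illustrates the functional primitives mentioned above, namely `jit` for just-in-time compilation, `grad` for differentiation, and `vmap` for vectorization (plain JAX, independent of WAX-ML):

```python
import jax
import jax.numpy as jnp


def loss(w, x, y):
    """Mean-squared error of a linear prediction x @ w."""
    return jnp.mean((x @ w - y) ** 2)


grad_loss = jax.jit(jax.grad(loss))  # compiled gradient of the loss w.r.t. w
batched_dot = jax.vmap(jnp.dot)      # vectorize dot over a leading batch axis

w = jnp.zeros(3)
x = jnp.ones((10, 3))
y = jnp.ones(10)

print(grad_loss(w, x, y))   # gradient, shape (3,)
print(batched_dot(x, x))    # 10 element-wise dot products
```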
## Why use WAX-ML?
If you deal with time series and are a pandas or xarray user, but you want to use the impressive
tools of the JAX ecosystem, then WAX-ML might be the right tool for you, as it implements pandas and
xarray accessors to apply JAX functions.
If you are a user of JAX, you may be interested in adding WAX-ML to your toolbox to address
time-series problems.
## Design
### Research oriented
WAX-ML is a research-oriented library. It relies on the functional programming paradigm of
[JAX](https://jax.readthedocs.io/en/latest/notebooks/quickstart.html) and
[Haiku](https://github.com/deepmind/dm-haiku) to ease the
development of research ideas.
WAX-ML is a bit like [Flux](https://fluxml.ai/Flux.jl/stable/)
in the [Julia](https://julialang.org/) programming language.
### Functional programming
In WAX-ML, we pursue a functional programming approach inherited from JAX.
In this sense, WAX-ML is not a framework, unlike most object-oriented libraries. Instead, we
implement "functions" that must be pure to exploit the JAX ecosystem.
### Haiku modules
We use the "module" mechanism proposed by the Haiku library to easily generate pure function pairs,
called `init` and `apply` in Haiku, to implement programs that require the management of
parameters and/or state variables.
You can see
[the Haiku module API](https://dm-haiku.readthedocs.io/en/latest/api.html#modules-parameters-and-state)
and
[Haiku transformation functions](https://dm-haiku.readthedocs.io/en/latest/api.html#haiku-transforms)
for more details.
In this way, we can recover all the advantages of
object-oriented programming, but exposed through a functional programming approach.
This eases the development of robust and reusable features and allows us to
develop "mini-languages" tailored to specific scientific domains.
### WAX-ML works with other libraries
We want existing machine learning libraries to work well together while trying to leverage their strengths.