# CNTKx
CNTKx is a deep learning library that builds on and extends Microsoft Cognitive Toolkit [CNTK](https://github.com/Microsoft/CNTK).
Even though CNTK 2.7 is the last planned release of CNTK, cntkx will continue to be actively developed, with more models and pre-built components coming soon!
Feel free to open an issue for any request or a PR to contribute :)
## Installation
cntkx is written in pure Python and depends on cntk. Please get a working installation of cntk first, then:

```
pip install cntkx
```

cntkx only works with `python>=3.6`.
## Available Components
| ops | Description |
| --- | ---|
| `floor_division` | element-wise floor_division |
| `remainder` | element-wise remainder of division |
| `scalar` | cast tensor to scalar (1,) |
| `cumsum` | Cumulative summation along axis |
| `upsample` | Upsample by k factor (for image) |
| `centre_crop` | Crop centre of image |
| `swish` | Activation |
| `mish` | Activation |
| `hardmax` | Activation |
| `erf` | Error function |
| `gelu` | Gaussian Error Linear Unit function |
| `gelu_fast` | fast approximation of Gaussian Error Linear Unit function |
| `sequence.pad` | Pad at start or end of sequence axis |
| `sequence.length` | length of sequence |
| `sequence.position` | position of every sequence element |
| `sequence.stride` | strides across sequential axis |
| `sequence.join` | joins two sequences along their sequential axis |
| `sequence.window` | creates sliding window along the sequence axis |
| `sequence.window_causal` | creates causal sliding window along the sequence axis |
| `sequence.reverse` | reverses the items along the dynamic sequence axis |
| `sequence.reduce_mean` | calculates the mean along the dynamic sequence axis |
| `sequence.reduce_concat_pool` | drop-in replacement for `sequence.last` |
| `random.sample` | Samples an unnormalised log probability distribution |
| `random.sample_with_bias` | Samples an unnormalised log probability distribution over-weighted to more probable classes |
| `random.sample_top_k` | Samples from the top_k of an unnormalised log probability distribution |
| `batchmatmul` | Batch Matrix Multiplication on a static batch axis, similar to tf.matmul |
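As a rough illustration of what some of the activation ops above compute, here are the standard scalar formulas from the literature in pure Python. This is only an illustrative sketch; the cntkx ops themselves operate element-wise on CNTK tensors (e.g. `Cx.mish(x)` inside a model).

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def swish(x: float) -> float:
    # swish(x) = x * sigmoid(x)
    return x * sigmoid(x)

def mish(x: float) -> float:
    # mish(x) = x * tanh(softplus(x)), where softplus(x) = ln(1 + e^x)
    return x * math.tanh(math.log1p(math.exp(x)))

def gelu(x: float) -> float:
    # exact GELU using the Gaussian CDF: x * Phi(x)
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))
```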
| Layers | Description |
| --- | ---|
| `QRNN` | Quasi-Recurrent Neural Network |
| `Recurrence` | With option to apply `VariationalDroppout` |
| `PyramidalBiRecurrence` | Pyramidal bi-directional recurrence |
| `VariationalDropout` | Single binary dropout mask for entire sequence |
| `SinusoidalPositionalEmbedding` | Non-learnable positional embedding (no max sequence length) |
| `PositionalEmbedding` | Learnable Positional Embedding (used in BERT) |
| `BertEmbeddings` | BERT Embeddings (word + token_type + positional) |
| `BertPooler` | Pooler used in BERT |
| `SpatialPyramidPooling` | Fixed pooled representation regardless of image input size |
| `GatedLinearUnit` | Gated Convolutional Neural Network |
| `ScaledDotProductAttention` | Attention used in BERT and Transformer (aka 'attention is all you need') |
| `MultiHeadAttention` | Attention used in BERT and Transformer (aka 'attention is all you need') |
| `GaussianWindowAttention` | Windowed attention, instead of conventional attention where everything is attended to at the same time |
| `SequentialDense` | Applies Dense to a window of sequence items along the sequence axis |
| `SequentialMaxPooling` | Max pool across sequential axis and static axes |
| `SequentialAveragePooling` | Average pool across sequential axis and static axes |
| `SequentialConcatPooling` | Concat of average and max pool across sequential axis and static axes |
| `vFSMN` | Vectorised Feedforward Sequential Memory Networks |
| `cFSMN` | Compact Feedforward Sequential Memory Networks |
| `BiRecurrence` | Bi-directional recurrent layer with a weight-tying option to halve the parameter requirement |
| `GlobalConcatPooling` | Global spatial concat pooling of average and max |
|`FilterResponseNormalization`| Drop-in replacement for batch norm with superior performance |
|`Boom`| More parametrically efficient alternative to Position-Wise FeedForward layer found in Transformer |
|`GaussianAttentionSeqImage`| Memory-efficient attention that uses 2D Gaussian filters for images |
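`SinusoidalPositionalEmbedding` above is non-learnable with no maximum sequence length. A scalar pure-Python sketch of the standard sinusoidal formula from "Attention Is All You Need", which it is presumably based on (the exact cntkx formulation is an assumption here):

```python
import math

def sinusoidal_positional_embedding(position: int, dim: int) -> list:
    # Even indices get sin, odd indices get cos, with geometrically
    # spaced wavelengths; depends only on position, so no learning
    # and no maximum sequence length.
    emb = []
    for i in range(dim):
        angle = position / (10000 ** (2 * (i // 2) / dim))
        emb.append(math.sin(angle) if i % 2 == 0 else math.cos(angle))
    return emb
```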
| Blocks | Description |
| --- | ---|
| `WeightDroppedLSTM` | A form of regularised LSTM |
| `IndyLSTM` | A parameter efficient form of LSTM |
| `IndRNN` | An RNN with long memory that can be stacked deeply |
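To illustrate what makes `IndRNN` parameter-light and stackable, here is a minimal scalar sketch of one recurrent step following Li et al. (2018); this is not the cntkx block itself, whose activation and signature may differ:

```python
def indrnn_step(x, h_prev, w, u, b):
    # IndRNN: each hidden unit i has an independent recurrent scalar
    # weight u[i] instead of a full hidden-to-hidden matrix, which
    # eases gradient flow and allows very deep stacking. ReLU activation.
    return [max(0.0, sum(wi * xi for wi, xi in zip(w[i], x)) + u[i] * h_prev[i] + b[i])
            for i in range(len(h_prev))]
```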
| Loss | Description |
| --- | ---|
| `gaussian_mdn_loss` | loss function when using Mixture density network |
| `focal_loss_with_softmax` | A kind of cross entropy that handles extreme class imbalance |
| `cross_entropy_with_softmax` | Added `label smoothing regularisation` in cross entropy with softmax |
| `generalised_robust_barron_loss` | generalised robust loss |
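As a sketch of the idea behind `focal_loss_with_softmax`, here is the scalar focal loss of Lin et al. in pure Python. The cntkx op operates on CNTK tensors and its exact options are not shown here; this is illustrative only:

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def focal_loss(logits, true_class, gamma=2.0):
    # FL(p_t) = -(1 - p_t)^gamma * log(p_t): down-weights easy,
    # well-classified examples so training focuses on hard ones.
    p_t = softmax(logits)[true_class]
    return -((1.0 - p_t) ** gamma) * math.log(p_t)
```

With `gamma=0` this reduces to ordinary cross entropy with softmax.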
| Models | Description |
| --- | ---|
| `VGG` | Image Classification |
| `UNET` | Semantic Segmentation |
| `Transformer` | Language Modelling |
| `MDN` | Mixture Density Networks |
| Pre-trained models | Description |
| --- | ---|
| `Bert` | Bidirectional Encoder Representations from Transformers |
| [fwd_wt103.hdf5](https://1drv.ms/u/s!AjJ4XyC3prp8mItNxiawGK4gD8iMhA?e=wh7PLB) | The weight parameters of fastai's pytorch model. To be used to initialise `PretrainedWikitext103LanguageModel` |
| [fwd_wt103.cntk](https://1drv.ms/u/s!AjJ4XyC3prp8mItPBdfmDYr9QP7J4w?e=k1BXlW) | The converted cntk model of fastai's pytorch model. To be used with `C.load_model` |
| [fwd_wt103.onnx](https://1drv.ms/u/s!AjJ4XyC3prp8mItO70T_q8HOPwa6aQ?e=h2Fiv5) | The converted ONNX model of fastai's pytorch model. |
| Learners | Description |
| --- | ---|
| `CyclicalLearningRate` | A method that eliminates the need to search for the best learning rate value and schedule |
| `RAdam` | a variant of `Adam` that doesn't require any warmup |
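The basic triangular policy behind cyclical learning rates (Smith, 2017) can be sketched in a few lines. This is only the simplest policy, not the API of cntkx's `CyclicalLearningRate` learner, which wraps this idea for CNTK training loops:

```python
def triangular_clr(iteration, base_lr=1e-4, max_lr=1e-2, step_size=2000):
    # Triangular cyclical learning rate: the lr ramps linearly from
    # base_lr to max_lr over step_size iterations, then back down,
    # repeating forever.
    cycle = (iteration // (2 * step_size)) + 1
    x = abs(iteration / step_size - 2 * cycle + 1)
    return base_lr + (max_lr - base_lr) * max(0.0, 1.0 - x)
```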
| Misc | Description |
| --- | ---|
| `CTCEncoder` | Helper class to convert data into a format acceptable for cntk's ctc implementation |
## C# CNTK Tutorials
This library is implemented in the pure cntk python API. For help with cntk in C#, you can refer to these two repositories:
[deep-learning-with-csharp-and-cntk](https://github.com/anastasios-stamoulis/deep-learning-with-csharp-and-cntk)
and [DeepBelief_Course4_Examples](https://github.com/AllanYiin/DeepBelief_Course4_Examples).
## F# CNTK
For the F# wrapper of CNTK, please visit [FsCNTK](https://github.com/fwaris/FsCNTK);
it also contains example implementations such as seq2seq, autoencoder, LSTM, and GAN.
## News
***2020-04-15***
#### Added `GaussianAttentionSeqImage`
`GaussianAttentionSeqImage` is a 2D Gaussian spatial attention implementation.
To use it, the encoded image must be formulated as a cntk sequence. This can be useful when
you are constrained by GPU memory, as 2D Gaussian attention is more memory efficient than standard attention.
It comes from the DeepMind paper "DRAW: A Recurrent Neural Network for Image Generation" by Gregor et al.
More details can be found at https://arxiv.org/abs/1502.04623
Example:

```python
import cntk as C
import cntkx as Cx

n = 5
num_channels = 3
image_height = 64
expected_image_width = 1000

image_seq = C.sequence.input_variable((num_channels, image_height))  # rgb image with variable width and fixed height
decoder_hidden_state = ...  # from decoder somewhere in the network
attended_image = Cx.layers.GaussianAttentionSeqImage(n, image_height, expected_image_width)(image_seq, decoder_hidden_state)

assert attended_image.shape == (num_channels, n, n)
```
***2020-03-29***
#### Added `generalised_robust_barron_loss` and `sample_with_bias`
`generalised_robust_barron_loss` is a single loss function that generalises the Cauchy/Lorentzian,
Geman-McClure, Welsch/Leclerc, generalised Charbonnier, Charbonnier/pseudo-Huber/L1-L2, and
L2 loss functions.
It can be used as a drop-in replacement in any regression task.
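A scalar sketch of the general form from Barron (2019), where the shape parameter `alpha` selects the member of the loss family and `c` is the scale. This is an illustrative pure-Python version, not the cntkx op:

```python
import math

def barron_loss(x, alpha, c=1.0):
    # General robust loss rho(x, alpha, c).
    # alpha = 2 -> L2, alpha = 0 -> Cauchy/Lorentzian,
    # alpha = -2 -> Geman-McClure, alpha -> -inf -> Welsch/Leclerc.
    z = (x / c) ** 2
    if alpha == 2:                       # limit as alpha -> 2
        return 0.5 * z
    if alpha == 0:                       # limit as alpha -> 0
        return math.log(0.5 * z + 1.0)
    b = abs(alpha - 2.0)
    return (b / alpha) * ((z / b + 1.0) ** (alpha / 2.0) - 1.0)
```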