# TensorFlow Similarity: Metric Learning for Humans
TensorFlow Similarity is a [TensorFlow](https://tensorflow.org) library focused
on making metric learning easy. TensorFlow Similarity is still in beta, and some
features, such as semi-supervised learning, are not yet implemented.
## Introduction
TensorFlow Similarity offers state-of-the-art algorithms for metric learning and
all the components needed to research, train, evaluate, and serve models
that learn from similar-looking examples. With it you can quickly and easily:
- Train and serve models that find similar items, such as images,
in large indexes.
- Perform semi-supervised or self-supervised training to
train or boost classification models when you have a large corpus with
few labeled examples. **Not yet available**.
### Supervised and self-supervised models
The metric learning objective differs from traditional classification:
- *Supervised models* learn to output metric embeddings (1D float tensors)
with the property that if two examples are close in the real world,
their embeddings will be close in the
projected [metric space](https://en.wikipedia.org/wiki/Metric_space).
Representing items by their metric embeddings makes it possible to build
indexes that contain "classes" that were not seen during training,
to add classes to the index without retraining, and to need only
a few examples per class for both training and retrieval.
This ability to operate on a few examples per class is sometimes
referred to as few-shot learning in the literature.
What makes retrieving similar items from the index very efficient is that
metric learning allows the use of [Approximate Nearest Neighbor Search](https://en.wikipedia.org/wiki/Nearest_neighbor_search)
to search over the embeddings in sublinear time, instead of the standard
[Nearest Neighbor Search](https://en.wikipedia.org/wiki/Nearest_neighbor_search), which scales linearly with the size of the index.
In practice, TensorFlow Similarity's built-in `Index()`, which leverages
[NMSLIB](https://github.com/nmslib/nmslib), can find the closest items
in a fraction of a second even when the index contains over 1M elements
(see the short indexing sketch after this list).
- **Self-supervised contrastive models** help train more accurate models by
performing large-scale pretraining that aims at learning a consistent
representation of the data, by "contrasting" different representations of
the same example generated via data augmentation and/or contrasting the
representations of different examples to separate them. The model is then
fine-tuned on the few labeled examples like any classification model.
**This part is still a work in progress.**
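The sketch below illustrates the few-shot index/lookup workflow described above: given an already-trained `SimilarityModel`, a handful of examples per class (including classes never seen during training) are indexed and then queried. The variable names (`model`, `x_index`, `y_index`, `x_query`) are placeholders, and the `k` argument and the exact return type of the lookup may vary between versions.

```python
# A minimal sketch of the few-shot index/lookup workflow.
# `model` is assumed to be an already-trained SimilarityModel;
# `x_index`, `y_index`, and `x_query` are placeholder arrays.

# Index a few examples per class, including classes unseen during training.
model.index(x_index, y_index, data=x_index)

# Retrieve the most similar indexed items for a single query example.
neighbors = model.single_lookup(x_query[0], k=5)
print(neighbors)  # each neighbor carries the stored label, data, and distance
```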
Overall, TensorFlow Similarity's well-tested, composable components
follow Keras best practices to ensure they can be seamlessly integrated
into your TensorFlow workflows and get you results faster, whether you
are doing research or building innovative products.
## What's new
- August 2021 (v0.13.x): Added many new contrastive losses,
including Circle Loss, PN Loss, Lifted Structure Loss, and
Multi Similarity Loss.
For previous changes, see the [changelog -- FIXME](FIXME).
## Getting Started
### Installation
Use pip to install the library:
```shell
pip install tensorflow_similarity
```
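To sanity-check the installation, you can import the package and print its version (a minimal sketch; the `__version__` attribute is assumed to be exported by the package):

```python
# Quick sanity check that the install worked (a minimal sketch).
import tensorflow_similarity

print(tensorflow_similarity.__version__)  # e.g. 0.13.x
```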
### Documentation
The detailed and narrated notebooks are a good way to get started
with TensorFlow Similarity. There is likely one that is similar to
your data or your problem (if not, let us know). You can start working with
the examples immediately in Google Colab by clicking the Google Colab icon.
For more information about specific functions, you can [check the API documentation -- FIXME]().
## Supported Algorithms
### Supervised learning
| Name | Description |
| ----------- | ----------- |
| Triplet Loss | Classic FaceNet-style loss that pulls an anchor closer to a positive example than to a negative one by a margin. |
| PN Loss | Triplet-style loss that contrasts positive and negative examples within each batch. |
| Multi Similarity Loss | Weights pairs using multiple forms of similarity (self-similarity and relative similarities) to mine informative pairs. |
| Circle Loss | Re-weights each pair similarity so that less-optimized similarities receive larger gradients. |
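The sketch below shows how one of these losses might be plugged into a `SimilarityModel`. The small convolutional architecture, the embedding size, and the `MetricEmbedding`, `MultiSimilarityLoss`, and `SimilarityModel` import paths reflect the 0.13.x package layout and are assumptions; adapt them to your data and version.

```python
# A minimal sketch of compiling a SimilarityModel with one of the
# supported metric losses. Architecture and sizes are illustrative.
import tensorflow as tf
from tensorflow_similarity.layers import MetricEmbedding      # assumed export
from tensorflow_similarity.losses import MultiSimilarityLoss  # assumed export
from tensorflow_similarity.models import SimilarityModel      # assumed export

inputs = tf.keras.layers.Input(shape=(28, 28, 1))
x = tf.keras.layers.Conv2D(32, 3, activation="relu")(inputs)
x = tf.keras.layers.MaxPool2D()(x)
x = tf.keras.layers.Flatten()(x)
# MetricEmbedding outputs the L2-normalized embedding the metric losses expect.
outputs = MetricEmbedding(64)(x)

model = SimilarityModel(inputs, outputs)
model.compile(optimizer="adam", loss=MultiSimilarityLoss())
```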
## Package components
![TensorFlow Similarity Overview](api/images/tfsim_overview.png)
TensorFlow Similarity, as visible in the diagram above, offers the following
components to help research, train, evaluate, and serve metric models:
- **`SimilarityModel()`**: This class subclasses the `tf.keras.Model` class and extends it with additional properties that are useful for metric learning. For example, it adds the methods:
  1. `index()`: Enables indexing of the embeddings.
  2. `lookup()`: Takes samples, calls `predict()`, and searches for neighbors within the index.
- **`MetricLoss()`**: This virtual class, which extends the `tf.keras.losses.Loss` class, is the base class from which metric losses are derived. This subclassing ensures proper error checking (i.e., that the user is training the model with a metric loss), enables better static analysis, and enforces additional constraints such as using a distance function that is supported by the index. Additionally, metric losses make use of the fully tested and highly optimized pairwise distance functions provided by TF Similarity, which are available under the `Distances.*` classes.
- **`Samplers()`**: Samplers ensure that each batch contains at least n (with n >= 2) examples of each class, as losses such as TripletLoss can't work properly if this condition is not met. TF Similarity offers an in-memory sampler for small datasets and a TFRecordDataset sampler for large-scale ones (see the sampler sketch after this list).
- **`Indexer()`**: The Indexer and its sub-components index known embeddings alongside their metadata. The embedding metadata is stored within `Table()`, while `Matcher()` is used to perform [fast approximate nearest neighbor searches](https://en.wikipedia.org/wiki/Nearest_neighbor_search) that quickly retrieve the indexed elements closest to the embeddings supplied in the `lookup()` and `single_lookup()` functions.
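As a concrete example of the samplers mentioned above, the sketch below builds an in-memory sampler over MNIST. The `MultiShotMemorySampler` name and its argument names follow the 0.13.x API and may differ in other versions.

```python
# A minimal sketch of the in-memory sampler (argument names may vary by version).
from tensorflow.keras.datasets import mnist
from tensorflow_similarity.samplers import MultiShotMemorySampler  # assumed export

(x_train, y_train), _ = mnist.load_data()

sampler = MultiShotMemorySampler(
    x_train,
    y_train,
    classes_per_batch=10,             # number of distinct classes per batch
    examples_per_class_per_batch=4,   # >= 2 so pair/triplet losses find positives
)

# The sampler behaves like a Keras sequence and can be passed to model.fit().
```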
The `Evaluator()` component is used to compute `EvalMetrics()` on a specific index for evaluation and calibration purposes.
The default `Index()` sub-components run in-memory and are optimized for interactive settings such as Jupyter notebooks, Colab, and metric computation during training (e.g., using the provided `EvalCallback()`). Indexes are serialized as part of `model.save()`, so you can reload them via `model.index_load()` for serving purposes or further training/evaluation.
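A short sketch of that save/reload cycle, using the call names from the paragraph above; whether `index_load()` takes the save path as an argument is an assumption, so check the API documentation for the exact signature.

```python
# A minimal sketch of persisting and restoring a model together with its index.
save_path = "similarity_model"

# Saving: the index is serialized as part of model.save().
model.save(save_path)

# Restoring: after the SimilarityModel itself has been reloaded (e.g. with the
# Keras loader plus TF Similarity custom objects), restore its index:
model.index_load(save_path)   # path argument is an assumption; see the API docs
neighbors = model.single_lookup(x_query[0])
```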
The default implementation can easily scale to medium-sized deployments (1M-10M+ points), provided the machines used have enough memory. For very large-scale deployments you will need to subclass the components to match your own architecture. See the FIXME colab for how to deploy TF Similarity in production.
For more information about a given component, head to the [API documentation](FIXME) or read [the TensorFlow Similarity paper](FIXME).
## Citing
Please cite this reference if you use any part of TensorFlow Similarity
in your research:
```bibtex
@article{EBSIM21,
  title={TensorFlow Similarity: A Usable, High-Performance Metric Learning Library},
  author={Elie Bursztein and James Long and Shun Lim and Owen Vallis and Francois Chollet},
  journal={FIXME},
  year={2021}
}
```
## Disclaimer
This is not an official Google product.