[![](data/facenet-pytorch-banner.png)](https://xscode.com/timesler/facenet-pytorch)
# Face Recognition Using Pytorch
[![Downloads](https://pepy.tech/badge/facenet-pytorch)](https://pepy.tech/project/facenet-pytorch)
[![Code Coverage](https://img.shields.io/codecov/c/github/timesler/facenet-pytorch.svg)](https://codecov.io/gh/timesler/facenet-pytorch)
| Python | 3.7 | 3.6 | 3.5 |
| :---: | :---: | :---: | :---: |
| Status | [![Build Status](https://travis-ci.com/timesler/facenet-pytorch.svg?branch=master)](https://travis-ci.com/timesler/facenet-pytorch) | [![Build Status](https://travis-ci.com/timesler/facenet-pytorch.svg?branch=master)](https://travis-ci.com/timesler/facenet-pytorch) | [![Build Status](https://travis-ci.com/timesler/facenet-pytorch.svg?branch=master)](https://travis-ci.com/timesler/facenet-pytorch) |
[![xscode](https://img.shields.io/badge/Available%20on-xs%3Acode-blue?style=?style=plastic&logo=appveyor&logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAEAAAABACAMAAACdt4HsAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAAAZQTFRF////////VXz1bAAAAAJ0Uk5T/wDltzBKAAAAlUlEQVR42uzXSwqAMAwE0Mn9L+3Ggtgkk35QwcnSJo9S+yGwM9DCooCbgn4YrJ4CIPUcQF7/XSBbx2TEz4sAZ2q1RAECBAiYBlCtvwN+KiYAlG7UDGj59MViT9hOwEqAhYCtAsUZvL6I6W8c2wcbd+LIWSCHSTeSAAECngN4xxIDSK9f4B9t377Wd7H5Nt7/Xz8eAgwAvesLRjYYPuUAAAAASUVORK5CYII=)](https://xscode.com/timesler/facenet-pytorch)
This is a repository for Inception Resnet (V1) models in pytorch, pretrained on VGGFace2 and CASIA-Webface.
Pytorch model weights were initialized using parameters ported from David Sandberg's [tensorflow facenet repo](https://github.com/davidsandberg/facenet).
Also included in this repo is an efficient pytorch implementation of MTCNN for face detection prior to inference. These models are also pretrained. To our knowledge, this is the fastest MTCNN implementation available.
## Table of contents
* [Table of contents](#table-of-contents)
* [Quick start](#quick-start)
* [Pretrained models](#pretrained-models)
* [Example notebooks](#example-notebooks)
+ [*Complete detection and recognition pipeline*](#complete-detection-and-recognition-pipeline)
+ [*Face tracking in video streams*](#face-tracking-in-video-streams)
+ [*Finetuning pretrained models with new data*](#finetuning-pretrained-models-with-new-data)
+ [*Guide to MTCNN in facenet-pytorch*](#guide-to-mtcnn-in-facenet-pytorch)
+ [*Performance comparison of face detection packages*](#performance-comparison-of-face-detection-packages)
+ [*The FastMTCNN algorithm*](#the-fastmtcnn-algorithm)
* [Running with docker](#running-with-docker)
* [Use this repo in your own git project](#use-this-repo-in-your-own-git-project)
* [Conversion of parameters from Tensorflow to Pytorch](#conversion-of-parameters-from-tensorflow-to-pytorch)
* [References](#references)
## Quick start
1. Install:
```bash
# With pip:
pip install facenet-pytorch
# or clone this repo, removing the '-' to allow python imports:
git clone https://github.com/timesler/facenet-pytorch.git facenet_pytorch
# or use a docker container (see https://github.com/timesler/docker-jupyter-dl-gpu):
docker run -it --rm timesler/jupyter-dl-gpu pip install facenet-pytorch && ipython
```
1. In python, import facenet-pytorch and instantiate models:
```python
from facenet_pytorch import MTCNN, InceptionResnetV1
# If required, create a face detection pipeline using MTCNN:
mtcnn = MTCNN(image_size=<image_size>, margin=<margin>)
# Create an inception resnet (in eval mode):
resnet = InceptionResnetV1(pretrained='vggface2').eval()
```
1. Process an image:
```python
from PIL import Image
img = Image.open(<image path>)
# Get cropped and prewhitened image tensor
img_cropped = mtcnn(img, save_path=<optional save path>)
# Calculate embedding (unsqueeze to add batch dimension)
img_embedding = resnet(img_cropped.unsqueeze(0))
# Or, if using for VGGFace2 classification
resnet.classify = True
img_probs = resnet(img_cropped.unsqueeze(0))
```
See `help(MTCNN)` and `help(InceptionResnetV1)` for usage and implementation details.
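Once embeddings have been computed, two faces can be compared by the distance between their vectors. A minimal NumPy sketch of this idea (the toy vectors and the distance threshold are illustrative assumptions, not values taken from this repo):

```python
import numpy as np

def euclidean_distance(e1, e2):
    """L2 distance between two embedding vectors."""
    return float(np.linalg.norm(np.asarray(e1) - np.asarray(e2)))

def same_identity(e1, e2, threshold=1.1):
    """Treat two faces as the same person if their embeddings are closer
    than an (illustrative) distance threshold."""
    return euclidean_distance(e1, e2) < threshold

# Toy 512-d unit vectors standing in for resnet embedding outputs
rng = np.random.default_rng(0)
a = rng.normal(size=512)
a /= np.linalg.norm(a)
b = a + rng.normal(scale=0.01, size=512)  # slight perturbation of the same face
b /= np.linalg.norm(b)

print(same_identity(a, b))  # small perturbation -> treated as the same identity
```

In practice `a` and `b` would be the outputs of `resnet(...)` for two cropped face images; an appropriate threshold depends on the model and should be tuned on a validation set.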
## Pretrained models
See: [models/inception_resnet_v1.py](models/inception_resnet_v1.py)
The following models have been ported to pytorch (with links to download pytorch state_dict's):
|Model name|LFW accuracy (as listed [here](https://github.com/davidsandberg/facenet))|Training dataset|
| :- | :-: | -: |
|[20180408-102900](https://drive.google.com/uc?export=download&id=12DYdlLesBl3Kk51EtJsyPS8qA7fErWDX) (111MB)|0.9905|CASIA-Webface|
|[20180402-114759](https://drive.google.com/uc?export=download&id=1TDZVEBudGaEd5POR5X4ZsMvdsh1h68T1) (107MB)|0.9965|VGGFace2|
There is no need to manually download the pretrained state_dict's; they are downloaded automatically on model instantiation and cached for future use in the torch cache. To use an Inception Resnet (V1) model for facial recognition/identification in pytorch, use:
```python
from facenet_pytorch import InceptionResnetV1
# For a model pretrained on VGGFace2
model = InceptionResnetV1(pretrained='vggface2').eval()
# For a model pretrained on CASIA-Webface
model = InceptionResnetV1(pretrained='casia-webface').eval()
# For an untrained model with 100 classes
model = InceptionResnetV1(num_classes=100).eval()
# For an untrained 1001-class classifier
model = InceptionResnetV1(classify=True, num_classes=1001).eval()
```
Both pretrained models were trained on 160x160 px images, so they will perform best when applied to images resized to this shape. For best results, images should also be cropped to the face using MTCNN (see below).
By default, the above models return 512-dimensional embeddings of images. To enable classification instead, either pass `classify=True` to the model constructor or set the object attribute afterwards with `model.classify = True`. For VGGFace2, the pretrained model will output logit vectors of length 8631; for CASIA-Webface, logit vectors of length 10575.
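In classification mode the model returns raw logits, so turning them into class probabilities requires applying a softmax yourself. A minimal NumPy sketch (the 8631-class shape matches the VGGFace2 output described above; the logit values themselves are made up):

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Fake batch of logits shaped like a VGGFace2 classifier output: (1, 8631)
rng = np.random.default_rng(0)
logits = rng.normal(size=(1, 8631))
probs = softmax(logits)

print(probs.shape)          # (1, 8631)
print(int(probs.argmax()))  # index of the most likely identity class
```

In real use, `logits` would be the output of a model constructed with `classify=True`.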
## Example notebooks
### *Complete detection and recognition pipeline*
Face recognition can be easily applied to raw images by first detecting faces using MTCNN before calculating embedding or probabilities using an Inception Resnet model. The example code at [examples/infer.ipynb](examples/infer.ipynb) provides a complete example pipeline utilizing datasets, dataloaders, and optional GPU processing.
### *Face tracking in video streams*
MTCNN can be used to build a face tracking system (using the `MTCNN.detect()` method). A full face tracking example can be found at [examples/face_tracking.ipynb](examples/face_tracking.ipynb).
![](examples/tracked.gif)
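A tracker built on `MTCNN.detect()` typically associates detections across frames by bounding-box overlap. A minimal intersection-over-union (IoU) matcher, assuming boxes in `[x1, y1, x2, y2]` format (the example boxes and the 0.3 threshold are invented for illustration):

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def match_tracks(prev_boxes, new_boxes, threshold=0.3):
    """Greedily assign each new detection to the previous box it overlaps most."""
    matches = {}
    for i, nb in enumerate(new_boxes):
        scores = [iou(pb, nb) for pb in prev_boxes]
        if scores and max(scores) >= threshold:
            matches[i] = int(np.argmax(scores))
    return matches

prev = [[10, 10, 50, 50], [100, 100, 150, 150]]
new = [[12, 12, 52, 52], [300, 300, 340, 340]]
print(match_tracks(prev, new))  # {0: 0} -- first face tracked, second box is a new face
```

The notebook linked above uses the detections directly; this sketch only illustrates the association step a tracker would add on top.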
### *Finetuning pretrained models with new data*
In most situations, the best way to implement face recognition is to use the pretrained models directly, with either a clustering algorithm or a simple distance metric to determine the identity of a face. However, if finetuning is required (i.e., if you want to select identity based on the model's output logits), an example can be found at [examples/finetune.ipynb](examples/finetune.ipynb).
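The simple distance-metric approach amounts to a nearest-neighbour lookup against a gallery of known embeddings. A hedged NumPy sketch (the gallery names, toy vectors, and distance threshold are invented for illustration):

```python
import numpy as np

def identify(embedding, gallery, threshold=1.0):
    """Return the gallery name whose embedding is nearest to the query,
    or None if nothing is within the (illustrative) distance threshold."""
    names = list(gallery)
    dists = [float(np.linalg.norm(embedding - gallery[n])) for n in names]
    best = int(np.argmin(dists))
    return names[best] if dists[best] < threshold else None

# Toy gallery of roughly unit-norm 512-d vectors standing in for resnet outputs
rng = np.random.default_rng(42)
gallery = {name: rng.normal(size=512) / np.sqrt(512) for name in ("alice", "bob")}

query = gallery["alice"] + rng.normal(scale=0.001, size=512)  # near-duplicate face
print(identify(query, gallery))  # alice
```

With real embeddings, the gallery would be built by running enrollment images through MTCNN and the Inception Resnet, and the threshold tuned on held-out data.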
### *Guide to MTCNN in facenet-pytorch*
This guide demonstrates the functionality of the MTCNN module. Topics covered are:
* Basic usage
* Image normalization
* Face margins
* Multiple faces in a single image
* Batched detection
* Bounding boxes and facial landmarks
* Saving face datasets
See the [notebook on kaggle](https://www.kaggle.com/timesler/guide-to-mtcnn-in-facenet-pytorch).
### *Performance comparison of face detection packages*
This notebook demonstrates the use of three face detection packages:
1. facenet-pytorch
1. mtcnn
1. dlib