• CVPR2019-ocr.zip

    Online handwritten Chinese text recognition (OHCTR) is a challenging problem as it involves a large-scale character set, ambiguous segmentation, and variable-length input sequences. In this paper, we exploit the outstanding capability of path signature to translate online pen-tip trajectories into informative signature feature maps using a sliding window-based method, successfully capturing the analytic and geometric properties of pen strokes with strong local invariance and robustness. A multi-spatial-context fully convolutional recurrent network (MC-FCRN) is proposed to exploit the multiple spatial contexts from the signature feature maps and generate a prediction sequence while completely avoiding the difficult segmentation problem. Furthermore, an implicit language model is developed to make predictions based on semantic context within a predicting feature sequence, providing a new perspective for incorporating lexicon constraints and prior knowledge about a certain language in the recognition procedure. Experiments on two standard benchmarks, Dataset-CASIA and Dataset-ICDAR, yielded outstanding results, with correct rates of 97.10% and 97.15%, respectively, which are significantly better than the best result reported thus far in the literature.
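
    As a rough illustration of the feature-extraction step only (the window size, stride, and helper names below are made up for the example, not taken from the paper), here is a minimal Python sketch that computes the truncated path signature of the pen-tip coordinates inside a sliding window, up to level 2 via Chen's identity:

        import numpy as np

        def sig_level2(points):
            """Truncated signature (levels 1 and 2) of a piecewise-linear 2-D path.

            points: (N, 2) array of pen-tip coordinates inside one window.
            Returns a 6-D feature: 2 level-1 terms + 4 level-2 terms.
            """
            lvl1 = np.zeros(2)
            lvl2 = np.zeros((2, 2))
            for delta in np.diff(points, axis=0):
                # Chen's identity: a single linear segment has signature
                # (delta, outer(delta, delta) / 2); concatenate it with the prefix path.
                lvl2 += np.outer(lvl1, delta) + np.outer(delta, delta) / 2.0
                lvl1 += delta
            return np.concatenate([lvl1, lvl2.ravel()])

        def sliding_window_signatures(trajectory, win=8, stride=2):
            """Slide a window along an online trajectory and stack the signature features."""
            starts = range(0, max(len(trajectory) - win + 1, 1), stride)
            return np.stack([sig_level2(trajectory[s:s + win]) for s in starts])

        # toy pen-tip trajectory: (x, y) samples of one stroke
        traj = np.cumsum(np.random.randn(64, 2), axis=0)
        print(sliding_window_signatures(traj).shape)   # (num_windows, 6)

    The paper goes further and renders such windowed signatures into 2-D signature feature maps that the MC-FCRN consumes; that rendering step is omitted here.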

    0
    195
    332.47MB
    2020-05-28
    10
  • 基于深度学习的文字识别技术现状及发展趋势.pdf

    Slides (PPT) from a talk by 金莲文, found online, covering the current state of deep-learning-based text recognition, its application scenarios, and development trends.

    0
    1240
    116.14MB
    2020-05-28
    37
  • Face recognition, pedestrian ReID, and image segmentation

    Human parsing has been extensively studied recently (Yamaguchi et al. 2012; Xia et al. 2017) due to its wide applications in many important scenarios. Mainstream fashion parsing models (i.e., parsers) focus on parsing high-resolution, clean images. However, directly applying parsers trained on benchmarks of high-quality samples to a particular application scenario in the wild, e.g., a canteen, airport, or workplace, often gives unsatisfactory performance due to domain shift. In this paper, we explore a new and challenging cross-domain human parsing problem: taking a benchmark dataset with extensive pixel-wise labeling as the source domain, how can we obtain a satisfactory parser on a new target domain without requiring any additional manual labeling? To this end, we propose a novel and efficient cross-domain human parsing model that bridges the cross-domain differences in visual appearance and environmental conditions and fully exploits commonalities across domains. The model explicitly learns a feature compensation network specialized for mitigating cross-domain differences, and a discriminative feature adversarial network supervises the feature compensation to effectively reduce the discrepancy between the feature distributions of the two domains. In addition, a structured label adversarial network guides the parsing results of the target domain to follow the high-order relationships of the structured labels shared across domains. The framework is end-to-end trainable, practical, and scalable in real applications. Extensive experiments are conducted with the LIP dataset as the source domain and four unannotated datasets, including surveillance videos, movies, and runway shows, as target domains. The results consistently confirm the data efficiency and performance advantages of the proposed method for the challenging cross-domain human parsing problem.

    A second paper presents a robust joint discriminative appearance-model-based tracking method using online random forests and a mid-level feature (superpixels). To achieve superpixel-wise discriminative ability, it proposes a joint appearance model consisting of two random-forest-based models: the Background-Target discriminative Model (BTM) and the Distractor-Target discriminative Model (DTM). More specifically, the BTM learns discriminative information between the target object and the background, while the DTM suppresses distracting superpixels, which significantly improves the tracker's robustness and alleviates the drifting problem. A novel online random forest regression algorithm is proposed to build the two models. The BTM and DTM are linearly combined into a joint model to compute a confidence map; tracking results are estimated from this confidence map, with the position and scale of the target estimated in turn. Furthermore, a model updating strategy adapts to appearance changes over time by discarding degraded trees of the BTM and DTM and initializing new trees as replacements. The method is tested on two large tracking benchmarks, the CVPR2013 tracking benchmark and the VOT2014 tracking challenge. Experimental results show that the tracker runs at real-time speed and achieves favorable tracking performance compared with state-of-the-art methods. The results also suggest that the DTM improves tracking performance significantly and plays an important role in robust tracking.
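
    As a hedged sketch of the joint appearance model described above, the snippet below trains two random-forest regressors on superpixel features, one separating target from background (BTM) and one separating target from distractors (DTM), and linearly combines their scores into a confidence map. It uses scikit-learn's offline RandomForestRegressor as a stand-in for the paper's online random forest; the feature extraction, superpixel segmentation, model update, and mixing weight are illustrative only:

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        def train_joint_model(sp_feats, target_mask, distractor_mask, n_trees=50):
            """Fit stand-ins for the BTM and DTM on per-superpixel feature vectors."""
            btm = RandomForestRegressor(n_estimators=n_trees)
            dtm = RandomForestRegressor(n_estimators=n_trees)
            btm.fit(sp_feats, target_mask.astype(float))              # 1 = target, 0 = background
            keep = target_mask | distractor_mask                      # DTM sees target + distractors only
            dtm.fit(sp_feats[keep], target_mask[keep].astype(float))  # 1 = target, 0 = distractor
            return btm, dtm

        def confidence_map(btm, dtm, sp_feats, alpha=0.5):
            """Linearly combine the two superpixel-wise scores into a joint confidence."""
            return alpha * btm.predict(sp_feats) + (1.0 - alpha) * dtm.predict(sp_feats)

        # toy superpixel features and labels
        feats = np.random.rand(200, 16)
        target = np.zeros(200, dtype=bool); target[:40] = True
        distract = np.zeros(200, dtype=bool); distract[40:80] = True
        btm, dtm = train_joint_model(feats, target, distract)
        print(confidence_map(btm, dtm, feats).shape)   # (200,)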

    0
    703
    26.39MB
    2020-05-27
    50
  • 对抗学习-图像生成Gan.zip

    Several GAN papers. We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.
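
    The key idea of the abstract, learning the loss together with the mapping, amounts to a conditional-GAN objective plus an L1 reconstruction term. The PyTorch sketch below shows one such training step with tiny stand-in networks; the real models are much larger (e.g. a U-Net generator and a patch-based discriminator), and the L1 weight of 100 is just a commonly used value, so treat this as an illustration rather than a reproduction of any paper's code:

        import torch
        import torch.nn as nn

        bce, l1, lam = nn.BCEWithLogitsLoss(), nn.L1Loss(), 100.0

        def train_step(G, D, opt_G, opt_D, x, y):
            """One conditional-GAN update: D scores (input, image) pairs, G gets cGAN + L1 loss."""
            fake = G(x)

            # discriminator: real pairs -> 1, generated pairs -> 0
            opt_D.zero_grad()
            d_real = D(torch.cat([x, y], dim=1))
            d_fake = D(torch.cat([x, fake.detach()], dim=1))
            loss_D = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
            loss_D.backward()
            opt_D.step()

            # generator: fool D and stay close to the ground truth in L1
            opt_G.zero_grad()
            d_fake = D(torch.cat([x, fake], dim=1))
            loss_G = bce(d_fake, torch.ones_like(d_fake)) + lam * l1(fake, y)
            loss_G.backward()
            opt_G.step()
            return loss_D.item(), loss_G.item()

        # tiny stand-ins so the sketch runs end to end
        G = nn.Conv2d(3, 3, 3, padding=1)          # "generator": 3-channel image in, 3 out
        D = nn.Conv2d(6, 1, 3, padding=1)          # "discriminator": concatenated pair in, logits out
        opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
        opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
        x, y = torch.rand(2, 3, 32, 32), torch.rand(2, 3, 32, 32)
        print(train_step(G, D, opt_G, opt_D, x, y))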

    0
    134
    22.55MB
    2020-05-25
    15
  • 人流统计及视频人流属性分析相关监控专利.zip

    Security-surveillance patents on crowd counting and face-attribute analysis: a method and apparatus for crowd tracking and pedestrian flow counting; a method and apparatus for pedestrian flow counting; and a crowd analysis method and apparatus based on face attributes.

    0
    113
    2.8MB
    2020-05-21
    15
  • 单向准连通的表格线检测算法_彭绍湖.pdf

    This paper studies the detection of table frame lines in the presence of skew, breaks, fractures, and characters adhering to the lines, and proposes an approach that combines table-line detection with post-processing to extract the lines. In the detection stage, a method based on unidirectional quasi-connectivity effectively copes with skewed, broken, and character-adhering frame lines; in the processing stage, connecting and filtering the detected lines effectively resolves broken table lines. Extensive experiments show that the method achieves good detection results.
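
    The paper's precise definition of unidirectional quasi-connectivity is not reproduced in this listing, so the sketch below only illustrates the general idea of tolerating small gaps while scanning in one direction; the gap and length thresholds are arbitrary:

        import numpy as np

        def detect_horizontal_lines(binary, max_gap=5, min_len=80):
            """Row-wise quasi-connected run detection (rough sketch of the idea).

            binary : 2-D 0/1 array, 1 = ink pixel. A run of ink pixels may contain
            background gaps of up to max_gap pixels (bridging small breaks); merged
            runs longer than min_len are kept as (row, col_start, col_end) segments.
            """
            lines = []
            for r, row in enumerate(binary):
                start, last_ink = None, None
                for c, v in enumerate(row):
                    if v:
                        if start is None:
                            start = c
                        elif c - last_ink - 1 > max_gap:        # gap too large: close the run
                            if last_ink - start + 1 >= min_len:
                                lines.append((r, start, last_ink))
                            start = c
                        last_ink = c
                if start is not None and last_ink - start + 1 >= min_len:
                    lines.append((r, start, last_ink))
            return lines

        # toy image: a horizontal rule broken by a 3-pixel gap
        img = np.zeros((5, 200), dtype=np.uint8)
        img[2, 10:90] = 1; img[2, 93:180] = 1
        print(detect_horizontal_lines(img))   # [(2, 10, 179)]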

    0
    149
    465KB
    2020-05-20
    6
  • handwriting.zip

    Some OCR-related papers. At present, text orientation is not diverse enough in existing scene text datasets. Specifically, curve-oriented text is largely outnumbered by horizontal and multi-oriented text; hence, it has received minimal attention from the community so far. Motivated by this, we collected a new scene text dataset, Total-Text, which emphasizes diversity of text orientations. It is the first relatively large-scale scene text dataset that features three different text orientations: horizontal, multi-oriented, and curve-oriented. In addition, we also study several other important elements, such as the practicality and quality of the ground truth, the evaluation protocol, and the annotation process. We believe these elements are as important as the images and ground truth in facilitating a new research direction. Finally, we propose a new scene text detection model as the baseline for Total-Text, namely Polygon-Faster-RCNN, and demonstrate its ability to detect text of all orientations.
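
    Curved text is the reason Total-Text annotates regions as polygons rather than axis-aligned or rotated boxes, and its evaluation is built on polygon overlap. The snippet below shows only the polygon-IoU primitive using shapely; the dataset's full protocol and the helper name polygon_iou are not taken from the paper:

        from shapely.geometry import Polygon

        def polygon_iou(pred_pts, gt_pts):
            """Intersection-over-union between two text-region polygons.

            pred_pts / gt_pts: lists of (x, y) vertices, e.g. traced along a curved
            word. Self-intersecting inputs are repaired with buffer(0).
            """
            p, g = Polygon(pred_pts).buffer(0), Polygon(gt_pts).buffer(0)
            union = p.union(g).area
            return p.intersection(g).area / union if union > 0 else 0.0

        # two overlapping boxes: intersection 4, union 12, IoU = 1/3
        gt   = [(0, 0), (4, 0), (4, 2), (0, 2)]
        pred = [(2, 0), (6, 0), (6, 2), (2, 2)]
        print(round(polygon_iou(pred, gt), 3))   # 0.333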

    0
    71
    44.52MB
    2020-05-20
    9
  • tiplog.odt

    Accurate crowd density estimation faces the following difficulties. 1. Low resolution: looking at the UCF Crowd Counting 50 dataset, in many dense scenes a single head may occupy only about 5*5 pixels or even fewer, which rules out many detection-based methods. 2. Severe occlusion: within a crowd, even a head-shoulder model is hard to apply, let alone a full-body model, and heads heavily occlude one another. 3. Perspective distortion: in short, objects appear larger when near and smaller when far, so heads at any scale may appear.
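
    The note lists the difficulties only; the usual workaround in the crowd-counting literature is to regress a density map instead of detecting people one by one. The sketch below builds such a map from head annotations, with a fixed sigma standing in for the geometry-adaptive kernels normally used against perspective distortion; none of this code comes from tiplog.odt itself:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def density_map(head_points, shape, sigma=4.0):
            """Turn head annotations (x, y) into a density map whose sum is the count.

            A small Gaussian per head sidesteps per-person detection (difficulty 1);
            tying sigma to local head spacing or image row is the usual way to
            handle perspective (difficulty 3).
            """
            dmap = np.zeros(shape, dtype=np.float32)
            for x, y in head_points:
                xi, yi = int(round(x)), int(round(y))
                if 0 <= yi < shape[0] and 0 <= xi < shape[1]:
                    dmap[yi, xi] += 1.0
            return gaussian_filter(dmap, sigma)

        heads = [(30, 40), (32, 44), (100, 20)]
        print(density_map(heads, (128, 160)).sum())   # ~3.0, i.e. the crowd count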

    0
    100
    1.78MB
    2020-05-19
    10
  • paper——crf,attention

    A collection of deep learning papers: SENet, EAST, Pixel-Anchor, face detection, and others.

    0
    72
    46.06MB
    2020-05-18
    9
  • Big-number arithmetic in C++

    Big-number arithmetic code: implements basic addition, subtraction, multiplication, and division for big numbers, as well as matrix inversion and matrix addition, subtraction, multiplication, and division. The numbers are stored as character arrays, and the required precision, including the number of digits after the decimal point, can be set via a macro.
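
    The resource itself is C++ (character arrays, precision fixed by a macro). Purely as a language-agnostic sketch of the same digit-array idea, the snippet below stores numbers as little-endian digit lists scaled by 10**PRECISION and implements schoolbook addition with carry; subtraction, multiplication, division, and the matrix routines of the actual code are omitted:

        PRECISION = 4   # digits kept after the decimal point (the macro in the C++ code)

        def to_digits(s):
            """'123.45' -> (sign, little-endian digits of round(value * 10**PRECISION))."""
            sign = -1 if s.startswith('-') else 1
            intpart, _, frac = s.lstrip('+-').partition('.')
            frac = (frac + '0' * PRECISION)[:PRECISION]
            return sign, [int(c) for c in reversed(intpart + frac)]

        def add_digits(a, b):
            """Schoolbook addition of two little-endian digit lists with carry."""
            out, carry = [], 0
            for i in range(max(len(a), len(b))):
                carry, d = divmod((a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry, 10)
                out.append(d)
            if carry:
                out.append(carry)
            return out

        def to_string(sign, digits):
            s = ''.join(map(str, reversed(digits))).lstrip('0').rjust(PRECISION + 1, '0')
            return ('-' if sign < 0 else '') + s[:-PRECISION] + '.' + s[-PRECISION:]

        _, da = to_digits('123456789.9876')
        _, db = to_digits('0.0124')
        print(to_string(1, add_digits(da, db)))   # 123456790.0000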

    0
    620
    5KB
    2016-07-31
    50