Online handwritten Chinese text recognition
(OHCTR) is a challenging problem as it involves a large-scale
character set, ambiguous segmentation, and variable-length input
sequences. In this paper, we exploit the outstanding capability
of path signature to translate online pen-tip trajectories into
informative signature feature maps using a sliding window-based
method, successfully capturing the analytic and geometric
properties of pen strokes with strong local invariance and
robustness. A multi-spatial-context fully convolutional recurrent
network (MC-FCRN) is proposed to exploit the multiple spatial
contexts from the signature feature maps and generate a
prediction sequence while completely avoiding the difficult
segmentation problem. Furthermore, an implicit language model
is developed to make predictions based on semantic context
within the predicted feature sequence, providing a new perspective
for incorporating lexicon constraints and prior knowledge about
a certain language in the recognition procedure. Experiments on
two standard benchmarks, Dataset-CASIA and Dataset-ICDAR,
yielded outstanding results, with correct rates of 97.10% and
97.15%, respectively, which are significantly better than the best
result reported thus far in the literature.
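As a rough illustration of the sliding-window signature extraction described above, the sketch below computes a level-2 truncated path signature over overlapping windows of a 2-D pen-tip trajectory. The window size, stride, and truncation level are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def path_signature_level2(window):
    """Truncated (level-2) path signature of a 2-D trajectory window.

    Returns [1, dx, dy, A11, A12, A21, A22], where (dx, dy) is the
    level-1 total increment and Aij are the level-2 iterated integrals,
    accumulated segment by segment via Chen's identity for a
    piecewise-linear path.
    """
    deltas = np.diff(window, axis=0)        # per-step increments
    sig1 = deltas.sum(axis=0)               # level-1: total displacement
    sig2 = np.zeros((2, 2))
    run = np.zeros(2)                       # increment accumulated so far
    for d in deltas:
        # Chen's identity: cross term with the path so far + segment term
        sig2 += np.outer(run, d) + 0.5 * np.outer(d, d)
        run += d
    return np.concatenate(([1.0], sig1, sig2.ravel()))

def sliding_window_signatures(traj, win=8, stride=4):
    """Stack signature features over overlapping windows of the trajectory."""
    feats = [path_signature_level2(traj[s:s + win])
             for s in range(0, len(traj) - win + 1, stride)]
    return np.array(feats)
```

For a straight-line window the level-2 block reduces to half the outer product of the total displacement with itself, which is a quick sanity check on the accumulation.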
Human parsing has been extensively studied recently (Yamaguchi et al. 2012; Xia et al. 2017) due to its wide applications in many important scenarios. Mainstream fashion parsing models (i.e., parsers) focus on parsing high-resolution, clean images. However, directly applying the parsers
trained on benchmarks of high-quality samples to a particular application scenario in the wild, e.g., a canteen, airport
or workplace, often gives non-satisfactory performance due
to domain shift. In this paper, we explore a new and challenging cross-domain human parsing problem: taking the benchmark dataset with extensive pixel-wise labeling as the source
domain, how can a satisfactory parser be obtained on a new target domain without requiring any additional manual labeling? To this end, we propose a novel and efficient cross-domain human parsing model that bridges the cross-domain differences in visual appearance and environmental conditions and fully exploits commonalities across domains. Our
proposed model explicitly learns a feature compensation network, which is specialized for mitigating the cross-domain
differences. A discriminative feature adversarial network is
introduced to supervise the feature compensation, effectively reducing the discrepancy between the feature distributions of the two domains. In addition, the proposed model introduces
a structured label adversarial network to guide the parsing
results of the target domain to follow the high-order relationships of the structured labels shared across domains. The
proposed framework is end-to-end trainable, practical and
scalable in real applications. Extensive experiments are conducted with the LIP dataset as the source domain and four different unannotated datasets, covering surveillance videos, movies, and runway shows, as target domains. The results consistently confirm the data efficiency and
performance advantages of the proposed method for the challenging cross-domain human parsing problem.
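The feature-adversarial component described above can be sketched with a minimal numpy example: a domain discriminator learns to separate source features from (compensated) target features, while the compensation network is updated to fool it. The linear, single-weight-vector discriminator here is a stand-in assumption, not the paper's architecture.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def domain_discriminator_loss(src_feats, tgt_feats, w):
    """Binary cross-entropy of a linear domain classifier.

    Label 1 = source domain, label 0 = compensated target domain.
    """
    p_src = sigmoid(src_feats @ w)
    p_tgt = sigmoid(tgt_feats @ w)
    return -(np.log(p_src + 1e-12).mean()
             + np.log(1.0 - p_tgt + 1e-12).mean())

def compensation_adversarial_loss(tgt_feats, w):
    """Adversarial objective for the compensation network: make target
    features indistinguishable from source features, i.e., drive the
    discriminator toward labeling them as source."""
    p_tgt = sigmoid(tgt_feats @ w)
    return -np.log(p_tgt + 1e-12).mean()
```

In an actual model both losses would be minimized alternately by gradient descent over network parameters; the sketch only makes the two opposing objectives concrete.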
This paper presents a robust Joint Discriminative appearance model based Tracking method using online random
forests and mid-level features (superpixels). To achieve superpixel-
wise discriminative ability, we propose a joint appearance model
that consists of two random forest based models, i.e., the
Background-Target discriminative Model (BTM) and Distractor-
Target discriminative Model (DTM). More specifically, the BTM
effectively learns discriminative information between the target
object and background. In contrast, the DTM is used to suppress
distracting superpixels which significantly improves the tracker’s
robustness and alleviates the drifting problem. A novel online
random forest regression algorithm is proposed to build the
two models. The BTM and DTM are linearly combined into
a joint model to compute a confidence map. Tracking results
are estimated using the confidence map, where the position
and scale of the target are estimated sequentially. Furthermore,
we design a model updating strategy to adapt to appearance
changes over time by discarding degraded trees of the BTM and
DTM and initializing new trees as replacements. We test the
proposed tracking method on two large tracking benchmarks,
the CVPR2013 tracking benchmark and VOT2014 tracking
challenge. Experimental results show that the tracker runs at
real-time speed and achieves favorable tracking performance
compared with the state-of-the-art methods. The results also sug-
gest that the DTM improves tracking performance significantly
and plays an important role in robust tracking.
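The linear combination of the two models into a joint confidence map can be sketched as follows; the weight alpha and the simple peak-picking rule are illustrative assumptions, not the paper's exact estimation procedure.

```python
import numpy as np

def joint_confidence(btm_conf, dtm_conf, alpha=0.5):
    """Linearly combine the BTM and DTM confidence maps into the joint
    map used to localize the target (alpha is an assumed weight)."""
    return alpha * btm_conf + (1.0 - alpha) * dtm_conf

def estimate_position(conf_map):
    """Take the peak of the joint confidence map as the target position."""
    return np.unravel_index(np.argmax(conf_map), conf_map.shape)
```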
A few GAN papers.
We investigate conditional adversarial networks as a
general-purpose solution to image-to-image translation
problems. These networks not only learn the mapping from
input image to output image, but also learn a loss func-
tion to train this mapping. This makes it possible to apply
the same generic approach to problems that traditionally
would require very different loss formulations. We demon-
strate that this approach is effective at synthesizing photos
from label maps, reconstructing objects from edge maps,
and colorizing images, among other tasks. As a commu-
nity, we no longer hand-engineer our mapping functions,
and this work suggests we can achieve reasonable results
without hand-engineering our loss functions either.
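In the pix2pix formulation of this idea, the learned objective pairs the conditional adversarial loss with an L1 reconstruction term (G the generator, D the discriminator, x the input image, y the target, z noise):

```latex
\mathcal{L}_{cGAN}(G, D) = \mathbb{E}_{x,y}\big[\log D(x, y)\big]
  + \mathbb{E}_{x,z}\big[\log\big(1 - D(x, G(x, z))\big)\big]

G^* = \arg\min_G \max_D \; \mathcal{L}_{cGAN}(G, D) + \lambda\,
  \mathbb{E}_{x,y,z}\big[\lVert y - G(x, z)\rVert_1\big]
```

The L1 term anchors the output to the ground truth at low frequencies, while the adversarial term pushes it toward realistic high-frequency detail.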
Some OCR-related papers
At present, text orientation is not diverse enough in existing scene text datasets. Specifically, curve-oriented text is largely outnumbered by horizontal and multi-oriented text; hence, it has received minimal attention from the community so far. Motivated by this phenomenon, we collected a new scene text dataset, Total-Text, which emphasizes text orientation diversity. It is the first relatively large-scale scene text dataset that features three different text orientations: horizontal, multi-
oriented, and curve-oriented. In addition, we also study several other important elements such as the practicality and quality
of ground truth, evaluation protocol, and the annotation process. We believe that these elements are as important as the
images and ground truth in facilitating a new research direction. Furthermore, we propose a new scene text detection model as the baseline for Total-Text, namely Polygon-Faster-RCNN, and demonstrate its ability to detect text of all orientations.
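Polygonal ground truth such as Total-Text's calls for polygon-aware evaluation. The sketch below shows two generic geometric building blocks for that (shoelace area and a ray-casting point-in-polygon test); it is not the dataset's actual evaluation protocol.

```python
import numpy as np

def polygon_area(pts):
    """Shoelace area of a simple polygon given as an (N, 2) vertex array."""
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def point_in_polygon(pt, pts):
    """Ray-casting test: does the point lie inside the polygon?"""
    x, y = pt
    inside = False
    n = len(pts)
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        if (y1 > y) != (y2 > y):          # edge crosses the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside       # toggle on each crossing
    return inside
```

Combined with a rasterization or clipping step, such primitives give polygon-level overlap scores in place of the axis-aligned box IoU used for horizontal text.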