Acquisition of Localization Confidence for Accurate Object Detection


Abstract. Modern CNN-based object detectors rely on bounding box regression and non-maximum suppression (NMS) to localize objects. While the probabilities for class labels naturally reflect classification confidence, localization confidence is absent. This causes properly localized bounding boxes to degenerate during iterative regression and even to be suppressed during NMS. In this paper we propose IoU-Net, which learns to predict the IoU between each detected bounding box and the matched ground-truth. The network thereby acquires localization confidence, which improves the NMS procedure by preserving accurately localized bounding boxes. Furthermore, an optimization-based bounding box refinement method is proposed, with the predicted IoU as the objective. Extensive experiments on the MS-COCO dataset show the effectiveness of IoU-Net, as well as its compatibility with and adaptivity to several state-of-the-art object detectors.
(2) Second, the absence of localization confidence makes the widely adopted bounding box regression less interpretable. As an example, previous works [3] report the non-monotonicity of iterative bounding box regression; that is, bounding box regression may degenerate the localization of input bounding boxes if applied multiple times (shown in Figure 1(b)).

In this paper we introduce IoU-Net, which predicts the IoU between detected bounding boxes and their corresponding ground-truth boxes, making the networks aware of the localization criterion, analogous to the classification module. This simple coefficient provides us with new solutions to the aforementioned problems:

1. IoU is a natural criterion for localization accuracy. We can replace classification confidence with the predicted IoU as the ranking keyword in NMS. This technique, namely IoU-guided NMS, helps to eliminate the suppression failures caused by misleading classification confidences.

2. We present an optimization-based bounding box refinement procedure on par with the traditional regression-based methods. During inference, the predicted IoU is used as the optimization objective, as well as an interpretable indicator of the localization confidence. The proposed Precise RoI Pooling layer enables us to solve the IoU optimization by gradient ascent. We show that, compared with the regression-based method, the optimization-based bounding box refinement empirically provides a monotonic improvement on the localization accuracy. The method is fully compatible with and can be integrated into various CNN-based detectors [16, 3, 10].

2 Delving into object localization

First of all, we explore two drawbacks in object localization: the misalignment between classification confidence and localization accuracy, and the non-monotonic bounding box regression.
A standard FPN [16] detector is trained on MS-COCO trainval35k as the baseline and tested on minival for the study.

2.1 Misaligned classification and localization accuracy

With the objective to remove duplicated bounding boxes, NMS has been an indispensable component in most object detectors since [4]. NMS works in an iterative manner. At each iteration, the bounding box with the maximum classification confidence is selected and its neighboring boxes are eliminated using a predefined overlapping threshold. In the Soft-NMS [2] algorithm, box elimination is replaced by the decrement of confidence, leading to a higher recall. Recently, a set of learning-based algorithms have been proposed as alternatives to the parameter-free NMS and Soft-NMS. [24] calculates an overlap matrix of all bounding boxes and performs affinity propagation clustering to select exemplars of clusters as the final detection results. [11] proposes the Gossip Net, a post-processing network trained for NMS based on bounding boxes and the classification confidence. [12] proposes an end-to-end network learning the relation between detected bounding boxes.

B. Jiang, R. Luo, J. Mao, T. Xiao, and Y. Jiang

Fig. 2: The correlation between the IoU of bounding boxes with the matched ground-truth and the classification/localization confidence. Considering detected bounding boxes having an IoU (> 0.5) with the corresponding ground-truth, the Pearson correlation coefficients are: (a) 0.217, and (b) 0.617. (a) The classification confidence indicates the category of a bounding box, but cannot be interpreted as the localization accuracy. (b) To resolve the issue, we propose IoU-Net to predict the localization confidence for each detected bounding box, i.e., its IoU with the corresponding ground-truth boxes.
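As a concrete reference, the IoU measure plotted on the x-axis of Figure 2 can be computed for two axis-aligned boxes as follows. This is a minimal sketch; the `(x0, y0, x1, y1)` box layout is an assumption of this example, chosen to match the convention used later in Algorithm 2.

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as (x0, y0, x1, y1)."""
    # Intersection rectangle: overlap of the two coordinate ranges.
    ix0, iy0 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix1, iy1 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

For example, two unit-offset 2x2 boxes, `iou((0, 0, 2, 2), (1, 1, 3, 3))`, overlap in a 1x1 square out of a union of 7, giving 1/7.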
However, these parameter-based methods require more computational resources, which limits their real-world application.

In the widely-adopted NMS approach, the classification confidence is used for ranking bounding boxes, which can be problematic. We visualize the distribution of classification confidences of all detected bounding boxes before NMS, as shown in Figure 2(a). The x-axis is the IoU between the detected box and its matched ground-truth, while the y-axis denotes its classification confidence. The Pearson correlation coefficient indicates that the localization accuracy is not well correlated with the classification confidence.

We attribute this to the objective used by most CNN-based object detectors in distinguishing foreground (positive) samples from background (negative) samples. A detected bounding box box_det is considered positive during training if its IoU with one of the ground-truth bounding boxes is greater than a threshold Ω_train. This objective can be misaligned with the localization accuracy. Figure 1(a) shows cases where bounding boxes having higher classification confidence nevertheless have poorer localization.

Recall that in traditional NMS, when there exist duplicated detections for a single object, the bounding box with the maximum classification confidence will be preserved. However, due to the misalignment, the bounding box with better localization will probably get suppressed during the NMS, leading to poor localization of objects. Figure 3 quantitatively shows the number of positive bounding boxes after NMS. The bounding boxes are grouped by their IoU with the matched ground-truth. For multiple detections matched with the same ground-truth, only the one with the highest IoU is considered positive. Therefore, No-NMS could be considered as the upper bound for the number of positive bounding boxes. We can see that the absence of localization confidence makes more than half of the detected bounding boxes with IoU > 0.9 get suppressed in the traditional NMS procedure, which degrades the localization quality of the detection results.

Fig. 3: The number of positive bounding boxes after the NMS, grouped by their IoU with the matched ground-truth. In traditional NMS (blue bar), a significant portion of accurately localized bounding boxes get mistakenly suppressed due to the misalignment of classification confidence and localization accuracy, while IoU-guided NMS (yellow bar) preserves more accurately localized bounding boxes.

2.2 Non-monotonic bounding box regression

In general, single object localization can be classified into two categories: bounding box-based methods and segment-based methods. The segment-based methods [9, 20, 13, 10] aim to generate a pixel-level segment for each instance but inevitably require additional segmentation annotation. This work focuses on the bounding box-based methods.

Single object localization is usually formulated as a bounding box regression task. The core idea is that a network directly learns to transform (i.e., scale or shift) a bounding box to its designated target. In [9, 8], linear regression or a fully-connected layer is applied to refine the localization of object proposals generated by external pre-processing modules (e.g., Selective Search [28] or EdgeBoxes [33]). Faster R-CNN [23] proposes the region proposal network (RPN), in which only predefined anchors are used to train an end-to-end object detector. [14, 32] utilize anchor-free, fully-convolutional networks to handle object scale variation. Meanwhile, Repulsion Loss is proposed in [29] to robustly detect objects with crowd occlusion.
Due to its effectiveness and simplicity, bounding box regression has become an essential component in most CNN-based detectors. A broad set of downstream applications such as tracking and recognition will benefit from accurately localized bounding boxes. This raises the demand for improving localization accuracy. In a series of object detectors [3, 17, 6, 21], refined boxes are fed to the bounding box regressor again and go through the refinement another time. This procedure is performed several times, namely iterative bounding box regression. Faster R-CNN [23] first performs the bounding box regression twice to transform predefined anchors into the final detected bounding boxes. [15] proposes a group recursive learning approach to iteratively refine detection results and minimize the offsets between object proposals and the ground-truth, considering the global dependency among multiple proposals. G-CNN is proposed in [18], which starts with a multi-scale regular grid over the image and iteratively pushes the boxes in the grid towards the ground-truth. However, as reported in [3], applying bounding box regression more than twice brings no further improvement.

Fig. 4: Optimization-based vs. regression-based BBox refinement. (a) Comparison in FPN. When applying the regression iteratively, the AP of detection results first gets improved but drops quickly in later iterations. (b) Comparison in Cascade R-CNN. Iterations 0, 1 and 2 represent the 1st, 2nd and 3rd regression stages in Cascade R-CNN. For iteration i >= 3, we refine the bounding boxes with the regressor of the third stage. After multiple iterations, AP slightly drops, while the optimization-based method further improves the AP by 0.8%.
[3] attributes this to the distribution mismatch in multi-step bounding box regression and addresses it by a resampling strategy in multi-stage bounding box regression.

We experimentally show the performance of iterative bounding box regression based on the FPN and Cascade R-CNN frameworks. The Average Precision (AP) of the results after each iteration is shown as the blue curves in Figure 4(a) and Figure 4(b), respectively. The AP curves in Figure 4 show that the improvement of localization accuracy, as the number of iterations increases, is non-monotonic for iterative bounding box regression. The non-monotonicity, together with the non-interpretability, brings difficulties in applications. Besides, without localization confidence for detected bounding boxes, we cannot have fine-grained control over the refinement, such as using an adaptive number of iterations for different bounding boxes.

3 IoU-Net

To quantitatively analyze the effectiveness of IoU prediction, we first present the methodology adopted for training an IoU predictor in Section 3.1. In Section 3.2 and Section 3.3 we show how to use the IoU predictor for NMS and bounding box refinement, respectively. Finally, in Section 3.4 we integrate the IoU predictor into existing object detectors such as FPN [16].

Fig. 5: Full architecture of the proposed IoU-Net described in Section 3.4. Input images are first fed into an FPN backbone. The IoU predictor takes the output features from the FPN backbone. We replace the RoI Pooling layer with a PrRoI Pooling layer described in Section 3.3. The IoU predictor shares a similar structure with the R-CNN branch. The modules marked within the dashed box form a standalone IoU-Net.

3.1 Learning to predict IoU

Shown in Figure 5, the IoU predictor takes visual features from the FPN and estimates the localization accuracy (IoU) for each bounding box. We generate bounding boxes and labels for training the IoU-Net by augmenting the ground-truth, instead of taking proposals from RPNs. Specifically, for all ground-truth bounding boxes in the training set, we manually transform them with a set of randomized parameters, resulting in a candidate bounding box set. We then remove from this candidate set the bounding boxes having an IoU less than Ω_train = 0.5 with the matched ground-truth. We uniformly sample training data from this candidate set w.r.t. the IoU. This data generation process empirically brings better performance and robustness to the IoU-Net. For each bounding box, the features are extracted from the output of FPN with the proposed Precise RoI Pooling layer (see Section 3.3). The features are then fed into a two-layer feedforward network for the IoU prediction. For a better performance, we use class-aware IoU predictors.

The IoU predictor is compatible with most existing RoI-based detectors. The accuracy of a standalone IoU predictor can be found in Figure 2(b). As the training procedure is independent of specific detectors, it is robust to the change of the input distributions (e.g., when cooperating with different detectors). In later sections, we will further demonstrate how this module can be jointly optimized in a full detection pipeline (i.e., jointly with RPNs and R-CNN).
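The data-generation step described above can be sketched as follows. The jitter parameterization (`max_shift`, `max_scale`, `n_per_box`) is an assumption of this sketch; the paper only specifies randomized transforms of ground-truth boxes followed by the Ω_train = 0.5 IoU filter, and the subsequent uniform resampling w.r.t. IoU is omitted here.

```python
import random


def iou(a, b):
    """IoU of two (x0, y0, x1, y1) boxes."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0


def jitter_gt_boxes(gt_boxes, n_per_box=64, max_shift=0.3, max_scale=0.3,
                    iou_threshold=0.5, rng=None):
    """Augment ground-truth boxes with random shift/scale; keep those whose
    IoU with the source box is at least iou_threshold (Omega_train)."""
    rng = rng or random.Random(0)
    samples = []
    for gt in gt_boxes:
        w, h = gt[2] - gt[0], gt[3] - gt[1]
        for _ in range(n_per_box):
            dx = rng.uniform(-max_shift, max_shift) * w
            dy = rng.uniform(-max_shift, max_shift) * h
            sw = 1.0 + rng.uniform(-max_scale, max_scale)
            sh = 1.0 + rng.uniform(-max_scale, max_scale)
            cx, cy = (gt[0] + gt[2]) / 2 + dx, (gt[1] + gt[3]) / 2 + dy
            box = (cx - w * sw / 2, cy - h * sh / 2,
                   cx + w * sw / 2, cy + h * sh / 2)
            overlap = iou(box, gt)
            if overlap >= iou_threshold:
                # (input box, IoU regression target) training pair
                samples.append((box, overlap))
    return samples
```

Each surviving pair supplies a box for feature extraction and its true IoU as the regression target for the two-layer IoU predictor.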
Algorithm 1 IoU-guided NMS. Classification confidence and localization confidence are disentangled in the algorithm. We use the localization confidence (the predicted IoU) to rank all detected bounding boxes, and update the classification confidence based on a clustering-like rule.

Input: B = {b1, ..., bn}, S, I, Ω_nms. B is a set of detected bounding boxes. S and I are functions (neural networks) mapping bounding boxes to their classification confidence and IoU estimation (localization confidence), respectively. Ω_nms is the NMS threshold.
Output: D, the set of detected bounding boxes with classification scores.
1: D ← ∅
2: while B ≠ ∅ do
3:   b_m ← argmax_{b_i ∈ B} I(b_i)
4:   B ← B \ {b_m}
5:   s ← S(b_m)
6:   for b_j ∈ B do
7:     if IoU(b_m, b_j) > Ω_nms then
8:       s ← max(s, S(b_j))
9:       B ← B \ {b_j}
10:    end if
11:  end for
12:  D ← D ∪ {⟨b_m, s⟩}
13: end while
14: return D

3.2 IoU-guided NMS

We resolve the misalignment between classification confidence and localization accuracy with a novel IoU-guided NMS procedure, where the classification confidence and localization confidence (an estimation of the IoU) are disentangled. In short, we use the predicted IoU instead of the classification confidence as the ranking keyword for bounding boxes. Analogous to the traditional NMS, the box having the highest IoU with a ground-truth will be selected to eliminate all other boxes having an overlap greater than a given threshold Ω_nms. To determine the classification scores, when a box i eliminates a box j, we update the classification confidence s_i of box i by s_i = max(s_i, s_j). This procedure can also be interpreted as a confidence clustering: for a group of bounding boxes matching the same ground-truth, we take the most confident prediction for the class label. Pseudo-code for this algorithm can be found in Algorithm 1.

IoU-guided NMS resolves the misalignment between classification confidence and localization accuracy. Quantitative results show that our method outperforms traditional NMS and other variants such as Soft-NMS [2]. Using IoU-guided NMS as the post-processor further pushes forward the performance of several state-of-the-art object detectors.
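Algorithm 1 can be sketched in plain Python as follows. This is a simplified sketch: `loc_scores` stands in for the learned IoU predictor I and `cls_scores` for the classifier S, both given here as precomputed lists rather than networks.

```python
def iou(a, b):
    """IoU of two (x0, y0, x1, y1) boxes."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0


def iou_guided_nms(boxes, cls_scores, loc_scores, nms_thresh=0.5):
    """Rank boxes by localization confidence (predicted IoU); when box m
    suppresses box j, propagate the classification confidence s = max(s, s_j)."""
    remaining = list(range(len(boxes)))
    detections = []
    while remaining:
        m = max(remaining, key=lambda i: loc_scores[i])  # line 3: argmax I(b_i)
        remaining.remove(m)
        s = cls_scores[m]
        survivors = []
        for j in remaining:
            if iou(boxes[m], boxes[j]) > nms_thresh:
                s = max(s, cls_scores[j])  # lines 7-9: confidence clustering
            else:
                survivors.append(j)
        remaining = survivors
        detections.append((boxes[m], s))
    return detections
```

Note how a well-localized box with a modest classification score survives, yet inherits the highest classification confidence among the duplicates it suppresses.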
Algorithm 2 Optimization-based bounding box refinement.

Input: B = {b1, ..., bn}, F, T, λ, Ω1, Ω2. B is a set of detected bounding boxes, in the form of (x0, y0, x1, y1). F is the feature map of the input image. T is the number of steps. λ is the step size, Ω1 is the early-stop threshold and Ω2 < 0 is a localization-degeneration tolerance. Function PrPool extracts the feature representation for a given bounding box and function IoU denotes the estimation of IoU by the IoU-Net.
Output: the set of final detection bounding boxes.
1: A ← ∅
2: for i = 1 to T do
3:   for b_j ∈ B and b_j ∉ A do
4:     grad ← ∇_{b_j} IoU(PrPool(F, b_j))
5:     PrevScore ← IoU(PrPool(F, b_j))
6:     b_j ← b_j + λ · scale(grad, b_j)
7:     NewScore ← IoU(PrPool(F, b_j))
8:     if |PrevScore − NewScore| < Ω1 or NewScore − PrevScore < Ω2 then
9:       A ← A ∪ {b_j}
10:    end if
11:  end for
12: end for
13: return B

3.3 Bounding box refinement as an optimization procedure

The problem of bounding box refinement can be formulated mathematically as finding the optimal

  c* = argmin_c crit(transform(box_det, c), box_gt),   (1)

where box_det is the detected bounding box, box_gt is a (targeted) ground-truth bounding box, and transform is a bounding box transformation function taking c as parameter and transforming the given bounding box. crit is a criterion measuring the distance between two bounding boxes. In the original Fast R-CNN [5] framework, crit is chosen as a smooth-L1 distance of coordinates in log-scale, while in [32], crit is chosen as the −ln(IoU) between two bounding boxes.

Regression-based algorithms directly estimate the optimal solution c* with a feed-forward neural network. However, iterative bounding box regression methods are vulnerable to the change in the input distribution [3] and may result in non-monotonic localization improvement, as shown in Figure 4. To tackle these issues, we propose an optimization-based bounding box refinement method utilizing IoU-Net as a robust localization accuracy (IoU) estimator. Furthermore, the IoU estimator can be used as an early-stop condition to implement iterative refinement with adaptive steps.
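The loop of Algorithm 2 can be illustrated with a toy stand-in for the learned IoU estimator: below, `score_fn` is the true IoU against a known ground-truth box and gradients are taken by finite differences, whereas the paper backpropagates through IoU-Net and PrRoI Pooling. The step size, step count and the thresholds Ω1, Ω2 are illustrative choices, not the paper's settings.

```python
def iou(a, b):
    """IoU of two (x0, y0, x1, y1) boxes."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0


def refine_box(box, score_fn, steps=20, lr=0.5, eps=1e-3,
               omega1=1e-6, omega2=-0.01):
    """Optimization-based refinement (Algorithm 2 sketch): gradient ascent on a
    localization-confidence score w.r.t. the box coordinates (x0, y0, x1, y1)."""
    box = list(box)
    for _ in range(steps):
        prev = score_fn(box)
        # Numerical gradient of the score w.r.t. each coordinate.
        grad = []
        for k in range(4):
            bumped = box[:]
            bumped[k] += eps
            grad.append((score_fn(bumped) - prev) / eps)
        # Scale the gradient by the box size on each axis (Algorithm 2, line 6).
        w, h = box[2] - box[0], box[3] - box[1]
        scale = (w, h, w, h)
        new_box = [box[k] + lr * grad[k] * scale[k] for k in range(4)]
        new = score_fn(new_box)
        if new - prev < omega2:       # localization degenerated: keep the old box
            break
        box = new_box
        if abs(new - prev) < omega1:  # early stop: converged
            break
    return box
```

Starting from a box offset from the ground-truth, each ascent step increases the score, and the two thresholds give the adaptive stopping behavior described above.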
IoU-Net directly estimates IoU(box_det, box_gt), while the proposed Precise RoI Pooling layer enables the computation of the gradient of IoU w.r.t. the bounding box coordinates, so we can directly use the gradient ascent method to find the optimal solution to Equation 1. Shown in Algorithm 2, viewing the estimation of the IoU as an optimization objective, we iteratively refine the bounding box coordinates with the computed gradient and maximize the IoU between the detected bounding box and its matched ground-truth. Besides, the predicted IoU is an interpretable indicator of the localization confidence on each bounding box and helps explain the performed transformation. In the implementation, shown in Algorithm 2 Line 6, we manually scale up the gradient w.r.t. the coordinates with the size of the bounding box on that axis (e.g., we scale up ∇x1 with width(b_j)). This is equivalent to performing the optimization in log-scaled coordinates (x/w, y/h, log w, log h) as in [5]. We also employ a one-step bounding box regression for an initialization of the coordinates.

Fig. 6: Illustration of RoI Pooling, RoI Align and PrRoI Pooling.

Precise RoI Pooling. We introduce Precise RoI Pooling (PrRoI Pooling, for short) powering our bounding box refinement. It avoids any quantization of coordinates and has a continuous gradient on bounding box coordinates. Given the feature map F before RoI/PrRoI Pooling (e.g., from Conv4 in ResNet-50), let w_{i,j} be the feature at one discrete location (i, j) on the feature map. Using bilinear interpolation, the discrete feature map can be considered continuous at any continuous coordinates (x, y):

  f(x, y) = Σ_{i,j} IC(x, y, i, j) × w_{i,j},

where IC(x, y, i, j) = max(0, 1 − |x − i|) × max(0, 1 − |y − j|) is the interpolation coefficient. Then denote a bin of a RoI as bin = {(x1, y1), (x2, y2)}, where (x1, y1) and (x2, y2) are the continuous coordinates of the top-left and bottom-right corners, respectively.

We prefer the Precise RoI Pooling layer to the RoI Align layer [10], as the Precise RoI Pooling layer is continuously differentiable w.r.t. the coordinates while RoI Align is not. The code is released at:
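The bilinear interpolation above, and the averaging of f over a bin, can be sketched as follows. One assumption of this sketch: PrRoI Pooling evaluates the bin integral of f in closed form, whereas here it is approximated by midpoint sampling on a dense grid, which matches the integral in the limit.

```python
def interp_coef(x, y, i, j):
    """IC(x, y, i, j) = max(0, 1 - |x - i|) * max(0, 1 - |y - j|)."""
    return max(0.0, 1.0 - abs(x - i)) * max(0.0, 1.0 - abs(y - j))


def f(feat, x, y):
    """Continuous feature value at (x, y): sum over IC(x, y, i, j) * w_{i,j}.
    feat is indexed as feat[j][i], with i the horizontal coordinate."""
    return sum(interp_coef(x, y, i, j) * feat[j][i]
               for j in range(len(feat)) for i in range(len(feat[0])))


def prroi_pool_bin(feat, x1, y1, x2, y2, n=64):
    """Average of f over the bin {(x1, y1), (x2, y2)}, i.e. the integral of f
    divided by the bin area, approximated by n x n midpoint sampling."""
    total = 0.0
    for a in range(n):
        for b in range(n):
            x = x1 + (a + 0.5) * (x2 - x1) / n
            y = y1 + (b + 0.5) * (y2 - y1) / n
            total += f(feat, x, y)
    return total / (n * n)
```

Because f is a continuous, piecewise-bilinear function of (x, y), the pooled value varies smoothly with the bin corners, which is exactly what makes the IoU gradient w.r.t. box coordinates well defined.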
