All in One Bad Weather Removal using Architectural Search
Ruoteng Li¹, Robby T. Tan¹,², and Loong-Fah Cheong¹
¹National University of Singapore   ²Yale-NUS College
Abstract
Many methods have achieved state-of-the-art performance in restoring images degraded by bad weather such as rain, haze, fog, and snow; however, they are designed specifically to handle one type of degradation. In this paper, we propose a method that can handle multiple bad weather degradations, rain, fog, snow, and adherent raindrops, using a single network. To achieve this, we first design a generator
with multiple task-specific encoders, each of which is asso-
ciated with a particular bad weather degradation type. We
utilize a neural architecture search to optimally process the
image features extracted from all encoders. Subsequently,
to convert degraded image features to clean background
features, we introduce a series of tensor-based operations
encapsulating the underlying physics principles behind the
formation of rain, fog, snow and adherent raindrops. These
operations serve as the basic building blocks for our archi-
tectural search. Finally, our discriminator simultaneously
assesses the correctness and classifies the degradation type
of the restored image. We design a novel adversarial learning scheme that backpropagates the loss of a degradation type only to the respective task-specific encoder. Despite being designed to handle different types of bad weather, our method, as extensive experiments demonstrate, performs competitively with the individual, dedicated state-of-the-art image restoration methods.
1. Introduction
The bad weather image restoration problem has been studied intensively in the fields of image processing and computer vision; examples include deraining [20, 18, 59, 8, 53, 30, 50, 62, 36, 4, 44, 29], dehazing/defogging [47, 3, 1, 57, 7, 13, 26, 43], desnowing [44, 37], and adherent raindrop removal [41, 42]. Most of these works focus on a single weather type and propose dedicated
†This work is supported by MOE2019-T2-1-130.
[Figure 1 image: RainFog, Raindrop, and Snow encoders feeding a Generic Decoder inside the All-in-One Network]
Figure 1: High-level view of our network, with different
types of bad weather images as input and the respective
clean images as output. The proposed method is able to pro-
cess multiple types of bad weather images using the same
set of weights/parameters.
solutions [29, 43, 41]. While they can attain excellent per-
formance, they may not yield optimal results on other types of bad weather degradation, since the factors causing those other degradations are not carefully considered.
As a result, real outdoor systems would have to decide and
switch between a series of bad weather image restoration
algorithms.
In this paper, we develop a single network-based method
to deal with many types of bad weather phenomena includ-
ing rain, fog, snow and adherent raindrop. It is worth noting
that a few recent studies attempt to address multiple degradation problems [39, 6]. However, none of them can deal
with multiple degradations with solely one set of pretrained
weights. To achieve our goal, we need to consider a few
factors related to our problem.
First, different bad weather phenomena arise from different physical principles, which means the degraded images do not share the same characteristics. In order to yield the
optimal performance, we need to design the network ac-
cording to the underlying physics principles.
Second, bad weather image restoration can be consid-
ered as a many-to-one feature mapping problem, i.e., image
features from different bad weather domains (rain, fog, rain-
drop, snow) are transformed to clean image features by a set
of network parameters (multiple encoders), after which the
clean features are transformed to the clean natural images
(one decoder). Hence, it is critical to find a proper way to
process features from multiple domains and subject them to
further appropriate operations. This motivates us to design
an architectural-search approach that automatically finds an
optimal network architecture for the aforementioned task.
The basic building blocks for our network-search module
are made up of a series of fundamental operations that can
convert degraded image features to clean features based on
the physics characteristics of bad weather image degrada-
tion.
Third, most existing discriminators in GAN-based approaches are trained only to judge whether restored images are real. Such a discriminator provides no error signal that lets the generative network differentiate images by degradation type, so the encoders cannot update their learnable parameters based on an independent assessment of the degradation type. To solve this problem, we propose a multi-class auxiliary discriminator that simultaneously classifies the image degradation type and judges the correctness of the restored image. In addition,
unlike other existing GAN-based methods, our network has
multiple feature encoders, each of which corresponds to a
particular degradation type. When we backpropagate the discriminative loss, the network propagates the loss only to the respective encoder, based on the classification result. Thus, only the corresponding encoder updates its parameters from the adversarial loss, and all the other encoders are unaffected.
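This selective update can be sketched as a toy NumPy example; the `encoders` dictionary, matrix shapes, and `selective_update` helper are illustrative assumptions, not the paper's actual code:

```python
import numpy as np

# Toy sketch of selective backpropagation: the adversarial loss
# classified as one degradation type updates only that type's encoder
# parameters, leaving the other task-specific encoders untouched.
rng = np.random.default_rng(0)
encoders = {t: rng.standard_normal((4, 4)) for t in ("rainfog", "snow", "raindrop")}

def selective_update(grad, degradation_type, lr=0.1):
    """Apply the adversarial-loss gradient only to the matching encoder."""
    before = {t: w.copy() for t, w in encoders.items()}
    encoders[degradation_type] -= lr * grad  # only this encoder moves
    # Report which encoders actually changed.
    return {t: not np.allclose(before[t], encoders[t]) for t in encoders}

changed = selective_update(np.ones((4, 4)), "snow")
```

Here only the `snow` encoder's parameters change; a framework implementation would achieve the same effect by masking gradients per encoder.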
We summarise the contributions of our method as follows:
1. We propose an all-in-one bad weather removal method
that can deal with multiple bad weather conditions
(rain streaks, rain veiling effect, snow, and adherent
raindrop) in one network.
2. We propose a neural architecture search technique to
find the best architecture for processing the features us-
ing different weather encoders. A series of fundamen-
tal operations that result in features invariant to bad
weather are introduced. These fundamental operations
form the basic building blocks for the search.
3. We propose a novel end-to-end learning scheme that can handle multiple bad weather image restoration tasks. The key idea is to backpropagate the errors of the discriminative loss into a specific encoder, in accordance with the type of the bad weather input.
2. Related Works
Deep learning based solutions have achieved promising
performance in various image processing problems such
as denoising [58], image completion [16], super-resolution
[21], deblurring [45], style transfer [9], etc. This is also true
for bad weather restoration or image enhancement, such as
dehazing [47, 3, 1, 57, 7, 13, 26, 43], removal of raindrop
and dirt [20, 18, 59, 8, 53, 30, 50, 62, 36, 4], of moderate
rain [44, 29], and of heavy rain [25, 54]. These recent works
have all shown the superiority of deep neural network mod-
els to conventional methods.
Rain Removal Kang et al. [20] introduce the first single-image deraining method, which decomposes an input image into low-frequency and high-frequency components using a bilateral filter. Recent state-of-the-art rain removal strategies are dominated by deep neural networks.
Fu et al. [8] develop a deep CNN to extract discriminative features from the high-frequency component of the rain image. Yang et al. [54] design a multi-task deep learning architecture that learns the location and intensity of rain streaks simultaneously. Li et al. [25] propose a network
that addresses the rain streaks and rain veiling effects preva-
lent in heavy rain scenes. This method not only proposes
a residue decomposition step, but also elegantly integrates
the physics-based rain model and adversarial learning to
achieve state-of-the-art performance. It jointly learns the
physics parameters of heavy rain, including streak intensity, transmission, and atmospheric light, and utilizes a generative adversarial network to bridge the domain gap between the proposed rain model and real rain.
Raindrop Removal A number of methods detect and remove raindrops from a single image based on traditional hand-crafted features [52, 55]. Eigen et al. [5] train a CNN with pairs of raindrop-degraded images and the corresponding raindrop-free images. Their network is a fairly shallow model with only 3 convolutional layers. While the method works, particularly for relatively sparse and small droplets as well as dirt, the results tend to be blurry. Qian et al. [41] use attention maps in a GAN network that successfully removes raindrops from a single image. However, the main drawback of this approach is that the attention maps are inherently difficult to obtain; the automatically computed attention-map ground truth is often of poor quality. Quan et al. [42] further explore the generation of attention maps based on a mathematical description of the shape of raindrops. Their method combines the raindrop attention maps and detected raindrop edges to obtain state-of-the-art performance in single-image raindrop removal.
Snow Removal The methods of [2, 46] use HOG techniques to capture characteristics of snow flakes for snow removal from single images. Xu et al. [51] utilize color assumptions to
model the falling snow particles. In contrast to these hand-crafted features that capture only partial characteristics of snow
[Figure 2 image. Left: generator with RainFog, Snow, and Raindrop feature extractors (FE), a Feature Search module, and a Generic Decoder; the discriminator outputs Rainfog/Snow/Raindrop/Clean true-or-false decisions. Right: Feature Search detail with cells -2, -1, and 0.]
Figure 2: Left: Full architecture of the proposed network. The dotted lines indicate the back-propagation paths of the adversarial loss from the discriminator. The discriminator classifies the degradation type and also determines whether the input image is real or fake; the classification error of a particular degradation type is used to update only the encoder assigned to that type in the generative network. Right: The detailed structure of the generator of the proposed network. In our experiment, we set the number of cells to 3 (cell -2, cell -1, and cell 0). FE stands for Feature Extractor.
flakes and streaks, Li et al. [24] encode snow flakes or
rain streaks using an online multi-scale convolutional sparse
coding model.
Neural Architecture Search (NAS) Neural Architecture
Search aims at automatically designing neural network ar-
chitectures to achieve optimal performance, while minimiz-
ing human hours and efforts. Early works like [63, 19, 11]
directly construct the entire network and train it automatically under the supervision of a reinforcement-learning controller RNN. Many recent papers [32, 40] point out that searching for a repeatable cell structure while fixing the network-level structure is more effective and efficient. The
PNAS method [32] proposes a progressive search that significantly reduces the computation cost. Our work is closely related to [34, 31], which further relax the network search task into an end-to-end optimization problem.
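The continuous relaxation used by such end-to-end search methods can be illustrated with a minimal sketch: each edge computes a softmax-weighted mixture of candidate operations, so the architecture choice becomes differentiable. The `candidate_ops` list and `mixed_op` helper below are simplified placeholders, not the operations the paper actually searches over:

```python
import numpy as np

def softmax(a):
    """Numerically stable softmax over a 1-D array of logits."""
    e = np.exp(a - a.max())
    return e / e.sum()

# Three toy candidate operations on an edge of the search cell.
candidate_ops = [
    lambda x: x,                 # identity
    lambda x: np.maximum(x, 0),  # ReLU
    lambda x: x * 0.0,           # zero (prunes the edge)
]

def mixed_op(x, alpha):
    """Softmax-weighted sum of all candidate ops; alpha are learnable logits."""
    w = softmax(alpha)
    return sum(wi * op(x) for wi, op in zip(w, candidate_ops))

x = np.array([-1.0, 2.0])
y = mixed_op(x, np.zeros(3))  # uniform weights: average of the three ops
```

After optimizing the logits jointly with the network weights, the discrete architecture is recovered by keeping the highest-weighted operation on each edge.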
3. Proposed Method
3.1. Problem Formulation
Different weather phenomena degrade images accord-
ing to different physics principles. For example, a heavy
rain image (where rain veiling effect, visually similar to
fog/mist, is prevalent) is modelled as [25]:
I(x) = t(x)(J(x) + Σ_i R_i(x)) + (1 − t(x))A,   (1)
where I(x) is the rain image at location x, t is the transmission map, and A is the global atmospheric light of the scene. R_i represents the rain streaks at the i-th layer along the line of sight. An adherent raindrop image is modelled as [41]:
I(x) = (1 − M(x))J(x) + K(x), (2)
where I is the colored raindrop image and M is the binary mask. J is the background image and K is the imagery brought about by the adherent raindrops, representing the blurred imagery formed by the light reflected from the environment. Lastly, a snow image can be modelled as [37]:
I(x) = zS(x) + J(x)(1 − z), (3)
where S represents the snow flakes and z is a binary mask
indicating the location of snow.
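As an illustration, the three formation models can be evaluated directly on a toy one-pixel "image"; the variable names follow the paper's notation, but the numeric values are made up:

```python
import numpy as np

# Toy inputs (all values are illustrative, not from the paper).
J = np.array([0.6])          # clean background radiance
A, t = 1.0, np.array([0.7])  # atmospheric light, transmission
R = [np.array([0.1]), np.array([0.05])]  # rain-streak layers R_i

I_rain = t * (J + sum(R)) + (1 - t) * A  # Eq. (1): heavy rain with veiling

M, K = np.array([1.0]), np.array([0.3])  # raindrop mask, raindrop imagery
I_drop = (1 - M) * J + K                 # Eq. (2): adherent raindrop

z, S = np.array([1.0]), np.array([0.9])  # snow mask, snow-flake intensity
I_snow = z * S + J * (1 - z)             # Eq. (3): snow
```

Note how each model mixes the background J with a different degradation term, which is why a restoration network tuned to one model transfers poorly to the others.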
From the formulations of these different bad weather im-
ages, it is evident that these problems do not share the same
intrinsic characteristics, which explains why a dedicated al-
gorithm designed for one task does not work on the other
tasks. To address this problem, we model the bad weather
tasks with the following generic function:
J(x) = F(I(x)), (4)
where F represents an auto-encoder that maps degraded images to clean background images and should embody the aforementioned formulations, Eqs. (1)-(3). To realize this, we consider a network with multiple encoders:
J(x) = G ⊙ E_ρ(I_ρ(x)),   (5)
where E_ρ represents the encoder that takes in a degraded image I_ρ of degradation type ρ, and G is the generic decoder that restores the input to a clean background image J.
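The routing in Eq. (5) can be sketched as follows; the linear-map encoders and decoder and the `restore` helper are placeholder stand-ins for the learned networks, chosen only to show the shared-decoder structure:

```python
import numpy as np

# Placeholder task-specific encoders E_rho and one shared decoder G.
rng = np.random.default_rng(1)
E = {t: rng.standard_normal((8, 3)) for t in ("rainfog", "snow", "raindrop")}
G = rng.standard_normal((3, 8))  # generic decoder shared by all types

def restore(I, degradation_type):
    """J = G(E_rho(I)): route through the matching encoder, then decode."""
    features = E[degradation_type] @ I  # task-specific feature extraction
    return G @ features                 # generic decoding to a clean image

I_snow = np.ones(3)                 # toy 3-element "degraded image"
J_hat = restore(I_snow, "snow")     # restored "clean image", shape (3,)
```

Only the encoder is specialized per degradation type; the decoder weights are shared, which is what makes the single-network, single-weight-set formulation possible.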