In this paper, we aim to tackle the bulk data transfer
problem in optical circuit-switched networks with assistive
storage, i.e., bulk data transfer through SnF OCS. Our con-
tributions are summarized as follows:
1) We propose a routing framework, named the time-
shifted multilayer graph (TS-MLG), for bulk data
routing and scheduling in optical circuit-switched
networks with assistive storage. This framework is
based on a multilayer graph built from a set of snap-
shots (i.e., layers) of the dynamics in a network. By
performing shortest path routing on the TS-MLG,
“end-to-end” paths over time and space are found
for requests. With the TS-MLG, a single routing process can realize both spatial assignments and temporal arrangements for bandwidth and storage, greatly simplifying the provisioning process in SnF OCS networks (see the illustrative sketch following this list).
2) We study how the number of layers used for routing
affects the network blocking performance. Numerical
simulations show that the majority of requests
can be served within the first two layers. In our
simulations, request blocking can be avoided within
29 layers. Simulations also show that limiting the number of layers used for routing reduces computational complexity, revealing a trade-off between computational complexity and blocking performance.
3) We investigate how the number of layers in the graph is affected by traffic characteristics and link capacity. Simulation results show that a lighter traffic load and simpler traffic characteristics require fewer layers to achieve a given blocking performance; for a fixed amount of traffic, increasing link capacity also reduces the number of layers required.
The remainder of this paper is organized as follows.
In Section II, we briefly review the existing efforts on bulk
data transfer with assistive storage. We argue that assis-
tive storage plays a significant role in delivering bulk data,
in both packet-switched and circuit-switched networks.
In Section III, we present the dynamics of the TS-MLG. In
Section IV, we show how to route with the TS-MLG. Numerical analysis and discussions are presented in Section V.
We discuss related work in Section VI. Section VII con-
cludes this paper.
II. BULK DATA TRANSFER WITH ASSISTIVE STORAGE
Bulk data, generated from datacenter backups, e-Science
applications, and the like, are often delay tolerant [10].
Exploiting this delay tolerance can improve network performance or cost effectiveness when delivering such data. In this section, we summarize the work on bulk data transfer with assistive storage and the various goals achieved by this approach. These results motivate us
to build a generic framework under which the problem of
delivering bulk data in networks with assistive storage
can be studied.
Time-shift circuit switching [3] was proposed to use assistive storage to shift data transfers on a link to times when bandwidth is available. Time-shift advance reservation mechanisms were designed in [4,5] to relax the time constraints on intermediate nodes and improve resource utilization. While these efforts [4,5] assumed perfect prior knowledge of the network and unlimited storage to simplify the problem, practical approaches based on existing techniques were also introduced to maintain up-to-date resource availability information (e.g., the amount of bandwidth available on each link).
The work in [11] illustrated that shifting delay-tolerant traffic to off-peak hours is an effective way to reduce wasteful usage of resources during peak hours. Two solutions were proposed to shift delay-tolerant traffic: the first offered flat-rate-compatible incentives to end users, while the second equipped the network with assistive storage, allowing ISPs to perform SnF relaying of delay-tolerant traffic. In [10], the authors developed analytical models for transferring bulk data through single-hop and single-path transfers and showed the huge potential of assistive storage for transferring multiterabyte data on a daily basis at no additional cost, although routing and scheduling issues were not considered. They also proposed NetStitcher [8], a system that employed assistive storage to stitch together unutilized bandwidth for bulk data transfer.
The work in [6,7,12] optimized decisions for transmitting
multiple requests with different constraints. Postcard [7]
modeled a cost minimization problem for inter-data-center
traffic scheduling through SnF. This work revealed that
SnF outperformed the other approaches, especially when
the link capacity was more limited or data were more
delay tolerant. In [12], a bulk data transfer system employ-
ing SnF was built based on the Beacon platform and
OpenFlow APIs, with practical online algorithms to opti-
mize routing. In [6], inter-data-center bulk data traffic was balanced in two dimensions: temporally, by applying SnF to reduce the peak traffic load on the links, and spatially, by lexicographically minimizing the congestion of all links. An ISP-friendly scheme, named D4D [13], was proposed to reduce the inter-domain traffic between data centers via SnF.
The work in [3,6,8,14] and [15] considered the effect of
storage capacity on the network performance. The impact
of storage capacity on the network throughput was inves-
tigated in [3]. The optimization problems in [8] and [6] took
into account the storage capacity constraint. The authors
in [14] also aimed at transferring bulk data at minimal cost
via SnF. They studied multiple types of storage models,
such as flat-fee storage and storage with time-varying
capacities and costs. The work in [15] proved that, under certain conditions, storage could increase the amount of data delivered within a given time horizon or reduce the delay incurred in delivering a certain amount of data.
In [15], a joint storage and routing policy was designed
to maximize the amount of data transferred. In addition, [3–5] and [6–8,12–15] assumed time-slotted transfers, in