Concise Paper: Freeway: Adaptively Isolating the Elephant and
Mice Flows on Different Transmission Paths

Wei Wang∗‡, Yi Sun∗†, Kai Zheng§, Mohamed Ali Kaafar¶, Dan Li, Zhongcheng Li∗

∗Institute of Computing Technology, CAS, †State Key Laboratory of Networking and Switching Technology,
‡University of Chinese Academy of Science, §IBM China Research Lab, ¶NICTA, Australia, Tsinghua University
Abstract—The competition for network resources in today's datacenters is extremely intense between long-lived elephant flows and latency-sensitive mice flows. Achieving high throughput for the former and low latency for the latter requires a compromise that recent research has not successfully resolved, mainly because elephant and mice flows are transferred over shared links without any differentiation. However, current datacenters usually adopt Clos-based topologies, e.g., Fat-tree/VL2, so there exist multiple shortest paths between any pair of source and destination. In this paper, we leverage this observation to propose a flow scheduling scheme, Freeway, which adaptively partitions the transmission paths into low latency paths and high throughput paths for the two types of flows. We propose an algorithm to dynamically adjust the number of the two types of paths according to the real-time traffic. Based on these separated transmission paths, we propose flow-type-specific scheduling and forwarding methods to fully utilise the available bandwidth. Our simulation results show that Freeway significantly reduces the delay of mice flows by 85.8% and achieves 9.2% higher throughput compared with Hedera.
I. INTRODUCTION
Datacenters are widely deployed as large-scale computing and storage facilities. More and more Internet services, such as web, video, and online social networks, have migrated to datacenters. The datacenter network (DCN) interconnects thousands of servers to provide high-performance communication for various applications, and has become a hot spot in recent research.
Traffic in datacenters mainly consists of two types of flows [7]: bulk data transfers (also called elephant flows, generated by applications such as data backup and virtual machine migration), and short-lived data exchanges (also called mice flows, generated by applications such as web search and Map-Reduce). Elephant flows require large bandwidth to achieve high throughput, but have no strict completion deadline. As shown by Benson et al. [5], elephant flows contribute approximately 80% of the whole traffic in datacenter networks. On the other hand, short-lived mice flows are often very sensitive to latency and must obey a deadline constraint. The latency requirements of these applications have a significant impact on users' perceived quality, which requires mice flows to be granted higher priority than the long-lived elephant flows.
Resource competition and conflicts are inevitable when the two different types of flows are transmitted simultaneously through the same path in DCNs. Specifically, high throughput requires more packets queuing in the buffer, while low latency requires ultra-short queues to reduce queuing delay. So far, scheduling algorithm designs in DCNs have traded the latency constraint of mice flows off against the throughput needs of elephant flows, or vice versa, and considered this a necessary evil. Hedera [2], MicroTE [6] and HULL [4] propose solutions that transfer elephant flows and mice flows through shared paths, making it challenging and complex to find the optimal tradeoff between throughput and latency. Indeed, exploring the tradeoff between throughput and latency on the same path is not the best way to solve the problem.
In addition, a DCN exhibits a different topology from that of typical enterprise networks. Generally, there are multiple shortest paths between any source and destination pair. For instance, in a k-ary Fat-tree network [1], there are k^2/4 shortest paths between each pair of servers. Recent datacenter designs rely on a large number of commodity switches, so the topology often takes the form of a multi-rooted tree to achieve horizontal scaling of hosts, at the cost of decreasing aggregate bandwidth moving up the hierarchy, as noted by [2]. Therefore, as datacenter topologies scale out horizontally, the number of available shortest paths keeps increasing.
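For concreteness, the k^2/4 figure follows from the Fat-tree structure: each of the (k/2)^2 core switches roots exactly one distinct shortest path between servers in different pods. A minimal sketch (the helper name is ours, not from the paper):

```python
def fat_tree_shortest_paths(k):
    """Number of equal-cost shortest paths between two servers in
    different pods of a k-ary Fat-tree: one per core switch, and
    there are (k/2)^2 core switches, i.e. k^2/4 in total."""
    assert k % 2 == 0, "Fat-tree port count k must be even"
    return (k // 2) ** 2

print(fat_tree_shortest_paths(4))   # 4-ary Fat-tree: 4 shortest paths
print(fat_tree_shortest_paths(48))  # 48-ary Fat-tree: 576 shortest paths
```

Even for modest switch radices, the path diversity that Freeway partitions is substantial.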
With such a large number of paths available to carry the traffic, in this paper we propose a new scheme named Freeway that isolates the transmission paths used by elephant flows from those used by mice flows, by dynamically partitioning the paths into low latency paths and high throughput paths. Specifically, elephant flows are transferred through paths whose links' buffers accommodate more queued packets to achieve high throughput, while mice flows are transferred through paths whose buffers keep an ultra-low queue to achieve low latency. Freeway efficiently utilises these isolated multipath resources in the DCN to achieve both low delay and high throughput. The main contribution of Freeway consists of two parts:
Firstly, we propose a dynamic path partition algorithm for elephant and mice flows. Our goal is to provide differentiated transmission services through adaptive path isolation that meets the demand of real-time traffic. The low latency paths maintain the low queuing time required by mice flows, while the high throughput paths build up longer queues to accommodate more packets and meet the demand of throughput-hungry flows. Path isolation in Freeway fundamentally decouples the two conflicting goals, enabling us to optimise latency and throughput independently.
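To illustrate the flavour of such an adaptive partition (this is our own simplified sketch under assumed inputs, not the paper's actual algorithm), one could reserve just enough high throughput paths to carry the measured elephant demand and leave the remainder as low latency paths:

```python
import math

def partition_paths(total_paths, elephant_demand_gbps,
                    link_capacity_gbps, min_low_latency=1):
    """Hypothetical partition sketch: size the high-throughput
    set to the measured elephant demand, keeping at least
    `min_low_latency` paths free for mice flows.
    Returns (high_throughput_paths, low_latency_paths)."""
    needed = math.ceil(elephant_demand_gbps / link_capacity_gbps)
    high = max(0, min(needed, total_paths - min_low_latency))
    return high, total_paths - high

# 4 equal-cost paths of 10 Gbps each, 25 Gbps of elephant demand:
print(partition_paths(4, 25, 10))  # (3, 1)
```

The key property the real algorithm must preserve is the same as here: mice flows always retain at least one dedicated low latency path, however heavy the elephant demand becomes.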
Secondly, based on the isolated paths, we propose different scheduling and forwarding methods for the two types of flows in both the control plane and the data plane. In Freeway, elephant flows are scheduled at the controller, which uses a global view of the network to select optimal paths, while mice flows are scheduled at the local switch and forwarded to the least congested output link among the low latency paths.
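The local decision for mice flows can be sketched as follows (an illustrative fragment with names of our own choosing; the paper's data-plane mechanism may differ). A switch simply picks, among its low latency ports, the one with the shortest local queue:

```python
def forward_mice(queue_len_by_port, low_latency_ports):
    """Illustrative local forwarding rule: send a mice flow to the
    least congested low-latency output port, judged purely by the
    switch's own per-port queue lengths (no controller involved)."""
    return min(low_latency_ports, key=lambda p: queue_len_by_port[p])

# Ports 1 and 2 are low latency; port 2 has the empty queue:
print(forward_mice({1: 5, 2: 0, 3: 7}, [1, 2]))  # 2
```

Keeping this choice inside the switch avoids a controller round trip on the critical path of latency-sensitive flows, which is precisely why elephant and mice flows are handled in different planes.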
The organisation of this paper is as follows: we present our
design of Freeway in Section II, describing the path partition
2014 IEEE 22nd International Conference on Network Protocols
978-1-4799-6204-4/14 $31.00 © 2014 IEEE
DOI 10.1109/ICNP.2014.59