Optimizing multimedia content delivery over next-generation optical networks
Citation:
Emanuele Di Pascale, 'Optimizing multimedia content delivery over next-generation optical networks', [thesis], Trinity College (Dublin, Ireland). School of Computer Science & Statistics, 2015, pp 113
Abstract:
This thesis analyzes the performance of a Peer-to-Peer (P2P) multimedia content delivery system for a
network architecture based on next-generation Passive Optical Networks (PONs).
A PON is an optical access technology that is able to deliver high bandwidth capacities at a fraction
of the cost of traditional point-to-point fiber solutions; this is achieved by sharing the same feeder fiber
among several customers through the use of optical splitters. Established standards such as GPON
and EPON have a reach of about 20 km from the Central Office and a fan-out of around 32–64 users.
Next-generation PONs aim to increase this reach, in order to enable the consolidation of central offices;
to increase the split size, in order to help reduce the cost per customer; and to increase the bandwidth
capacity to multiple 10 Gbps channels.
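As a rough illustration of why split size and line rate matter (an illustrative back-of-the-envelope calculation, not a figure from the thesis), the worst-case per-user rate of a PON is simply the channel line rate divided by the splitter fan-out:

```python
# Illustrative only: worst-case per-user rate when every user on the
# splitter draws traffic simultaneously.
def per_user_rate_gbps(line_rate_gbps: float, split: int) -> float:
    """Channel line rate shared equally across one splitter's fan-out."""
    return line_rate_gbps / split

# GPON: 2.5 Gbps downstream shared by 32 users -> ~0.078 Gbps each.
gpon = per_user_rate_gbps(2.5, 32)

# A next-generation 10 Gbps channel over a 64-way split -> ~0.156 Gbps each,
# which is why next-generation PONs stack multiple 10 Gbps channels.
ngpon = per_user_rate_gbps(10.0, 64)
```

In practice dynamic bandwidth allocation lets idle users' capacity be reassigned, so typical per-user throughput is well above this floor.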
One of the reasons why operators are investing in Fiber-to-the-X (FTTX) solutions is to increase the
capacity of the access section in order to remove what traditionally was the bandwidth bottleneck of the
network. Over the last decade multimedia streaming services have become increasingly popular, with
companies like Netflix, Amazon Prime and many others reaching millions of customers and billions in
revenue. With both the number of active subscribers and the quality of the streamed videos steadily on
the rise, network infrastructures are being put under an increasing strain to support these data-intensive
services. Several reports have shown that the vast majority of the data currently traversing the Internet
on a daily basis is related in one form or another to multimedia content retrieval.
Increasing the capacity in the access is certainly going to remove one of the obstacles to high-definition
streaming of multimedia content. However, as users start to take advantage of the increased bandwidth
allowance granted to them by fiber, the aggregation section of the network (i.e., the core) will start to
suffer; and since no transmission technology better than fiber exists, the only way to increase core
capacity is to stack up network equipment, a process that is neither efficient nor cost-effective. It is hence
imperative to find alternative solutions that will allow us to support bandwidth-intensive multimedia
services while keeping operators’ costs to a minimum, ensuring that these services are sustainable in the
long term.
One promising strategy is to place caches managed by the Internet Service Provider (ISP)
at the edge of the network. Once content has been delivered to a customer, it can be stored and redistributed
to other users in the area to minimize bandwidth consumption in the core. More specifically, in
this work we show that, by reserving a small amount (4–16 GB) of the storage space typically available on most Set-Top Boxes (STBs), and by allowing users to cache multimedia content that they requested
for their own personal consumption, we can greatly improve the efficiency of these next-generation networks.
Indeed, the combined effect of the large symmetric upstream/downstream capacity of PONs and
the customer aggregation brought by long-reach feeders greatly increases the efficacy of locality-awareness
– i.e., a strategy by which content requests are redirected to local available sources whenever possible.
In particular, symmetric access bandwidth means that a single source is in principle sufficient to
provide all the upload capacity that a requester is able to handle; at the same time, the consolidated
architecture that results from bypassing the metro section greatly expands the pool of potential sources
attached to the access section of the requester, thus increasing the chances of a local data transfer.
Furthermore, since the caches are managed by the network operator itself and integrated into the equipment
required to connect to the Internet, they are less susceptible to churn and allow for a simpler
implementation of locality-aware policies.
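The locality-aware matching just described can be sketched as follows. This is a minimal illustration; the class and method names are ours, not taken from the thesis or from PLACeS:

```python
from collections import defaultdict

class Oracle:
    """Toy locality-aware oracle: tracks which STB caches hold which content
    and answers requests with a source attached to the requester's own
    metro/core node whenever one exists."""

    def __init__(self):
        # content_id -> set of (node_id, stb_id) pairs holding a cached copy
        self.holders = defaultdict(set)

    def record_cache(self, content_id, node_id, stb_id):
        """Register that an STB under a given metro/core node cached an item."""
        self.holders[content_id].add((node_id, stb_id))

    def find_source(self, content_id, requester_node):
        """Prefer a local cache; fall back to a remote cache, then the origin."""
        for node, stb in self.holders[content_id]:
            if node == requester_node:
                return ("local", node, stb)
        for node, stb in self.holders[content_id]:
            return ("remote", node, stb)
        return ("origin", None, None)
```

Confining transfers to the "local" branch is exactly what keeps multimedia traffic inside the metro/core node of the requester.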
Extensive simulation campaigns, carried out first using a steady-state analyzer and subsequently
through our custom event-driven simulator PLACeS, show that a locality-aware P2P strategy can confine
most multimedia traffic inside the metro/core node from which the requests originated, thus drastically
reducing core bandwidth utilization. An energy consumption model taking into account both static and
dynamic consumption of electronic devices was formulated, showing that locality-aware P2P is able to
reduce the overall power required to run the network and to offer cost-saving opportunities for operators.
In order to implement locality-aware policies for content delivery, we postulate the existence of
an oracle service, whose responsibilities include keeping track of the content of each user cache and
matching requests with a locally available source whenever possible. This thesis explores some of the issues
that might arise when designing such a service, and it proposes two proof-of-concept implementations
based on OpenFlow. In particular, the aim of these implementations is to show that such an oracle
could be designed to be transparent to the underlying multimedia applications, allowing it to be used
in conjunction with legacy services that were never expected to be locality-aware. Furthermore, using
OpenFlow allows us to integrate the oracle functionality with the control plane of the operator’s network,
making it possible to implement redirection policies that react to anomalous load conditions.
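Transparency to the application can be achieved by rewriting flows in the network rather than modifying the client: the controller matches the client's request towards the origin server and rewrites the destination to a local cache. The sketch below is schematic; the dictionaries merely mimic the shape of OpenFlow match/set-field rules and are not a real controller API:

```python
def redirect_rules(client_ip, server_ip, cache_ip, tcp_port=80):
    """Build a hypothetical pair of flow rules performing destination NAT:
    client->server traffic is steered to the cache, and replies are rewritten
    so the client still believes it is talking to the origin server."""
    forward = {
        "match":   {"ipv4_src": client_ip, "ipv4_dst": server_ip,
                    "tcp_dst": tcp_port},
        "actions": [("set_ipv4_dst", cache_ip), ("output", "port_to_cache")],
    }
    reverse = {
        "match":   {"ipv4_src": cache_ip, "ipv4_dst": client_ip,
                    "tcp_src": tcp_port},
        "actions": [("set_ipv4_src", server_ip), ("output", "port_to_client")],
    }
    return forward, reverse
```

Because the rewrite happens at the switch, a legacy streaming application needs no modification to benefit from the redirection.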
Finally, we present an optimization algorithm to reduce the amount of caching storage required to
implement our proposed P2P solution. The algorithm takes as input the number of requests observed
by the oracle service for previous elements of the catalog, and estimates future request patterns for
elements with the same popularity rank. This information is then used to determine which contents
should be cached at each STB. Our simulations show that such a solution is able to achieve the same
levels of locality as a traditional cache eviction policy (such as Least Frequently Used, or LFU) while
reducing the amount of storage required by up to 77%. Even higher storage savings of up to 92% can
be achieved if we are willing to accept a reduction of about 5–8% in the percentage of requests served
locally to the access segment of the requester.
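The core idea of the algorithm, allocating cache replicas in proportion to the demand expected at each popularity rank, can be sketched as follows. This is an illustrative sketch using a Zipf-like popularity model; the thesis instead derives its estimates from the requests actually observed by the oracle for previous catalog elements:

```python
def expected_requests(rank, total_requests, catalog_size, alpha=0.8):
    """Zipf-like estimate of the requests for the content at a given
    popularity rank (alpha and the model itself are assumptions here)."""
    norm = sum(1.0 / r ** alpha for r in range(1, catalog_size + 1))
    return total_requests * (1.0 / rank ** alpha) / norm

def replicas_per_rank(total_requests, catalog_size, per_copy_capacity):
    """Number of cached copies per rank, assuming each STB copy can serve
    at most per_copy_capacity requests; unpopular items get no replica
    and their residual demand falls back to the origin server."""
    plan = {}
    for rank in range(1, catalog_size + 1):
        demand = expected_requests(rank, total_requests, catalog_size)
        plan[rank] = int(demand // per_copy_capacity)
    return plan
```

Storing only the copies the demand estimate justifies, instead of letting every STB evict reactively, is what yields the storage savings over LFU reported above.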
Author: Di Pascale, Emanuele
Advisor:
Ruffini, MarcoQualification name:
Doctor of Philosophy (Ph.D.)Publisher:
Trinity College (Dublin, Ireland). School of Computer Science & StatisticsNote:
TARA (Trinity's Access to Research Archive) has a robust takedown policy. Please contact us if you have any concerns: rssadmin@tcd.ieType of material:
thesisCollections:
Availability:
Full text availableLicences: