About this Book

This book constitutes the proceedings of the 17th International Conference on Passive and Active Measurement, PAM 2016, held in Heraklion, Crete, Greece, in March/April 2016.
The 30 full papers presented in this volume were carefully reviewed and selected from 93 submissions. They are organized in topical sections named: security and privacy; mobile and cellular; the last mile; testbeds and frameworks; web; DNS and routing; IXPs and MPLS; and scheduling and timing.



Security and Privacy


Exploring Tor’s Activity Through Long-Term Passive TLS Traffic Measurement

Tor constitutes one of the pillars of anonymous online communication. It allows its users to communicate while concealing from observers their location as well as the Internet resources they access. Since its first release in 2002, Tor has enjoyed increasing popularity, now commonly serving more than 2,000,000 simultaneous active clients on the network. However, even though Tor is widely popular, little is understood about the large-scale behavior of its network clients. In this paper, we present a longitudinal study of the Tor network based on passive analysis of TLS traffic at the Internet uplinks of four large universities inside and outside of the US. We show how Tor traffic can be identified by properties of its autogenerated certificates, and we use this knowledge to analyze the characteristics and development of Tor’s traffic over more than three years.
Johanna Amann, Robin Sommer
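The paper's key trick is that Tor's autogenerated TLS certificates carry random-looking subject names that stand out from ordinary certificates. A minimal sketch of such a classifier is below; the exact regular expression is an illustrative assumption, not the authors' actual matching rule.

```python
import re

# Illustrative pattern (assumption): a Tor-style autogenerated subject CN
# looks like "www." + a random base32-ish string + ".com" or ".net".
TOR_CN_RE = re.compile(r"^www\.[a-z2-7]{8,20}\.(?:com|net)$")

def looks_like_tor_cn(common_name: str) -> bool:
    """Flag certificate subject CNs that match the Tor-like random-name shape."""
    return TOR_CN_RE.match(common_name) is not None
```

In a passive deployment, a function like this would run over the subject names extracted from observed TLS handshakes; real classifiers would combine several certificate properties rather than the name alone.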

Measuring the Latency and Pervasiveness of TLS Certificate Revocation

Today, Transport-Layer Security (TLS) is the bedrock of Internet security for the web and web-derived applications. TLS depends on the X.509 Public Key Infrastructure (PKI) to authenticate endpoint identity. An essential part of a PKI is the ability to quickly revoke certificates, for example, after a key compromise. Today the Online Certificate Status Protocol (OCSP) is the most common way to quickly distribute revocation information. However, prior and current concerns about OCSP latency and privacy raise questions about its use. We examine OCSP using passive network monitoring of live traffic at the Internet uplink of a large research university and verify the results using active scans. Our measurements show that the median latency of OCSP queries is quite good: only 20 ms today, much less than the 291 ms observed in 2012. This improvement is because content delivery networks (CDNs) serve most OCSP traffic today; our measurements show 94 % of queries are served by CDNs. We also show that OCSP use is ubiquitous today: it is used by all popular web browsers, as well as important non-web applications such as MS-Windows code signing.
Liang Zhu, Johanna Amann, John Heidemann
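The headline numbers (20 ms median today versus 291 ms in 2012) come from per-query latency computed over passively observed OCSP request/response pairs. A minimal sketch of that computation, assuming timestamped (query, response) pairs as input:

```python
from statistics import median

def ocsp_latencies_ms(events):
    """events: iterable of (query_ts, response_ts) pairs in seconds.
    Returns the per-query latency in milliseconds."""
    return [(resp - query) * 1000.0 for query, resp in events]

def summarize(events):
    """Aggregate latencies the way the paper reports them: via the median."""
    lat = ocsp_latencies_ms(events)
    return {"median_ms": median(lat), "n": len(lat)}
```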

Tracking Personal Identifiers Across the Web

User tracking has become a de facto practice on the Web; however, our understanding of the scale and nature of this practice remains rudimentary. In this paper, we explore the connections amongst all parties of the Web, focusing especially on how trackers share user IDs. Using data collected from the browsing histories of 129 users as well as active experiments, we identify user-specific IDs that we suspect are used to track users. We find significant ID-sharing practices across different organisations providing various service categories. Our observations reveal that ID-sharing happens at a large scale regardless of the user profile size and profile state, such as logged-in or logged-out. We unexpectedly observe a higher number of ID-sharing domains when the user is logged out. We believe that our work reveals the huge gap between what is known about user tracking and what is done by this complex and important ecosystem.
Marjan Falahrastegar, Hamed Haddadi, Steve Uhlig, Richard Mortier
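The core detection step is finding user-specific ID values that turn up in requests to more than one organisation. A toy sketch of that grouping (the input shape and threshold are illustrative assumptions):

```python
from collections import defaultdict

def shared_ids(requests):
    """requests: iterable of (organisation, id_value) observations from
    a user's traffic. Returns {id_value: set of organisations} for every
    ID seen by two or more organisations -- the ID-sharing signal."""
    seen = defaultdict(set)
    for org, value in requests:
        seen[value].add(org)
    return {v: orgs for v, orgs in seen.items() if len(orgs) >= 2}
```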

Like a Pack of Wolves: Community Structure of Web Trackers

Web trackers are services that monitor user behavior on the web. The information they collect is ostensibly used for customization and targeted advertising. Due to rising privacy concerns, users have started to install browser plugins that prevent tracking of their web usage. Such plugins tend to address tracking activity by means of crowdsourced filters. While these tools have been relatively effective in protecting users from privacy violations, their crowdsourced nature requires significant human effort and provides no fundamental understanding of how trackers operate. In this paper, we leverage the insight that fundamental requirements for trackers’ success can be used as discriminating features for tracker detection. We begin by using traces from a mobile web proxy to model user browsing behavior as a graph. We then perform a transformation on the extracted graph that reveals very well-connected communities of trackers. Next, after discovering that trackers’ position in the transformed graph significantly differentiates them from “normal” vertices, we design an automated tracker detection mechanism using two simple algorithms. We find that both techniques for automated tracker detection are quite accurate (over 97 %) and robust (less than 2 % false positives). In conjunction with previous research, our findings can be used to build robust, fully automated online privacy preservation systems.
Vasiliki Kalavri, Jeremy Blackburn, Matteo Varvello, Konstantina Papagiannaki
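The graph transformation at the heart of the paper connects third parties that co-appear on the same first-party sites; trackers then surface as unusually well-connected vertices. A simplified sketch of that projection (a co-occurrence graph, not the authors' exact transformation):

```python
from collections import defaultdict
from itertools import combinations

def cooccurrence_degree(visits):
    """visits: {first_party_site: set of third_party_domains seen there}.
    Connect two third parties whenever they co-appear on a site, then
    return each third party's degree. Trackers tend to sit in densely
    connected communities, so high degree is a (rough) tracker signal."""
    neighbours = defaultdict(set)
    for third_parties in visits.values():
        for a, b in combinations(sorted(third_parties), 2):
            neighbours[a].add(b)
            neighbours[b].add(a)
    return {domain: len(n) for domain, n in neighbours.items()}
```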

Mobile and Cellular


A First Analysis of Multipath TCP on Smartphones

Multipath TCP is a recent TCP extension that enables multihomed hosts like smartphones to send and receive data over multiple interfaces. Despite the growing interest in this new TCP extension, little is known about its behavior with real applications in wireless networks. This paper analyzes a trace from a SOCKS proxy serving smartphones using Multipath TCP. This first detailed study of real Multipath TCP smartphone traffic reveals several interesting points about its behavior in the wild. It confirms the heterogeneity of wireless and cellular networks, which influences the scheduling of Multipath TCP. The analysis shows that most of the additional subflows are never used to send data. The amount of reinjections is also quantified and shows that they are not a major issue for the deployment of Multipath TCP. Using our methodology to detect handovers, we find that around a quarter of the connections using several subflows experience data handovers.
Quentin De Coninck, Matthieu Baerts, Benjamin Hesmans, Olivier Bonaventure
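One of the trace-analysis steps, quantifying how many additional subflows never carry data, reduces to a simple per-connection count. A sketch under an assumed input shape (per-connection byte counts per subflow, initial subflow first):

```python
def unused_subflows(connections):
    """connections: {conn_id: [bytes_sent_per_subflow, ...]} with the
    initial subflow listed first. Returns the fraction of additional
    subflows that never carried any data."""
    extra = unused = 0
    for subflow_bytes in connections.values():
        for b in subflow_bytes[1:]:          # skip the initial subflow
            extra += 1
            if b == 0:
                unused += 1
    return unused / extra if extra else 0.0
```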

Crowdsourcing Measurements of Mobile Network Performance and Mobility During a Large Scale Event

Cellular infrastructure in urban areas is provisioned to easily cope with the usual daily demands. When facing exceptionally high loads, e.g., due to large scale sport or music events, users complain about performance degradations of the mobile network. Analyzing the impact of large scale events on the mobile network infrastructure and how users perceive overload situations is essential to improve user experience. Therefore, a large data set is required to get a detailed understanding of the differences between providers, mobile devices, mobile network access technologies, and the mobility of people.
In this paper, we present experiences and results from a crowdsourcing measurement during a music festival in Germany with over 110,000 visitors per day. More than 1,000 visitors ran our crowdsourcing app to collect active and passive measurements of the mobile network and the user mobility. We show that there is significant performance degradation during the festival regarding DNS and HTTP failures as well as increased load times. Furthermore, we evaluate the impact of the carrier, the access technology, and the user mobility on the perceived performance.
Alexander Frömmgen, Jens Heuschkel, Patrick Jahnke, Fabio Cuozzo, Immanuel Schweizer, Patrick Eugster, Max Mühlhäuser, Alejandro Buchmann

A Study of MVNO Data Paths and Performance

Characterization of mobile data traffic performance is difficult given the inherent complexity and opacity of mobile networks, yet it is increasingly important as emerging wireless standards approach wireline-like latencies. Mobile virtual network operators (MVNOs) increase mobile network topology complexity due to additional infrastructure and network configurations. We collect and analyze traces on mobile carriers in the United States along with MVNO networks on each of the base carriers in order to discover differences in network performance and behavior. Ultimately, we find that traffic on MVNO networks takes more circuitous, less efficient paths to reach content servers compared to base operators. Factors such as location of the destination server as well as the provider network design are critical in better understanding behaviors and implications on performance for each of the mobile carriers.
Paul Schmitt, Morgan Vigil, Elizabeth Belding

Detecting Cellular Middleboxes Using Passive Measurement Techniques

The Transmission Control Protocol (TCP) follows the end-to-end principle – when a client establishes a connection with a server, the connection is only shared by two physical machines, the client and the server. In current cellular networks, a myriad of middleboxes disregard the end-to-end principle to enable network operators to deploy services such as content caching, compression, and protocol optimization to improve end-to-end network performance. If server operators remain unaware of such middleboxes, TCP connections may not be optimized specifically for middleboxes and instead are optimized for mobile devices. We argue that without costly active measurement, it remains challenging for server operators to reliably detect the presence of middleboxes that split TCP connections. In this paper, we present three techniques (based on latency, loss, and characteristics of TCP SYN packets) for server operators to passively identify Connection Terminating Proxies (CTPs) in cellular networks, with the goal to optimize TCP connections for faster content delivery. Using TCP and HTTP logs recorded by Content Delivery Network (CDN) servers, we demonstrate that our passive techniques are as reliable and accurate as active techniques in detecting CTPs deployed in cellular networks worldwide.
Utkarsh Goel, Moritz Steiner, Mike P. Wittie, Martin Flack, Stephen Ludin
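The latency-based technique rests on a simple observation: a Connection Terminating Proxy answers the TCP handshake near the client, so the handshake RTT is far smaller than the full request/response RTT to the origin server. A hedged sketch of that heuristic (the ratio threshold is an illustrative assumption, not the paper's calibrated value):

```python
def likely_ctp(handshake_rtt_ms, request_rtt_ms, ratio=3.0):
    """Latency-based CTP heuristic, as seen from a content server:
    if the observed handshake RTT is several times smaller than the
    HTTP request/response RTT, a split-TCP proxy likely terminated
    the connection close to the client."""
    return request_rtt_ms / handshake_rtt_ms >= ratio
```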

The Last Mile


Home Network or Access Link? Locating Last-Mile Downstream Throughput Bottlenecks

As home networks see increasingly faster downstream throughput speeds, a natural question is whether users are benefiting from these faster speeds or simply facing performance bottlenecks in their own home networks. In this paper, we ask whether downstream throughput bottlenecks occur more frequently in their home networks or in their access ISPs. We identify lightweight metrics that can accurately identify whether a throughput bottleneck lies inside or outside a user’s home network and develop a detection algorithm that locates these bottlenecks. We validate this algorithm in controlled settings and report on two deployments, one of which included 2,652 homes across the United States. We find that wireless bottlenecks are more common than access-link bottlenecks—particularly for home networks with downstream throughput greater than 20 Mbps, where access-link bottlenecks are relatively rare.
Srikanth Sundaresan, Nick Feamster, Renata Teixeira
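The decision the algorithm ultimately makes is whether the achievable wireless rate inside the home falls short of the access-link rate. A minimal decision sketch, assuming both throughputs have already been estimated by the lightweight metrics the paper describes (the margin is an illustrative assumption):

```python
def bottleneck_location(wlan_mbps, access_mbps, margin=0.9):
    """If the home wireless link cannot sustain (most of) the access-link
    rate, the downstream bottleneck is inside the home network."""
    return "home-wireless" if wlan_mbps < margin * access_mbps else "access-link"
```

This matches the paper's headline finding: for plans above 20 Mbps, the first branch fires far more often than the second.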

A Case Study of Traffic Demand Response to Broadband Service-Plan Upgrades

Internet service providers are facing mounting pressure from regulatory agencies to increase the speed of their service offerings to consumers; some are beginning to deploy gigabit-per-second speeds in certain markets, as well. The race to deploy increasingly faster speeds raises the question of whether users are exhausting the capacity that is already available. Previous work has shown that users who are already maximizing their usage on a given access link will continue to do so when they are migrated to a higher service tier.
In a unique controlled experiment involving thousands of Comcast subscribers in the same city, we analyzed usage patterns of two groups: a control group (105 Mbps) and a randomly selected treatment group that was upgraded to 250 Mbps without their knowledge. We study how users who are already on service plans with high downstream throughput respond when they are upgraded to a higher service tier without their knowledge, as compared to a similar control group. To our surprise, the difference between traffic demands between both groups is higher for subscribers with moderate traffic demands, as compared to high-volume subscribers. We speculate that even though these users may not take advantage of the full available capacity, the service-tier increase generally improves performance, which causes them to use the Internet more than they otherwise would have.
Sarthak Grover, Roya Ensafi, Nick Feamster

eXploring Xfinity

A First Look at Provider-Enabled Community Networks
Several broadband providers have been offering community WiFi as an additional service for existing customers and paid subscribers. These community networks provide Internet connectivity on the go for mobile devices and a path to offload cellular traffic. Rather than deploying new infrastructure or relying on the resources of an organized community, these provider-enabled community WiFi services leverage the existing hardware and connections of their customers. The past few years have seen significant growth in their popularity and coverage, and some municipalities and institutions have started to consider them as the basis for public Internet access.
In this paper, we present the first characterization of one such service – the Xfinity Community WiFi network. Taking the perspectives of the home-router owner and the public hotspot user, we characterize the performance and availability of this service in urban and suburban settings, at different times, between September 2014 and 2015. Our results highlight the challenges of providing these services in urban environments considering the tensions between coverage and interference, large obstructions and high population densities. Through a series of controlled experiments, we measure the impact to hosting customers, finding that in certain cases, the use of the public hotspot can degrade host network throughput by up to 67 % under high traffic on the public hotspot.
Dipendra K. Jha, John P. Rula, Fabián E. Bustamante

NAT Revelio: Detecting NAT444 in the ISP

In this paper, we propose NAT Revelio, a novel test suite and methodology for detecting NAT deployments beyond the home gateway, also known as NAT444 (e.g., Carrier Grade NAT). Since NAT444 solutions may impair performance for some users, understanding the extent of NAT444 deployment in the Internet is of interest to policymakers, ISPs, and users. We perform an initial validation of the NAT Revelio test suite within a controlled NAT444 trial environment involving operational residential lines managed by a large operator in the UK. We leverage access to a unique SamKnows deployment in the UK and collect information about the existence of NAT444 solutions from 2,000 homes and 26 ISPs. To demonstrate the flexibility of NAT Revelio, we also deployed it in project BISmark, an open platform for home broadband internet research. We analyze the results and discuss our findings.
Andra Lutu, Marcelo Bagnulo, Amogh Dhamdhere, K. C. Claffy
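The essence of a Revelio-style check can be stated compactly: if the WAN address of the home gateway differs from the public address seen by an external server, there is another NAT beyond the home, and a WAN address in RFC 6598 shared space (or RFC 1918 private space) strongly suggests Carrier Grade NAT. The sketch below is a simplification of the paper's test suite, not a reimplementation of it:

```python
import ipaddress

SHARED = ipaddress.ip_network("100.64.0.0/10")   # RFC 6598 CGN shared space

def nat444_suspected(gateway_wan_ip, public_ip):
    """True if the gateway's WAN address and the externally observed
    public address disagree and the WAN address sits in shared (CGN)
    or private space -- a strong NAT444 hint."""
    if gateway_wan_ip == public_ip:
        return False
    wan = ipaddress.ip_address(gateway_wan_ip)
    return wan in SHARED or wan.is_private
```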

Testbeds and Frameworks


GPLMT: A Lightweight Experimentation and Testbed Management Framework

Conducting experiments in federated, distributed, and heterogeneous testbeds is a challenging task for researchers. Researchers have to take care of the whole experiment life cycle, ensure the reproducibility of each run, and the comparability of the results. We present GPLMT, a flexible and lightweight framework for managing testbeds and the experiment life cycle. GPLMT provides an intuitive way to formalize experiments. The resulting experiment description is portable across varying experimentation platforms. GPLMT enables researchers to manage and control networked testbeds and resources, and conduct experiments on large-scale, heterogeneous, and distributed testbeds. We state the requirements and the design of GPLMT, describe the challenges of developing and using such a tool, and present selected user studies along with their experience of using GPLMT in varying scenarios. GPLMT is free and open source software and can be obtained from the project’s GitHub repository.
Matthias Wachs, Nadine Herold, Stephan-A. Posselt, Florian Dold, Georg Carle

Periscope: Unifying Looking Glass Querying

Looking glasses (LG) servers enhance our visibility into Internet connectivity and performance by offering a set of distributed vantage points that allow both data plane and control plane measurements. However, the lack of input and output standardization and limitations in querying frequency have hindered the development of automated measurement tools that would allow systematic use of LGs. In this paper we introduce Periscope, a publicly-accessible overlay that unifies LGs into a single platform and automates the discovery and use of LG capabilities. The system architecture combines crowd-sourced and cloud-hosted querying mechanisms to automate and scale the available querying resources. Periscope can handle large bursts of requests, with an intelligent controller coordinating multiple concurrent user queries without violating the various LG querying rate limitations. As of December 2015 Periscope has automatically extracted 1,691 LG nodes in 297 Autonomous Systems. We show that Periscope significantly extends our view of Internet topology obtained through RIPE Atlas and CAIDA’s Ark, while the combination of traceroute and BGP measurements allows more sophisticated measurement studies.
Vasileios Giotsas, Amogh Dhamdhere, K. C. Claffy

Analyzing Locality of Mobile Messaging Traffic using the MATAdOR Framework

Mobile messaging services have gained a large share in global telecommunications. Unlike conventional services like phone calls, text messages or email, they do not feature a standardized environment enabling a federated and potentially local service architecture. We present an extensive and large-scale analysis of communication patterns for four popular mobile messaging services between 28 countries and analyze the locality of communication and the resulting impact on user privacy. We show that server architectures for mobile messaging services are highly centralized in single countries. This forces messages to drastically deviate from a direct communication path, enabling hosting and transfer countries to potentially intercept and censor traffic. To conduct this work, we developed a measurement framework to analyze traffic of such mobile messaging services. It enables automated experiments with mobile messaging applications, is transparent to those applications, and requires no modifications to them.
Quirin Scheitle, Matthias Wachs, Johannes Zirngibl, Georg Carle

Web

Scout: A Point of Presence Recommendation System Using Real User Monitoring Data

This paper describes Scout, a statistical-modeling-driven approach to automatically recommend new Point of Presence (PoP) centers for web sites. PoPs help reduce a website’s page download time dramatically. However, where to build new PoP centers given the current assets of existing ones is a problem that has rarely been studied in a quantitative and principled way before; it was mainly approached through empirical studies or by applying industry experience and intuition. In this paper, we propose a novel approach that estimates the impact of PoP centers by building a statistical model using the real user monitoring data collected by the web sites, and recommends the next PoPs to build. We also consider the problem of recommending PoPs using other metrics, such as a user’s number of page views. We show empirically that our approach works well, through experiments using real data from millions of user visits to a major social networking site.
Yang Yang, Liang Zhang, Ritesh Maheshwari, Zaid Ali Kahn, Deepak Agarwal, Sanjay Dubey
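Once real-user-monitoring data yields latency estimates from each user group to each candidate PoP, a natural baseline is greedy selection: pick the candidate that most reduces total latency given the PoPs already built. This sketch is a simple baseline under assumed inputs, not LinkedIn's actual statistical model:

```python
def recommend_pop(user_latency, candidates, current):
    """user_latency: {user_group: {pop: latency_ms}} from RUM data.
    Greedily pick the candidate PoP that most reduces total latency,
    assuming each user group is served by its nearest PoP."""
    def total(pops):
        return sum(min(lat[p] for p in pops) for lat in user_latency.values())
    base = total(current)
    best = min(candidates, key=lambda c: total(current + [c]))
    return best, base - total(current + [best])
```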

Is the Web HTTP/2 Yet?

Version 2 of the Hypertext Transfer Protocol (HTTP/2) was finalized in May 2015 as RFC 7540. It addresses well-known problems with HTTP/1.1 (e.g., head of line blocking and redundant headers) and introduces new features (e.g., server push and content priority). Though HTTP/2 is designed to be the future of the web, it remains unclear whether the web will—or should—hop on board. To shed light on this question, we built a measurement platform that monitors HTTP/2 adoption and performance across the Alexa top 1 million websites on a daily basis. Our system is live and up-to-date results can be viewed at [1]. In this paper, we report findings from an 11 month measurement campaign (November 2014 – October 2015). As of October 2015, we find 68,000 websites reporting HTTP/2 support, of which about 10,000 actually serve content with it. Unsurprisingly, popular sites are quicker to adopt HTTP/2 and 31 % of the Alexa top 100 already support it. For the most part, websites do not change as they move from HTTP/1.1 to HTTP/2; current web development practices like inlining and domain sharding are still present. Contrary to previous results, we find that these practices make HTTP/2 more resilient to losses and jitter. In all, we find that 80 % of websites supporting HTTP/2 experience a decrease in page load time compared with HTTP/1.1 and the decrease grows in mobile networks.
Matteo Varvello, Kyle Schomp, David Naylor, Jeremy Blackburn, Alessandro Finamore, Konstantina Papagiannaki

Modeling HTTP/2 Speed from HTTP/1 Traces

With the standardization of HTTP/2, content providers want to understand the benefits and pitfalls of transitioning to the new standard. Using a large dataset of HTTP/1.1 resource timing data from production traffic on Akamai’s CDN, and a model of HTTP/2 behavior, we obtain the distribution of performance differences between the protocol versions for nearly 280,000 downloads. We find that HTTP/2 provides significant performance improvements in the tail, and, for websites for which HTTP/2 does not improve median performance, we explore how optimizations like prioritization and push can improve performance, and how these improvements relate to page structure.
Kyriakos Zarifis, Mark Holland, Manish Jain, Ethan Katz-Bassett, Ramesh Govindan
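The intuition behind modeling HTTP/2 speed from HTTP/1.1 traces can be captured in a deliberately crude model: HTTP/1.1 opens several parallel connections and pays a round trip per sequential batch of requests, while HTTP/2 multiplexes everything over one connection. This is a toy illustration of the modeling idea, not the paper's model:

```python
import math

def page_load_model(n_resources, total_kb, rtt_ms, bw_kbps, h1_conns=6):
    """Toy estimate of page load time under both protocol versions.
    HTTP/1.1: one RTT per batch of h1_conns requests; HTTP/2: one RTT
    overall. Transfer time is bandwidth-bound in both cases."""
    transfer_ms = total_kb / bw_kbps * 1000.0
    h1 = math.ceil(n_resources / h1_conns) * rtt_ms + transfer_ms
    h2 = rtt_ms + transfer_ms
    return h1, h2
```

Even this toy model reproduces the qualitative finding that gains concentrate where round trips dominate, i.e. on pages with many resources and high RTT.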

Behind Box-Office Sales: Understanding the Mechanics of Automation Spam in Classifieds

In spite of being detrimental to user experiences, the problem of automated messages on online classified websites is widespread due to a low barrier to entry and limited enforcement of rules against such messages. Many of these messages may appear legitimate, but turn into spam when they are posted redundantly. This behavior drowns out other legitimate users from having their voices heard. We label this problem as automation spam – legitimate messages that are posted at a rate that overwhelms normal posts. In this paper, we characterize automation on a popular classifieds website, Craigslist, and find that two-thirds of the posts with URLs are automated. Automation is most prevalent in categories dominated by businesses, such as Tickets, Cars by Dealer, and Real Estate, with 67–92 % of the posts with URLs exhibiting automation. Even in categories with less automation, intermittent automation still overwhelms non-automated users, demonstrating that no category is safe.
Andrew J. Kaizer, Minaxi Gupta, Mejbaol Sajib, Anirban Acharjee, Qatrunnada Ismail

DNS and Routing


Towards a Model of DNS Client Behavior

The Domain Name System (DNS) is a critical component of the Internet infrastructure as it maps human-readable hostnames into the IP addresses the network uses to route traffic. Yet, the DNS behavior of individual clients is not well understood. In this paper, we present a characterization of DNS clients with an eye towards developing an analytical model of client interaction with the larger DNS ecosystem. While this is initial work and we do not arrive at a DNS workload model, we highlight a variety of behaviors and characteristics that enhance our mental models of how DNS operates and move us towards an analytical model of client-side DNS operation.
Kyle Schomp, Michael Rabinovich, Mark Allman

Detecting DNS Root Manipulation

We present techniques for detecting unauthorized DNS root servers in the Internet using primarily endpoint-based measurements from RIPE Atlas, supplemented with BGP routing announcements from RouteViews and RIPE RIS. The first approach analyzes the latency to the root server and the second approach looks for route hijacks. We demonstrate the importance and validity of these techniques by measuring the only root server (“B”) not widely distributed using anycast. Our measurements establish the presence of several DNS proxies and a DNS root mirror.
Ben Jones, Nick Feamster, Vern Paxson, Nicholas Weaver, Mark Allman
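The latency-based approach relies on physics: light in fibre travels at roughly two-thirds of c, about 200 km per millisecond one way, so an RTT below twice the great-circle distance divided by that speed is impossible and indicates a local mirror or proxy. A sketch of that sanity check (coordinates and the propagation constant are the only inputs; the 200 km/ms figure is a common rule of thumb):

```python
import math

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine distance between two points on Earth, in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi, dlmb = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * 6371 * math.asin(math.sqrt(a))

def rtt_too_low(probe_lat, probe_lon, root_lat, root_lon, measured_rtt_ms):
    """True if the measured RTT is below the physical minimum for the
    distance to the root server's advertised location -- evidence that
    a mirror or proxy answered instead."""
    min_rtt = 2 * great_circle_km(probe_lat, probe_lon, root_lat, root_lon) / 200.0
    return measured_rtt_ms < min_rtt
```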

Behind IP Prefix Overlaps in the BGP Routing Table

The IP space has been divided and assigned as a set of IP prefixes. Due to the longest prefix match forwarding rule, a single assigned IP prefix can be further divided into multiple distinct IP spaces, resulting in a BGP routing table that contains over half a million distinct, but overlapping, entries. Another side-effect of this forwarding rule is that any anomalous announcement can result in a denial of service for the prefix owner. It is thus essential to describe and clarify the use of these overlapping prefixes. In order to do this, we use Internet Routing Registries (IRR) databases as semantic data to group IP prefixes into families of prefixes that are owned by the same organization. We use BGP data in order to populate these families with prefixes that are announced on the Internet. We introduce several metrics which enable us to study how these families behave. With these metrics, we detail how organisations prefer to subdivide their IP space, underlining global trends in IP space management. We show that there is a large amount of information in the IRR that appears to be actively maintained by a number of ISPs.
Quentin Jacquemart, Guillaume Urvoy-Keller, Ernst Biersack
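The grouping step can be sketched purely from covering relations: attach each announced prefix to the least-specific announced prefix that contains it. The paper additionally uses IRR ownership data to form families; this simplified version uses coverage only:

```python
import ipaddress

def prefix_families(prefixes):
    """Group announced prefixes into families rooted at their
    least-specific covering prefix (coverage-only approximation of the
    paper's IRR-assisted grouping)."""
    nets = sorted((ipaddress.ip_network(p) for p in prefixes),
                  key=lambda n: n.prefixlen)
    families = {}
    for net in nets:
        parent = next((r for r in families if net != r and net.subnet_of(r)), None)
        if parent is None:
            families[net] = [net]
        else:
            families[parent].append(net)
    return {str(k): [str(n) for n in v] for k, v in families.items()}
```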

Characterizing Rule Compression Mechanisms in Software-Defined Networks

Software-defined networking (SDN) separates the network policy specification from its configuration and gives applications control over the forwarding rules that route traffic. On large networks that host several applications, the number of rules that network switches must handle can easily exceed tens of thousands. Most switches cannot handle rules of this volume because the complex rule matching in SDN (e.g., wildcards, diverse match fields) requires switches to store rules on TCAM, which is expensive and limited in size.
We perform a measurement study using two real-world network traffic traces to understand the effectiveness and side-effects of manual and automatic rule compression techniques. Our results show that not using any rule management mechanism is likely to result in a rule set that does not fit on current OpenFlow switches. Using rule expiration timeouts reduces the configuration footprint on a switch without affecting rule semantics but at the expense of up to 40 % increase in control channel overhead. Other manual (e.g., wildcards, limiting match fields) or automatic (e.g., combining similar rules) mechanisms introduce negligible overhead but change the original configuration and may misdirect less than 1 % of the flows. Our work uncovers trade-offs critical to both operators and programmers writing network policies that must satisfy both infrastructure and application constraints.
Curtis Yu, Cristian Lumezanu, Harsha V. Madhyastha, Guofei Jiang

IXPs and MPLS

Blackholing at IXPs: On the Effectiveness of DDoS Mitigation in the Wild

DDoS attacks remain a serious threat not only to the edge of the Internet but also to the core peering links at Internet Exchange Points (IXPs). Currently, the main mitigation technique is to blackhole traffic to a specific IP prefix at upstream providers. Blackholing is an operational technique that allows a peer to announce a prefix via BGP to another peer, which then discards traffic destined for this prefix. However, as far as we know there is only anecdotal evidence of the success of blackholing.
Largely unnoticed by research communities, IXPs have deployed blackholing as a service for their members. In this first-of-its-kind study, we shed light on the extent to which blackholing is used by the IXP members and what effect it has on traffic.
Within a 12-week period we found that traffic to more than 7,864 distinct IP prefixes was blackholed by 75 ASes. The daily patterns emphasize that not only is there a highly variable number of new announcements every day but, surprisingly, there is also a consistently high number of announcements (>1,000). Moreover, we highlight situations in which blackholing succeeds in reducing the DDoS attack traffic.
Christoph Dietzel, Anja Feldmann, Thomas King
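In BGP data, blackholed announcements can be recognised by the well-known BLACKHOLE community 65535:666 (standardised in RFC 7999) or by an IXP-specific equivalent. A minimal classifier over a route's community set; the IXP-specific community in the test is a hypothetical example:

```python
BLACKHOLE = (65535, 666)   # RFC 7999 well-known BLACKHOLE community

def is_blackhole(communities, ixp_blackhole_communities=frozenset()):
    """True if a BGP announcement carries the well-known BLACKHOLE
    community or one of the IXP-specific blackholing communities
    supplied by the caller."""
    return BLACKHOLE in communities or bool(ixp_blackhole_communities & set(communities))
```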

Dissecting the Largest National Ecosystem of Public Internet eXchange Points in Brazil

Many efforts are devoted to increasing the understanding of the complex and evolving Internet ecosystem. Internet eXchange Points (IXP) are shared infrastructures where Autonomous Systems (AS) implement peering agreements for their traffic exchange. In recent years, IXPs have become an increasing research target since they represent an interesting microcosm of the Internet diversity and a strategic vantage point to deliver end-user services. In this paper, we analyze the largest set of public IXPs in a single country, the national public IXP project in Brazil. Our in-depth analyses are based on BGP data from all looking glass servers and provide insights into the peering ecosystem per IXP and from a nation-wide perspective. We propose a novel peering affinity metric well-suited to measure the connectivity between different types of ASes. We found lower values of peering density in the Brazilian ecosystem compared to more mature ecosystems, such as AMS-IX, DE-CIX, LINX, and MSK-IX. Our final contribution is sharing the 15 GB dataset along with all supporting code.
Samuel Henrique Bucke Brito, Mateus A. S. Santos, Ramon dos Reis Fontes, Danny A. Lachos Perez, Christian Esteve Rothenberg

traIXroute: Detecting IXPs in traceroute paths

Internet eXchange Points (IXP) are critical components of the Internet infrastructure that affect its performance, evolution, security and economics. In this work, we introduce techniques to augment the well-known traceroute tool with the capability of identifying if and where exactly IXPs are crossed in end-to-end paths. Knowing this information can give end-users more transparency into how their traffic flows in the Internet. Our tool, called traIXroute, exploits data from the PeeringDB (PDB) and the Packet Clearing House (PCH) about IXP IP addresses of BGP routers, IXP members, and IXP prefixes. We show that the used data are both rich, i.e., we find 12,716 IP addresses of BGP routers in 460 IXPs, and mostly accurate, i.e., our validation shows 92–93 % accuracy. In addition, 78.2 % of the detected IXPs in our data are based on multiple diverse evidence and therefore give higher confidence in the detected IXPs than relying solely on IXP prefixes. To demonstrate the utility of our tool, we use it to show that one out of five paths in our data crosses an IXP and that paths do not normally cross more than a single IXP, as expected based on the valley-free model of Internet policies. Furthermore, although the top IXPs both in terms of paths and members are located in Europe, US IXPs attract many more paths than their number of members indicates.
George Nomikos, Xenofontas Dimitropoulos
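The simplest of traIXroute's evidence sources is prefix membership: a hop whose address falls inside a known IXP peering prefix marks where the path crosses the exchange. A sketch of that check (the prefix in the test is DE-CIX Frankfurt's peering LAN, used purely as an illustration; real prefix lists come from PeeringDB and PCH):

```python
import ipaddress

def ixp_crossings(path_ips, ixp_prefixes):
    """Return (hop_number, ip) for every traceroute hop whose address
    lies inside one of the known IXP peering prefixes."""
    nets = [ipaddress.ip_network(p) for p in ixp_prefixes]
    hits = []
    for hop, ip in enumerate(path_ips, start=1):
        addr = ipaddress.ip_address(ip)
        if any(addr in n for n in nets):
            hits.append((hop, ip))
    return hits
```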

A Brief History of MPLS Usage in IPv6

Recent studies have documented the fast deployment of IPv6: it is growing quickly and is increasingly adopted not only by Internet service providers but also by servers and end-hosts. In parallel, research has been conducted to discover and assess the usage of MPLS tunnels. Indeed, recent developments in the ICMP protocol make certain categories of MPLS tunnels transparent to traceroute probing. However, these studies focus only on IPv4, where MPLS is strongly deployed.
In this paper, we provide a first look at how MPLS is used in IPv6 networks, using traceroute data collected by CAIDA. At first glance, we observe that MPLS deployment and usage differ greatly between IPv4 and IPv6, in particular in the way MPLS label stacks are used. While label stacks with at least two labels are marginal in IPv4 (and mostly correspond to VPN usage), they are prevalent in IPv6. After a deeper analysis of the typical label stack content in IPv6, we show that such tunnels result from the use of 6PE. This is not really surprising, since this mechanism was specifically designed to forward IPv6 traffic through MPLS tunnels across networks that are not fully IPv6 compliant. However, we show that it results not from non-dual-stack routers but rather from the absence of native IPv6 MPLS signaling protocols. Finally, we investigate a large Tier-1 network, Cogent, that stands out with an original set-up.
Yves Vanaubel, Pascal Mérindol, Jean-Jacques Pansiot, Benoit Donnet
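The 6PE signature the abstract refers to can be sketched as a stack-depth check: in 6PE, the egress PE expects the reserved IPv6 Explicit NULL label (value 2) at the bottom of the stack, beneath the transport label. The heuristic below is a simplified illustration of that observation, not the paper's classifier.

```python
# Reserved MPLS label value 2 = IPv6 Explicit NULL (RFC 3032).
IPV6_EXPLICIT_NULL = 2

def looks_like_6pe(label_stack):
    """Heuristic: a stack of depth >= 2 whose bottom-of-stack label is
    IPv6 Explicit NULL suggests 6PE forwarding of IPv6 traffic."""
    return len(label_stack) >= 2 and label_stack[-1] == IPV6_EXPLICIT_NULL

print(looks_like_6pe([16050, 2]))  # → True  (transport label + v6 NULL)
print(looks_like_6pe([16050]))     # → False (single transport label)
```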

Scheduling and Timing


An Empirical Study of Android Alarm Usage for Application Scheduling

Android applications often rely on alarms to schedule background tasks. Since Android KitKat, applications can opt in to deferrable alarms, which allow the OS to perform alarm batching to reduce device awake time and increase the chances of network traffic being generated simultaneously by different applications. This mechanism can result in significant battery savings if appropriately adopted.
In this paper we perform a large-scale study of the 22,695 most popular free applications in the Google Play Market to quantify whether expectations of more energy-efficient background app execution are indeed warranted. We identify a significant chasm between the way application developers build their apps and Android's attempt to address the energy inefficiencies of background app execution. We find that close to half of the applications using alarms do not benefit from alarm batching capabilities. The reasons are that (i) they tend to target Android SDKs lagging behind by more than 18 months, and (ii) they tend to include third-party libraries that use non-deferrable alarms.
Mario Almeida, Muhammad Bilal, Jeremy Blackburn, Konstantina Papagiannaki
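The batching idea above can be modeled as classic interval stabbing: each deferrable alarm defines an [earliest, latest] firing window, and overlapping windows share one device wakeup. This is a textbook greedy illustration, assumed for exposition, not Android's actual AlarmManager implementation.

```python
def batch_wakeups(requests):
    """Greedy minimum set of wakeups covering all alarm windows.

    requests -- list of (earliest, latest) firing windows; a non-deferrable
    alarm is a zero-length window (earliest == latest).
    """
    wakeups = []
    for earliest, latest in sorted(requests, key=lambda r: r[1]):
        if wakeups and wakeups[-1] >= earliest:
            continue  # served by the previous batched wakeup
        wakeups.append(latest)  # fire as late as the window allows
    return wakeups

# Three overlapping deferrable windows collapse into one wakeup at t=10;
# the non-deferrable alarm at t=30 forces a second wakeup.
print(batch_wakeups([(0, 10), (5, 15), (8, 12), (30, 30)]))  # → [10, 30]
```

Apps that use exact (non-deferrable) alarms shrink every window to a point, which is why, per the study, nearly half of alarm-using apps forgo the batching benefit entirely.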

Network Timing and the 2015 Leap Second

Using a testbed with reference timestamping, we collected timing data from public Stratum-1 NTP servers during the leap second event of end-June 2015. We found a wide variety of anomalous server-side behaviors, both at the NTP protocol level and in the server clocks themselves, which can last days or even weeks after the event. Out of 176 servers, only 61% had no erroneous behavior related to the leap second event that we could detect.
Darryl Veitch, Kanthaiah Vijayalayan
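One protocol-level behavior measurable in such a study is the leap indicator (LI) field, which NTP servers should set ahead of a leap second. A minimal decoder for the first byte of an NTP header (per RFC 5905) might look as follows; the field packing is standard, but the function itself is illustrative.

```python
def parse_ntp_flags(first_byte):
    """Decode the LI/VN/Mode fields packed into the first byte of an
    NTP packet header (RFC 5905): 2 bits LI, 3 bits version, 3 bits mode."""
    return {
        "leap": (first_byte >> 6) & 0x3,   # 01 = leap second will be inserted
        "version": (first_byte >> 3) & 0x7,
        "mode": first_byte & 0x7,          # 4 = server reply
    }

# LI=01 (insertion pending), NTPv4, mode 4 (server).
print(parse_ntp_flags(0b01_100_100))  # → {'leap': 1, 'version': 4, 'mode': 4}
```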

Can Machine Learning Benefit Bandwidth Estimation at Ultra-high Speeds?

Tools for estimating end-to-end available bandwidth (AB) send out a train of packets and observe how inter-packet gaps change over a given network path. In ultra-high speed networks, the fine inter-packet gaps are fairly susceptible to noise introduced by transient queuing and bursty cross-traffic. Past work uses smoothing heuristics to alleviate the impact of noise, but at the cost of requiring large packet trains. In this paper, we consider a machine-learning approach for learning the AB from noisy inter-packet gaps. We conduct extensive experimental evaluations on a 10 Gbps testbed, and find that supervised learning can help realize ultra-high speed bandwidth estimation with more accuracy and smaller packet trains than the state of the art. Further, we find that when training is based on (i) more bursty cross-traffic and (ii) extreme configurations of interrupt coalescence, the machine-learning framework is fairly robust to cross-traffic, NIC platform, and the configuration of NIC parameters.
Qianwen Yin, Jasleen Kaur
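The raw signal such tools work from is the rate implied by the probe train's inter-packet gaps; a naive estimator is sketched below. At 10 Gbps, microsecond-scale gap noise from queuing and interrupt coalescence corrupts exactly this quantity, which is the noise the paper's learned models absorb. Names and numbers here are illustrative.

```python
def receive_rate(packet_size_bits, gaps_s):
    """Naive available-bandwidth signal: the receive rate implied by the
    mean inter-packet gap of a probe train, in bits per second."""
    mean_gap = sum(gaps_s) / len(gaps_s)
    return packet_size_bits / mean_gap

# 1500-byte probes arriving 12 us apart imply a 1 Gbps receive rate.
rate = receive_rate(1500 * 8, [12e-6, 12e-6, 12e-6])
print(f"{rate / 1e9:.1f} Gbps")  # → 1.0 Gbps
```

A single coalesced interrupt that merges two gaps can halve or double this estimate, which motivates either long smoothed trains (prior work) or the supervised models evaluated here.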

