
2014 | Book

Passive and Active Measurement

15th International Conference, PAM 2014, Los Angeles, CA, USA, March 10-11, 2014, Proceedings

Edited by: Michalis Faloutsos, Aleksandar Kuzmanovic

Publisher: Springer International Publishing

Book Series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed proceedings of the 15th International Conference on Passive and Active Measurement, PAM 2014, held in Los Angeles, CA, USA, in March 2014. The 24 revised full papers presented were carefully reviewed and selected from 76 submissions. The papers have been organized in the following topical sections: Internet wireless and mobility; measurement design, experience and analysis; performance measurement; protocol and application behavior; characterization of network behavior; and network security and privacy. In addition, 7 poster papers are included.

Table of Contents

Frontmatter

Internet Wireless and Mobility

RadioProphet: Intelligent Radio Resource Deallocation for Cellular Networks

Traditionally, radio resources in cellular networks are released by statically configured inactivity timers, causing substantial resource inefficiencies. We propose a novel system, RadioProphet (RP), which dynamically and intelligently determines in real time when to deallocate radio resources by predicting the network idle time based on traffic history. We evaluate RP using 7-month-long real-world cellular traces. Properly configured, RP correctly predicts 85.9% of idle time instances and achieves radio energy savings of 59.1% at the cost of 91.0% of signaling overhead, outperforming existing proposals. We also implement and evaluate RP on real Android devices, demonstrating its negligible runtime overhead.

Junxian Huang, Feng Qian, Z. Morley Mao, Subhabrata Sen, Oliver Spatscheck
Mobile Network Performance from User Devices: A Longitudinal, Multidimensional Analysis

In the cellular environment, operators, researchers and end users have poor visibility into network performance for devices. Improving visibility is challenging because this performance depends on factors that include carrier, access technology, signal strength, geographic location and time. Addressing this requires longitudinal, continuous and large-scale measurements from a diverse set of mobile devices and networks.

This paper takes a first look at cellular network performance from this perspective, using 17 months of data collected from devices located throughout the world. We show that (i) there is significant variance in key performance metrics both within and across carriers; (ii) this variance is at best only partially explained by regional and time-of-day patterns; (iii) the stability of network performance varies substantially among carriers. Further, we use the dataset to diagnose the causes behind observed performance problems and identify additional measurements that will improve our ability to reason about mobile network behavior.

Ashkan Nikravesh, David R. Choffnes, Ethan Katz-Bassett, Z. Morley Mao, Matt Welsh
Diagnosing Path Inflation of Mobile Client Traffic

As mobile Internet becomes more popular, carriers and content providers must engineer their topologies, routing configurations, and server deployments to maintain good performance for users of mobile devices. Understanding the impact of Internet topology and routing on mobile users requires broad, longitudinal network measurements conducted from mobile devices. In this work, we are the first to use such a view to quantify and understand the causes of geographically circuitous routes from mobile clients using 1.5 years of measurements from devices on 4 US carriers. We identify the key elements that can affect the Internet routes taken by traffic from mobile users (client location, server locations, carrier topology, carrier/content-provider peering). We then develop a methodology to diagnose the specific cause for inflated routes. Although we observe that the evolution of some carrier networks improves performance in some regions, we also observe many clients - even in major metropolitan areas - that continue to take geographically circuitous routes to content providers, due to limitations in the current topologies.

Kyriakos Zarifis, Tobias Flach, Srikanth Nori, David Choffnes, Ramesh Govindan, Ethan Katz-Bassett, Z. Morley Mao, Matt Welsh
An End-to-End Measurement Study of Modern Cellular Data Networks

With the significant increase in cellular data usage, it is critical to better understand the characteristics and behavior of cellular data networks. With both laboratory experiments and crowd-sourcing measurements, we investigated the characteristics of the cellular data networks for the three mobile ISPs in Singapore. We found that i) the transmitted packets tend to arrive in bursts; ii) there can be large variations in the instantaneous throughput over a short period of time; iii) large separate downlink buffers are typically deployed, which can cause high latency when the throughput is low; and iv) the networks typically implement some form of fair queuing policy.

Yin Xu, Zixiao Wang, Wai Kay Leong, Ben Leong

Measurement Design, Experience and Analysis

A Second Look at Detecting Third-Party Addresses in Traceroute Traces with the IP Timestamp Option

Artifacts in traceroute measurement output can lead to false inferences of AS-level links and paths when used to deduce AS topology. One traceroute artifact is caused by routers that respond to traceroute probes with a source address not in the path towards the destination, i.e. an off-path address. The most well-known traceroute artifact, the third-party address, is caused by off-path addresses that map to ASes not in the corresponding BGP path. In PAM 2013, Marchetta et al. proposed a technique to detect off-path addresses in traceroute paths [14]. Their technique assumed that a router IP address reported in a traceroute path towards a destination was off-path if, in a subsequent probe towards the same destination, the router did not insert a timestamp into a pre-specified timestamp option in the probe’s IP header. However, no standard precisely defines how routers should handle the pre-specified timestamp option, and implementations are inconsistent. Marchetta et al. claimed that most IP addresses in a traceroute path are off-path, and that consecutive off-path addresses are common. They reported no validation of their results. We cross-validate their approach with a first-principles approach, rooted in the assumption that subnets between connected routers are often /30 or /31 because routers are often connected with point-to-point links. We infer if an address in a traceroute path corresponds to the interface on a router that received the packet (the in-bound interface) by attempting to infer if its /30 or /31 subnet mate is an alias of the previous hop. We traceroute from 8 Ark monitors to 80K randomly chosen destinations, and find that most observed addresses are configured on the in-bound interface on a point-to-point link connecting two routers, i.e. are on-path. Because the technique from [14] reports 70.9%–74.9% of these addresses as being off-path, we conclude it is not reliable at inferring which addresses are off-path or third-party.

Matthew Luckie, kc claffy
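
The subnet-mate test at the core of this cross-validation is simple address arithmetic. Below is a minimal Python sketch using only the standard library ipaddress module; the function name is ours, and the subsequent alias-resolution step against the previous hop is not shown.

```python
import ipaddress

def subnet_mate(addr: str, prefixlen: int) -> str:
    """Return the other usable address in addr's /30 or /31 subnet."""
    assert prefixlen in (30, 31)
    ip = ipaddress.ip_address(addr)
    net = ipaddress.ip_network(f"{addr}/{prefixlen}", strict=False)
    if prefixlen == 31:
        # RFC 3021: both addresses in a /31 are usable host addresses.
        candidates = [net.network_address, net.broadcast_address]
    else:
        # In a /30, the two middle addresses are the usable hosts.
        candidates = list(net.hosts())
    return str(next(c for c in candidates if c != ip))

# If a hop reports 192.0.2.9, its subnet mate is then tested (via
# alias resolution, not shown) against the previous hop's address.
print(subnet_mate("192.0.2.9", 30))  # -> 192.0.2.10
print(subnet_mate("192.0.2.9", 31))  # -> 192.0.2.8
```
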
Ingress Point Spreading: A New Primitive for Adaptive Active Network Mapping

Among outstanding challenges to Internet-wide topology mapping using active probes is balancing efficiency, e.g. induced load and time, with coverage. Toward maximizing probe utility, we introduce Ingress Point Spreading (IPS). IPS utilizes ingress diversity discovered in prior rounds of probing to rank-order available vantage points such that future probes traverse all known paths into a target network. We implement and deploy IPS to probe ~49k random prefixes drawn from the global BGP table using a distributed collection of vantage points. As compared to existing mapping systems, we discover 12% more unique vertices and 12% more edges using ~50% fewer probes, in half the time.

Guillermo Baltra, Robert Beverly, Geoffrey G. Xie
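
The following is a hedged greedy sketch of the ingress-spreading idea, not the authors' actual ranking algorithm: vantage points are ordered so that every ingress known from prior probing rounds for a target prefix is covered before any ingress is probed twice. All names are illustrative.

```python
def rank_vps(vp_ingress):
    """vp_ingress: dict mapping vantage point -> ingress router it
    used for this prefix in prior probing rounds. Returns an order
    in which to probe, covering each known ingress first."""
    order, seen = [], set()
    for vp, ingress in vp_ingress.items():
        if ingress not in seen:       # first VP per known ingress
            order.append(vp)
            seen.add(ingress)
    order += [vp for vp in vp_ingress if vp not in order]
    return order

# vp1 and vp2 enter through the same ingress, so vp3 is promoted.
print(rank_vps({"vp1": "ingA", "vp2": "ingA", "vp3": "ingB"}))
# -> ['vp1', 'vp3', 'vp2']
```
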
On Searching for Patterns in Traceroute Responses

We study active traceroute measurements from more than 1,000 vantage points towards a few targets over 24 hours or more. Our aim is to detect patterns in the data that correspond to significant operational events. Because traceroute data is complex and noisy, little work in this area has been published to date. First we develop a measure for the differences between successive traceroute measurements, then we use this measure to cluster changes across all vantage points and assess the meaning and descriptive power of these clusters. Large-scale operational events stand out clearly in our 3D visualisations; our clustering technique could be developed further to make such events visible to the operator community in near-real time.

Nevil Brownlee
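
The paper defines its own measure of difference between successive traceroutes; as a rough illustration of the kind of quantity involved, here is a hop-set Jaccard distance (our simplification, not the author's metric):

```python
def traceroute_distance(path_a, path_b):
    """1.0 when two traceroute paths share no hops, 0.0 when they
    share all of them (order-insensitive simplification)."""
    a, b = set(path_a), set(path_b)
    return 1.0 - len(a & b) / len(a | b) if (a | b) else 0.0

print(traceroute_distance(["r1", "r2", "r3"], ["r1", "r2", "r4"]))  # 0.5
```
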
Volume-Based Transit Pricing: Is 95 the Right Percentile?

The 95th percentile billing mechanism has been an industry de facto standard for transit providers for well over a decade. While the simplicity of the scheme makes it attractive as a billing mechanism, dramatic evolution in traffic patterns, associated interconnection practices and industry structure over the last two decades motivates an obvious question: is it still appropriate? In this paper, we evaluate the 95th percentile pricing mechanism from the perspective of transit providers, using a decade of traffic statistics from SWITCH (a large research/academic network), and more recent traffic statistics from 3 Internet Exchange Points (IXPs). We find that over time, heavy-inbound and heavy-hitter networks are able to achieve a lower 95th-to-average ratio than heavy-outbound and moderate-hitter networks, possibly due to their ability to better manage their traffic profile. The 95th percentile traffic volume also does not necessarily reflect the cost burden to the provider, motivating our exploration of an alternative metric that better captures the costs imposed on a network. We define the provision ratio for a customer, which captures its contribution to the provider’s peak load.

Vamseedhar Reddyvari Raja, Amogh Dhamdhere, Alessandra Scicchitano, Srinivas Shakkottai, kc claffy, Simon Leinen
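
For readers unfamiliar with the billing scheme, the sketch below shows how the 95th percentile is computed from 5-minute traffic samples, together with one plausible reading of the provision ratio; the paper's exact definition may differ, so treat this as an assumption-laden illustration.

```python
import numpy as np

def percentile_95(samples_mbps):
    """Industry-style 95th percentile: sort the 5-minute samples
    and discard the top 5%."""
    s = np.sort(np.asarray(samples_mbps))
    return s[int(np.ceil(0.95 * len(s))) - 1]

def provision_ratio(customer, provider_total):
    """Customer's rate at the provider's peak interval, relative to
    the customer's own average (our reading of the metric)."""
    peak = int(np.argmax(provider_total))
    return customer[peak] / customer.mean()

rng = np.random.default_rng(0)
n = 12 * 24 * 30                          # 5-minute samples in 30 days
cust = rng.gamma(2.0, 50.0, size=n)       # one customer's rate, Mbps
rest = rng.gamma(2.0, 500.0, size=n)      # the provider's other traffic
print(percentile_95(cust) / cust.mean())  # 95th-to-average ratio
print(provision_ratio(cust, cust + rest))
```
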

Performance Measurement

Dissecting Round Trip Time on the Slow Path with a Single Packet

Researchers and operators often measure Round Trip Time (RTT) when monitoring, troubleshooting, or otherwise assessing network paths. However, because the RTT combines all hops traversed along both the forward and reverse path, it can be difficult to interpret or to attribute delay to particular path segments.

In this work, we present an approach using a single packet to dissect the RTT into chunks mapped to specific portions of the path. Using the IP Prespecified Timestamp option directed at intermediate routers, it provides RTT estimations along portions of the slow path. Using multiple vantage points (116 PlanetLab nodes), we show that the proposed approach can be applied on more than 77% of the considered paths. Finally, we present preliminary results for two use cases (home network contribution to the RTT and per-Autonomous-System RTT contribution) to demonstrate its potential in practical scenarios.

Pietro Marchetta, Alessio Botta, Ethan Katz-Bassett, Antonio Pescapé
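
Once cumulative per-router RTT estimates are available, the dissection reduces to differencing them. A small sketch of that arithmetic (our illustration; it does not reproduce the probe design itself):

```python
def rtt_chunks(rtt_up_to):
    """rtt_up_to: dict mapping router -> estimated RTT (ms) from the
    source to that router and back, in path order. Differencing the
    cumulative values attributes delay to each path portion."""
    names, vals = list(rtt_up_to), list(rtt_up_to.values())
    chunks = {names[0]: vals[0]}
    for prev, cur, name in zip(vals, vals[1:], names[1:]):
        chunks[name] = cur - prev
    return chunks

print(rtt_chunks({"home_gw": 8.0, "isp_edge": 19.5, "dest": 43.0}))
# -> {'home_gw': 8.0, 'isp_edge': 11.5, 'dest': 23.5}
```
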
Is Our Ground-Truth for Traffic Classification Reliable?

The validation of the different proposals in the traffic classification literature is a controversial issue. Usually, these works base their results on a ground-truth built from private datasets and labeled by techniques of unknown reliability. This makes the validation and comparison with other solutions an extremely difficult task. This paper aims to be a first step towards addressing the validation and trustworthiness problem of network traffic classifiers. We perform a comparison between 6 well-known DPI-based techniques, which are frequently used in the literature for ground-truth generation. In order to evaluate these tools we have carefully built a labeled dataset of more than 500,000 flows, which contains traffic from popular applications. Our results present PACE, a commercial tool, as the most reliable solution for ground-truth generation. However, among the available open-source tools, NDPI and especially Libprotoident also achieve very high precision, while other, more frequently used tools (e.g., L7-filter) are not reliable enough and should not be used for ground-truth generation in their current form.

Valentín Carela-Español, Tomasz Bujlow, Pere Barlet-Ros
Detecting Intentional Packet Drops on the Internet via TCP/IP Side Channels

We describe a method for remotely detecting intentional packet drops on the Internet via side-channel inferences. That is, given two arbitrary IP addresses on the Internet that meet some simple requirements, our proposed technique can discover packet drops (e.g., due to censorship) between the two remote machines, as well as infer in which direction the packet drops are occurring. The only major requirements for our approach are a client with a global IP Identifier (IPID) and a target server with an open port. We require no special access to the client or server. Our method is robust to noise because we apply intervention analysis based on an autoregressive-moving-average (ARMA) model. In a measurement study using our method featuring clients from multiple continents, we observed that, of all measured client connections to Tor directory servers that were censored, 98% of those were from China, and only 0.63% of measured client connections from China to Tor directory servers were not censored. This is congruent with current understandings about global Internet censorship, leading us to conclude that our method is effective.

Roya Ensafi, Jeffrey Knockel, Geoffrey Alexander, Jedidiah R. Crandall
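
A simplified sketch of the underlying IPID side channel follows, leaving out the ARMA intervention analysis that makes the real method robust to noise; the threshold and classification labels here are illustrative assumptions.

```python
# A client with a globally incrementing IPID leaks how many packets
# it sends. If SYNs spoofed with the client's address reach the
# server, the server's SYN/ACKs go to the client, which answers each
# with a RST, advancing its IPID counter beyond its baseline rate.
def classify(ipid_advance, baseline, n_spoofed, slack=2):
    """Compare the client's observed IPID advance during the spoofed
    burst against baseline traffic plus the expected RSTs."""
    extra = ipid_advance - baseline
    if extra >= n_spoofed - slack:
        return "no drop detected"
    if extra <= slack:
        return "drops on the server-to-client path (or SYNs blocked)"
    return "inconclusive / partial loss"

print(classify(ipid_advance=14, baseline=5, n_spoofed=10))
```
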
The Need for End-to-End Evaluation of Cloud Availability

People’s computing lives are moving into the cloud, making understanding cloud availability increasingly critical. Prior studies of Internet outages have used ICMP-based pings and traceroutes. While these studies can detect network availability, we show that they can be inaccurate at estimating cloud availability. Without care, ICMP probes can underestimate availability because ICMP is not as robust as application-level measurements such as HTTP. They can overestimate availability if they measure reachability of the cloud’s edge, missing failures in the cloud’s back-end. We develop methodologies sensitive to five “nines” of reliability, and then we compare ICMP and end-to-end measurements for both cloud VM and storage services. We show case studies where one fails and the other succeeds, and our results highlight the importance of application-level retries to reach high precision. When possible, we recommend end-to-end measurement with application-level protocols to evaluate the availability of cloud services.

Zi Hu, Liang Zhu, Calvin Ardi, Ethan Katz-Bassett, Harsha V. Madhyastha, John Heidemann, Minlan Yu

Protocol and Application Behavior

Exposing Inconsistent Web Search Results with Bobble

Given their critical role as gateways to Web content, the search results a Web search engine provides to its users have an out-sized impact on the way each user views the Web. Previous studies have shown that popular Web search engines like Google employ sophisticated personalization engines that can occasionally provide dramatically inconsistent views of the Web to different users. Unfortunately, even if users are aware of this potential, it is not straightforward for them to determine the extent to which a particular set of search results differs from those returned to other users, nor the factors that contribute to this personalization.

We present the design and implementation of Bobble, a Web browser extension that contemporaneously executes a user’s Google search query from a variety of different world-wide vantage points under a range of different conditions, alerting the user to the extent of inconsistency present in the set of search results returned to them by Google. Using more than 75,000 real search queries issued by over 170 users during a nine-month period, we explore the frequency and nature of inconsistencies that arise in Google search queries. In contrast to previously published results, we find that 98% of all Google search results display some inconsistency, with a user’s geographic location being the dominant factor influencing the nature of the inconsistency.

Xinyu Xing, Wei Meng, Dan Doozan, Nick Feamster, Wenke Lee, Alex C. Snoeren
Modern Application Layer Transmission Patterns from a Transport Perspective

We aim to broadly study the ways that modern applications use the underlying protocols and networks. Such an understanding is necessary when designing and optimizing lower-layer protocols. Traditionally—as prior work shows—applications have been well represented as bulk transfers, often preceded by application-layer handshaking. Recent suggestions posit that application evolution has eclipsed this simple model, and a typical pattern is now a series of transactions over a single transport layer connection. In this initial study we examine application transmission patterns via packet traces from two networks to better understand the ways that modern applications use TCP.

Matt Sargent, Ethan Blanton, Mark Allman
Third-Party Identity Management Usage on the Web

Many websites utilize third-party identity management services to simplify access to their services. Given the privacy and security implications for end users, an important question is how websites select their third-party identity providers and how this impacts the characteristics of the emerging identity management landscape seen by the users. In this paper we first present a novel Selenium-based data collection methodology that identifies and captures the identity management relationships between sites and the intrinsic characteristics of the websites that form these relationships. Second, we present the first large-scale characterization of the third-party identity management landscape and the relationships that make up this emerging landscape. As a reference point, we compare and contrast our observations with the somewhat better understood third-party content provider landscape. Interesting findings include a much higher skew towards websites selecting popular identity provider sites than is observed among content providers, with sites being more likely to form identity management relationships with providers of similar cultural, geographic, and general site focus. These findings are both positive and negative. For example, the high skew in usage places greater responsibility on the few organizations that bear the increased information-leakage cost associated with highly aggregated personal information, but it also reduces the user’s control of access to this information.

Anna Vapen, Niklas Carlsson, Anirban Mahanti, Nahid Shahmehri
Understanding the Reachability of IPv6 Limited Visibility Prefixes

The main functionality of the Internet is to provide global connectivity for every node attached to it. In light of the IPv4 address space depletion, large networks are in the process of deploying IPv6. In this paper we perform an extensive analysis of how BGP route propagation affects global reachability of the active IPv6 address space in the context of this unique transition of the Internet infrastructure. We propose and validate a methodology for testing the reachability of an IPv6 address block active in the routing system. Leveraging the global visibility status of the IPv6 prefixes evaluated with the BGP Visibility Scanner, we then use this methodology to verify whether the visibility status of the prefix impacts its reachability at the interdomain level. We perform active measurements using the RIPE Atlas platform. We test destinations with different BGP visibility degrees, i.e., limited visibility (LV), high visibility (HV) and dark prefixes. We show that the IPv6 LV prefixes (v6LVPs) are generally reachable, mostly due to a less-specific HV covering prefix (v6HVP). However, this is not the case for the dark address space, which, lacking a covering v6HVP, is largely unreachable.

Andra Lutu, Marcelo Bagnulo, Cristel Pelsser, Olaf Maennel

Characterization of Network Behavior

Violation of Interdomain Routing Assumptions

We challenge a set of assumptions that are frequently used to model interdomain routing in the Internet by confronting them with routing decisions that are actually taken by ASes, as revealed through publicly available BGP feeds. Our results quantify for the first time the extent to which such assumptions are too simple to model real-world Internet routing policies. This should introduce a note of caution into future work that makes these assumptions and should prompt attempts to find more accurate models.

Riad Mazloum, Marc-Olivier Buob, Jordan Augè, Bruno Baynat, Dario Rossi, Timur Friedman
Here Be Web Proxies

HTTP proxies serve numerous roles, from performance enhancement to access control to network censorship, but often operate stealthily, without explicitly indicating their presence to the communicating endpoints. In this paper we present an analysis of the evidence of proxying manifest in executions of the ICSI Netalyzr spanning 646,000 distinct IP addresses (“clients”). To identify proxies we employ a range of detectors at the transport and application layer, and report in detail on the extent to which they allow us to fingerprint and map proxies to their likely intended uses. We also analyze 17,000 clients using a novel proxy location technique based on traceroutes of the responses to TCP connection establishment requests, which provides additional clues regarding the purpose of the identified web proxies. Overall, we see 14% of Netalyzr-analyzed clients with results that suggest the presence of web proxies.

Nicholas Weaver, Christian Kreibich, Martin Dam, Vern Paxson
Towards an Automated Investigation of the Impact of BGP Routing Changes on Network Delay Variations

Understanding fluctuations in network performance is important as many applications, including streaming, conferencing, gaming, and financial transactions, rely on timely delivery of data. Awareness of the effect of routing changes on network delays is key to this understanding, but research in this area is often based on empirical observations that cannot be easily extended to everyday network scenarios.

We study the relationship between BGP routing changes and round-trip times (RTTs), bringing several contributions: 1) an automated methodology that exploits state-of-the-art statistical methods to determine if a routing change caused a significant RTT variation; 2) an application of our methodology on massive RIPE RIS and RIPE Atlas data sets, showing its effectiveness in the wild (for example, at least 72.5% of the unique routing changes were consistently associated with an RTT increase – or decrease – in all their occurrences); 3) various a-posteriori analyses leading to interesting findings for several practical applications.

Massimo Rimondini, Claudio Squarcella, Giuseppe Di Battista
Peering at the Internet’s Frontier: A First Look at ISP Interconnectivity in Africa

In developing regions, the performance to commonly visited destinations is dominated by network latency, which in turn depends on the connectivity from ISPs in these regions to the locations that host popular sites and content. We take a first look at ISP interconnectivity between various regions in Africa and discover that many Internet paths that should remain local instead detour through Europe. We investigate the causes of these circuitous Internet paths and evaluate the benefits of increased peering and better cache proxy placement for reducing latency to popular Internet sites.

Arpit Gupta, Matt Calder, Nick Feamster, Marshini Chetty, Enrico Calandro, Ethan Katz-Bassett

Network Security and Privacy

Assessing DNS Vulnerability to Record Injection

The Domain Name System (DNS) is a critical component of the Internet infrastructure as it maps human-readable names to IP addresses. Injecting fraudulent mappings allows an attacker to divert users from intended destinations to those of an attacker’s choosing. In this paper, we measure the Internet’s vulnerability to DNS record injection attacks—including a new attack we uncover. We find that record injection vulnerabilities are fairly common—even years after some of them were first uncovered.

Kyle Schomp, Tom Callahan, Michael Rabinovich, Mark Allman
How Vulnerable Are Unprotected Machines on the Internet?

How vulnerable are unprotected machines on the Internet? Utilizing Amazon’s Elastic Compute Cloud (EC2) service and our own VMware ESXi server, we launched and monitored 18 Windows machines (Windows 2008, XP and 7) without anti-virus or firewall protection at two distinct locations on the Internet—in the cloud and on-premise. Some machines ran a wide-open configuration with all ports open and services emulated, while others had an out-of-the-box configuration with default ports and services. After launching, all machines received port scans within minutes and vulnerability probes within a couple of hours. Although all machines with wide-open configurations attracted exploitation attempts within a day, machines with out-of-the-box configurations observed very few vulnerability exploitations regardless of their location. From our months-long experiment we found that: a) attackers are constantly searching for victims; b) the more open ports and listening services a machine has, the more risk it is exposed to; c) brute-force logins are the most common type of attack; d) exploitations targeting vulnerabilities of software or operating systems are not widely observed.

Yuanyuan Grace Zeng, David Coffey, John Viega
A Closer Look at Third-Party OSN Applications: Are They Leaking Your Personal Information?

We examine third-party Online Social Network (OSN) applications for two major OSNs: Facebook and RenRen. These third-party applications typically gather users’ personal information from the OSN. We develop a measurement platform to study the interaction between OSN applications and fourth parties. We use this platform to study the behavior of 997 Facebook applications and 377 RenRen applications. We find that the Facebook and RenRen applications interact with hundreds of different fourth-party tracking entities. More worrisome, 22% of Facebook applications and 69% of RenRen applications provide users’ personal information to one or more fourth-party tracking entities.

Abdelberi Chaabane, Yuan Ding, Ratan Dey, Mohamed Ali Kaafar, Keith W. Ross
On the Effectiveness of Traffic Analysis against Anonymity Networks Using Flow Records

We investigate the feasibility of mounting a de-anonymization attack against Tor and similar low-latency anonymous communication systems by using NetFlow records. Previous research has shown that adversaries with the ability to eavesdrop in real time at a few Internet exchange points can effectively monitor a significant part of the network paths from Tor nodes to destination servers. However, the capacity of current networks makes packet-level monitoring at such a scale quite challenging. We hypothesize that adversaries could use less accurate but readily available monitoring facilities, such as Cisco’s NetFlow, to mount large-scale traffic analysis attacks. In this paper, we assess the feasibility and effectiveness of traffic analysis attacks against Tor using NetFlow data. We present an active traffic analysis technique based on perturbing the characteristics of user traffic at the server side, and observing a similar perturbation at the client side through statistical correlation. We evaluate the accuracy of our method using both in-lab testing and data gathered from a public Tor relay serving hundreds of users. Our method revealed the actual sources of anonymous traffic with 100% accuracy for the in-lab tests, and achieved an overall accuracy of 81.6% for the real-world experiments with a false positive rate of 5.5%.

Sambuddho Chakravarty, Marco V. Barbera, Georgios Portokalidis, Michalis Polychronakis, Angelos D. Keromytis
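
The final step of such an attack is a statistical match between the injected server-side pattern and candidate client-side flow series. A hedged sketch using plain Pearson correlation (our reading; the paper's statistics are more involved):

```python
import numpy as np

def best_match(server_pattern, client_flows):
    """Return the (client_id, correlation) pair whose NetFlow byte
    series best matches the injected server-side pattern."""
    scores = {cid: np.corrcoef(server_pattern, series)[0, 1]
              for cid, series in client_flows.items()}
    cid = max(scores, key=scores.get)
    return cid, scores[cid]

rng = np.random.default_rng(1)
pattern = rng.choice([0.0, 1.0], size=60)   # injected on/off pattern
flows = {f"client{i}": rng.normal(size=60) for i in range(50)}
flows["client7"] = 5 * pattern + rng.normal(scale=0.5, size=60)
print(best_match(pattern, flows))           # -> ('client7', ~0.96)
```
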

Poster Abstracts

Scaling Bandwidth Estimation to High Speed Networks

Existing bandwidth estimation tools fail to perform well at gigabit and higher network speeds. In this paper we study several sources of noise that must be overcome by these tools in high-speed environments and propose strategies for addressing them. We evaluate our Linux implementation on 1 and 10 Gbps testbed networks, showing that our strategies help significantly in scaling bandwidth estimation to high-speed networks.

Qianwen Yin, Jasleen Kaur, F. Donelson Smith
Scalable Accurate Consolidation of Passively Measured Statistical Data

Passive probes continuously collect a significant amount of traffic volume, and autonomously generate statistics on a large number of metrics. A common statistical output of a passive probe is the probability mass function (pmf). The need to consolidate several pmfs arises in two contexts: (i) whenever a central point collects and aggregates measurements from multiple disjoint vantage points, and (ii) whenever a local measurement processed at a single vantage point needs to be distributed over multiple cores of the same physical probe in order to cope with growing link capacity. Taking an experimental approach, we study both cases, assessing the impact of different consolidation strategies and obtaining general design and tuning guidelines.

Silvia Colabrese, Dario Rossi, Marco Mellia
A Needle in the Haystack - Delay Based User Identification in Cellular Networks

In this work, we discuss a technique for identifying users in cellular networks that exploits the effect that RRC state machine transitions have on the measured round-trip time of mobile devices. Our preliminary experiments, performed in a controlled environment, show that it is possible to leverage popular real-time messaging apps, such as Facebook, WhatsApp and Viber, to trigger an observable delay pattern on a user’s device, and use it to identify the device.

Marco V. Barbera, Simone Bronzini, Alessandro Mei, Vasile C. Perta
Understanding HTTP Traffic and CDN Behavior from the Eyes of a Mobile ISP

Today’s Internet is dominated by HTTP services and Content Delivery Networks (CDNs). Popular web services like Facebook and YouTube are hosted by highly distributed CDNs like Akamai and Google. Understanding this new, complex Internet scenario is paramount for network operators, to control the traffic on their networks and to improve the quality experienced by their customers, especially when something goes wrong. This paper studies the most popular HTTP services and their underlying hosting networks, through the analysis of a full week of HTTP traffic traces collected at an operational mobile network.

Pedro Casas, Pierdomenico Fiadino, Arian Bär
On Understanding User Interests through Heterogeneous Data Sources

User interests can be learned from multiple sources, each of them presenting only partial facets. We propose an approach to merge user information from disparate data sources to enable a more complete, enriched view of user interests. Using our approach, we show that merging different sources yields three times more interest categories in user profiles than any single source, and that merged profiles can capture many more common interests among a group of users, which is key to group profiling.

Samamon Khemmarat, Sabyasachi Saha, Han Hee Song, Mario Baldi, Lixin Gao
Nightlights: Entropy-Based Metrics for Classifying Darkspace Traffic Patterns

An IP darkspace is a globally routed IP address space with no active hosts. All traffic destined to darkspace addresses is unsolicited and often originates from network scanning or attacks. A sudden increase in different types of darkspace traffic can serve as an indicator of new vulnerabilities, misconfigurations or large-scale attacks. In our analysis we take advantage of the fact that darkspace traffic typically originates from processes that use randomly chosen addresses or ports (e.g. scanning) or target a specific address or port (e.g. DDoS, worm spreading). These behaviors induce a concentration or dispersion in the feature distributions of the resulting traffic aggregate and can be distinguished using entropy as a compact representation. Its lightweight, unambiguous, and privacy-compatible character makes entropy a suitable metric that can facilitate early-warning capabilities, operational information exchange among network operators, and comparison of analysis results among a network of distributed IP darkspaces.

Tanja Zseby, Nevil Brownlee, Alistair King, kc claffy
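
The entropy metric itself is compact to compute. A minimal sketch of normalized Shannon entropy over one traffic feature (e.g. destination ports within a time bin); the function name is ours:

```python
import math
from collections import Counter

def normalized_entropy(values):
    """0.0 for a fully concentrated distribution (single target),
    1.0 for a fully dispersed one (random scanning)."""
    counts = Counter(values)
    total = sum(counts.values())
    h = -sum((c / total) * math.log2(c / total) for c in counts.values())
    hmax = math.log2(len(counts)) if len(counts) > 1 else 1.0
    return h / hmax

print(normalized_entropy([80] * 1000))  # DDoS-like  -> 0.0
print(normalized_entropy(range(1000)))  # scan-like  -> 1.0
```
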
Distributed Active Measurement of Internet Queuing Delays

Despite growing link capacities, over-dimensioned buffers still cause hosts in today’s Internet to suffer from severe queuing delays (bufferbloat). While the maximum bufferbloat can exceed a few seconds, it is far less clear how often this maximum is hit in practice. This paper reports on our ongoing work to build a spatial and temporal map of Internet bufferbloat, describing a system based on distributed agents running on PlanetLab that aims at providing a quantitative answer to the above question.

Pellegrino Casoria, Dario Rossi, Jordan Augé, Marc-Olivier Buob, Timur Friedman, Antonio Pescapé
Backmatter
Metadata
Title
Passive and Active Measurement
Edited by
Michalis Faloutsos
Aleksandar Kuzmanovic
Copyright Year
2014
Publisher
Springer International Publishing
Electronic ISBN
978-3-319-04918-2
Print ISBN
978-3-319-04917-5
DOI
https://doi.org/10.1007/978-3-319-04918-2
