
2013 | Book

Algorithms and Architectures for Parallel Processing

13th International Conference, ICA3PP 2013, Vietri sul Mare, Italy, December 18-20, 2013, Proceedings, Part II

Edited by: Rocco Aversa, Joanna Kołodziej, Jun Zhang, Flora Amato, Giancarlo Fortino

Publisher: Springer International Publishing

Book series: Lecture Notes in Computer Science


About this book

This two-volume set, LNCS 8285 and 8286, constitutes the proceedings of the 13th International Conference on Algorithms and Architectures for Parallel Processing, ICA3PP 2013, held in Vietri sul Mare, Italy, in December 2013. The first volume contains 10 distinguished and 31 regular papers selected from 90 submissions, covering topics such as big data, multi-core programming and software tools, distributed scheduling and load balancing, high-performance scientific computing, parallel algorithms, parallel architectures, scalable and distributed databases, dependability in distributed and parallel systems, and wireless and mobile computing. The second volume consists of four sections including 35 papers from one symposium and three workshops held in conjunction with the ICA3PP 2013 main conference: 13 papers from the 2013 International Symposium on Advances of Distributed and Parallel Computing (ADPC 2013), 5 papers from the International Workshop on Big Data Computing (BDC 2013), 10 papers from the International Workshop on Trusted Information in Big Data (TIBiDa 2013), and 7 papers from the Workshop on Cloud-assisted Smart Cyber-Physical Systems (C-Smart CPS 2013).

Table of Contents

Frontmatter

2013 International Symposium on Advances of Distributed and Parallel Computing (ADPC 2013)

Frontmatter
On the Impact of Optimization on the Time-Power-Energy Balance of Dense Linear Algebra Factorizations

We investigate the effect that common optimization techniques for general-purpose multicore processors (either manual, compiler-driven, in the form of highly tuned libraries, or orchestrated by a runtime) exert on the performance-power-energy trade-off of dense linear algebra routines. The algorithm employed for this analysis is matrix inversion via Gauss-Jordan elimination, but the results from the evaluation carry beyond this particular operation and are representative of a variety of dense linear algebra computations, especially dense matrix factorizations.

Peter Benner, Pablo Ezzatti, Enrique Quintana-Ortí, Alfredo Remón
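
The driver algorithm above is matrix inversion via Gauss-Jordan elimination. As a point of reference, here is a minimal, unblocked NumPy sketch of that operation; the paper studies tuned, blocked, multithreaded variants, which this illustration does not attempt to reproduce.

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert a square matrix via unblocked Gauss-Jordan elimination."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])  # augment with identity
    for k in range(n):
        p = k + np.argmax(np.abs(M[k:, k]))  # partial pivoting for stability
        M[[k, p]] = M[[p, k]]
        M[k] /= M[k, k]                      # scale pivot row
        for i in range(n):
            if i != k:
                M[i] -= M[i, k] * M[k]       # eliminate column k elsewhere
    return M[:, n:]

# Quick check against NumPy's LAPACK-backed inverse.
A = np.random.rand(4, 4) + 4 * np.eye(4)     # diagonally dominant, invertible
assert np.allclose(gauss_jordan_inverse(A), np.linalg.inv(A))
```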
Torus-Connected Cycles: An Implementation-Friendly Topology for Interconnection Networks of Massively Parallel Systems

The number of nodes inside supercomputers is continuously increasing. As detailed in the TOP500 list, there are now systems that include more than one million cores; for instance, China's Tianhe-2. To cope with this huge number of cores, many interconnection networks have been proposed in the literature. However, in most cases, the proposed topologies have shown gaps preventing them from being actually implemented and manufactured. In this paper, we propose a new, implementation-friendly topology for interconnection networks of massively parallel systems: torus-connected cycles (TCC). Torus-based networks have proven very popular in recent years: the Fujitsu K and the Cray Titan are two examples of supercomputers whose interconnection networks are based on the torus topology.

Antoine Bossard, Keiichi Kaneko
Optimization of Tasks Scheduling by an Efficacy Data Placement and Replication in Cloud Computing

Cloud computing systems are becoming an important platform for scientific applications. Data placement and task scheduling in a heterogeneous environment such as the cloud are difficult optimization problems. Scheduling and data placement are often highly correlated, yet existing approaches take only a few factors into account at a time, are most often adapted to medium-sized application data, and therefore do not scale. The objective of this work is to propose an optimization approach that combines effective data placement with task scheduling by replication in Cloud environments.

Esma Insaf Djebbar, Ghalem Belalem
A Normalization Scheme for the Non-symmetric s-Step Lanczos Algorithm

The Lanczos algorithm is among the most frequently used techniques for computing a few dominant eigenvalues of a large sparse non-symmetric matrix. When variants of this algorithm are implemented on distributed-memory computers, the synchronization time spent in computing dot products increasingly limits the parallel scalability. The goal of s-step algorithms is to reduce the harmful influence of dot products on the parallel performance by grouping several of these operations for joint execution, thus cutting synchronization time when using a large number of processes. This paper extends the non-symmetric s-step Lanczos method introduced by Kim and Chronopoulos (J. Comput. Appl. Math., 42(3), 357-374, 1992) by a novel normalization scheme. Compared to the unnormalized algorithm, the normalized variant improves numerical stability and reduces the possibility of breakdowns.

Stefan Feuerriegel, H. Martin Bücker
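
The s-step idea the abstract describes, grouping several dot products into one joint reduction, can be sketched with mpi4py (an assumption of this illustration; the paper's own normalization scheme is not reproduced here):

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD

def grouped_dot_products(local_vs, local_w):
    """Compute s dot products <v_i, w> with a single global reduction.

    Classical Lanczos pays one Allreduce (a global synchronization) per
    dot product; the s-step reformulation batches them. Each process
    passes its local slices of the vectors.
    """
    partial = np.array([v @ local_w for v in local_vs])  # s local partial sums
    result = np.empty_like(partial)
    comm.Allreduce(partial, result, op=MPI.SUM)          # one sync for all s
    return result
```

Run under MPI (e.g. `mpiexec -n 4 python script.py`); the point is that s inner products cost one synchronization instead of s.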
Efficient Hybrid Breadth-First Search on GPUs

Breadth-first search (BFS) is a basic algorithm for graph processing. It is particularly important because many graph-processing algorithms use BFS as a subroutine. Recently, large-scale graphs have been used in various fields, and there is a growing need for efficient approaches to processing them. In the present paper, we present a hybrid BFS implementation on a GPU for efficient traversal of a complex network, achieving a speedup of up to 29x over a previous GPU implementation. We also extended the implementation to GPUs on a distributed-memory system; it achieved 117.546 GigaTEPS on a 256-node HA-PACS cluster with 1,024 NVIDIA M2090 GPUs and was ranked 39th on the June 2013 Graph500 list.

Takaaki Hiragushi, Daisuke Takahashi
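
Hybrid BFS commonly refers to direction-optimizing BFS, which switches between top-down frontier expansion and bottom-up parent search. A plain-Python sketch of that switching logic follows; the paper's GPU implementation uses device-specific data structures and tuning not shown here, and the switch threshold below is a generic guess.

```python
def hybrid_bfs(adj, source, alpha=4):
    """Direction-optimizing BFS on an undirected graph.

    adj: dict node -> list of neighbours. Runs top-down while the frontier
    is small, switching to bottom-up (unvisited vertices scan for a parent
    in the frontier) when the frontier's edge count grows large.
    """
    dist = {source: 0}
    frontier = {source}
    level = 0
    while frontier:
        unvisited = [v for v in adj if v not in dist]
        frontier_edges = sum(len(adj[u]) for u in frontier)
        if unvisited and frontier_edges > alpha * len(unvisited):
            # Bottom-up step: cheaper when the frontier is very large.
            nxt = {v for v in unvisited if any(u in frontier for u in adj[v])}
        else:
            # Top-down step: expand the frontier along its out-edges.
            nxt = {v for u in frontier for v in adj[u] if v not in dist}
        level += 1
        for v in nxt:
            dist[v] = level
        frontier = nxt
    return dist

adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(hybrid_bfs(adj, 0))  # {0: 0, 1: 1, 2: 1, 3: 2}
```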
Adaptive Resource Allocation for Reliable Performance in Heterogeneous Distributed Systems

The rapid development of distributed systems has triggered the emergence of many new applications such as Cloud applications. Satisfaction with these systems' services is an important indicator that reflects the quality of IT resource management. In this paper, we address a reliability issue in the context of resource allocation that aims to improve the performance of distributed systems. We propose a heuristic scheduling approach that integrates different mapping and queuing strategies into the allocation policy to suitably match tasks and resources. Dynamic resource discovery and task classification are incorporated into the heuristic scheduling in pursuit of reliable decisions. Simulation experiments show that our approach achieves better response time and utilization than other heuristic approaches.

Masnida Hussin, Azizol Abdullah, Shamala K. Subramaniam
Adaptive Task Size Control on High Level Programming for GPU/CPU Work Sharing

When sharing work among GPUs and CPU cores on GPU-equipped clusters, keeping the load balanced across these heterogeneous computing resources is a critical issue. We have been developing a run-time system for this problem on the PGAS language XcalableMP-dev/StarPU [1]. Through this development, we found that adaptive load balancing is necessary for GPU/CPU work sharing to achieve the best performance on various application codes.

In this paper, we enhance our language system XcalableMP-dev/StarPU with a new feature that dynamically controls the task size assigned to these heterogeneous resources during application execution. Performance evaluation on several benchmarks confirms that the proposed feature works correctly and that heterogeneous work sharing provides up to about 40% higher performance than GPU-only execution, even for relatively small problem sizes.

Tetsuya Odajima, Taisuke Boku, Mitsuhisa Sato, Toshihiro Hanawa, Yuetsu Kodama, Raymond Namyst, Samuel Thibault, Olivier Aumage
Robust Scheduling of Dynamic Real-Time Tasks with Low Overhead for Multi-Core Systems

Real-time embedded systems often require adaptiveness and robustness, because their interactions with physical environments dynamically change workloads. Multi-core chips are becoming an ideal hardware candidate for such environments, since each carries two or more cores on a single die and has the potential to provide execution parallelism as well as better performance at low cost. Parallelism, on the other hand, necessitates complex analysis of computation problems, such as task scheduling, while improving the realization of adaptive controls. Pfair is an optimal scheduling algorithm that can fully utilize all cores in the system, but it incurs excessive scheduling overheads which, in turn, diminish its practicality in embedded systems. To mitigate this problem, previous work proposed the hybrid partitioned-global Pfair (HPGP) scheduler, which significantly reduces the number of task migrations and global scheduling points by performing global scheduling only when absolutely necessary, while still achieving full processor utilization. In this paper, the HPGP scheduler is further extended to support adaptive control of dynamic real-time task systems. Experimental evaluations show that the extended HPGP successfully handles dynamic task systems, making it suitable for embedded real-time systems.

Sangsoo Park
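
For readers unfamiliar with Pfair, its core notion is the lag between a task's ideal fluid allocation and its actual allocation. The toy scheduler below tracks lag and runs the furthest-behind tasks each slot; it is a didactic simplification (real Pfair uses pseudo-deadline priorities and tie-breaks, and HPGP adds the partitioned-global hybrid), with all names hypothetical.

```python
def toy_pfair(tasks, cores, slots):
    """Run the 'cores' furthest-behind tasks in each time slot.

    tasks: dict name -> weight (utilization in (0, 1]). lag = ideal fluid
    allocation (weight * elapsed time) minus quanta actually received.
    """
    alloc = {name: 0 for name in tasks}
    schedule = []
    for t in range(1, slots + 1):
        lag = {name: w * t - alloc[name] for name, w in tasks.items()}
        running = sorted(lag, key=lag.get, reverse=True)[:cores]
        for name in running:
            alloc[name] += 1                 # one quantum per slot per core
        schedule.append(sorted(running))
    return schedule

# Three tasks with total utilization 1.75 on 2 cores.
print(toy_pfair({"A": 0.5, "B": 0.5, "C": 0.75}, cores=2, slots=8))
```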
A Routing Strategy for Inductive-Coupling Based Wireless 3-D NoCs by Maximizing Topological Regularity

Inductive coupling is a 3D integration technique that can stack more than three known-good dies in a System-in-Package (SiP) without wire connections. Making the best use of this wireless property, such 3D ICs offer great flexibility to customize the number of processor chips, SRAM chips, and DRAM chips in a SiP after chip fabrication.

In this work, we design a deadlock-free routing protocol for such 3D chips, in which each chip has a different Network-on-Chip (NoC) topology. We classify the NoC of each chip as either a 2D mesh or an irregular structure in order to apply a custom routing algorithm to each topology: in mesh topologies, X-Y routing is used for good traffic distribution, while Up*/Down* routing is applied in irregular topologies to reduce path hops. Evaluation results show that the average hop count can be improved by up to 11.8% under uniform traffic and by up to 6.1% under neighbor traffic.

Daisuke Sasaki, Hao Zhang, Hiroki Matsutani, Michihiro Koibuchi, Hideharu Amano
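
X-Y routing, the scheme used on the mesh-structured chips, is simple enough to sketch: route fully in the X dimension, then in Y, which avoids the turns that cause deadlock in a mesh. The Up*/Down* half of the paper's protocol is not shown.

```python
def xy_route(src, dst):
    """Dimension-order (X-Y) routing in a 2D mesh.

    Correct the X coordinate first, then Y. Forbidding Y-to-X turns is
    what makes the scheme deadlock-free on a mesh.
    """
    (x, y), (dx, dy) = src, dst
    path = [(x, y)]
    while x != dx:
        x += 1 if dx > x else -1
        path.append((x, y))
    while y != dy:
        y += 1 if dy > y else -1
        path.append((x, y))
    return path

print(xy_route((0, 0), (2, 3)))  # all X hops before any Y hop
```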
Semidistributed Virtual Network Mapping Algorithms Based on Minimum Node Stress Priority

Network virtualization has been regarded as a fundamental paradigm for mitigating the ossification of the current network. In a virtualization-enabled networking infrastructure, numerous diverse virtual networks (VNs) can coexist on a shared physical substrate. In this regard, an important challenge is the allocation of substrate network resources to instantiate multiple VNs. To address this challenge, we propose a semidistributed approach and a balanced VN assignment procedure focused on balancing the substrate node stress. Numerical experiments show that the proposed approach performs better in terms of node stress and message exchanges.

Yi Tong, Zhenmin Zhao, Zhaoming Lu, Haijun Zhang, Gang Wang, Xiangming Wen
Scheduling Algorithm Based on Agreement Protocol for Cloud Systems

Task scheduling algorithms have a huge impact on how users' requests are handled and executed in the data centers that serve a Cloud system. A problem very close to industry is the ability to estimate costs, especially when switching from one provider to another. In this paper we introduce an agreement-based scheduling algorithm aimed at providing an adaptive fault-tolerant system. For the agreement protocol we propose a 3-tier structure of resources (hosts and VMs), and then describe an adaptive mechanism for agreement establishment. The scheduling algorithm considers workload distribution, resource heterogeneity, transparency, and adaptability, and can easily be extended by combining it with other scheduling algorithms. Based on simulation experiments, we conclude that an agreement-based algorithm improves both scheduling in the Cloud and the mapping of SLAs at lower levels, possibly ensuring the same cost across data centers belonging to different providers.

Radu-Ioan Tutueanu, Florin Pop, Mihaela-Andreea Vasile, Valentin Cristea
Parallel Social Influence Model with Levy Flight Pattern Introduced for Large-Graph Mining on Weibo.com

With a suitable method for ranking user influence in a micro-blogging service, we can identify influential individuals who make information reach large populations. Here a novel parallel social influence model is proposed to address these challenges. In this paper, we first propose impact factors named Social Network Centricity and Weibo Heat Trend, and describe a general algorithm named ActionRank that calculates user influence based on these factors and the user-weibo behavior graph. Second, we introduce the Levy flight pattern into ActionRank, since the random long-distance jumping phenomenon and the power-law distribution of retweet cascade hops on Weibo.com match its characteristics. Third, a parallel ActionRank is implemented with MapReduce for large-scale graphs. Experimental results demonstrate that ActionRank with the Levy flight pattern outperforms other algorithms and show the consistency of parallel ActionRank on datasets ranging from 20M to 1100M edges.

Benbin Wu, Jing Yang, Liang He
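
As a rough intuition for the Levy flight pattern mentioned above, the following Monte-Carlo sketch scores nodes by visit frequency under a random walk whose jump lengths are power-law distributed. This is a hypothetical simplification; ActionRank's actual factors (Social Network Centricity, Weibo Heat Trend) and its MapReduce formulation are not reproduced.

```python
import random

def levy_walk_scores(adj, steps=100_000, jump_p=0.15, alpha=1.5, max_hops=10):
    """Influence scores as visit frequencies of a Levy-flavoured walk.

    With probability jump_p the walker takes a multi-hop jump whose hop
    count follows a truncated power law, mimicking long-distance retweet
    cascades; otherwise it follows a single edge.
    """
    nodes = list(adj)
    visits = dict.fromkeys(nodes, 0)
    cur = random.choice(nodes)
    for _ in range(steps):
        hops = (min(int(random.paretovariate(alpha)), max_hops)
                if random.random() < jump_p else 1)
        for _ in range(hops):
            # Restart at a random node when stuck on a dangling vertex.
            cur = random.choice(adj[cur]) if adj[cur] else random.choice(nodes)
        visits[cur] += 1
    total = sum(visits.values())
    return {v: c / total for v, c in visits.items()}
```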
Quality Control of Massive Data for Crowdsourcing in Location-Based Services

Crowdsourcing has become a prospective paradigm for commercial purposes in the past decade, since it is based on a simple but powerful concept: virtually anyone has the potential to contribute valuable information, which brings benefits such as low cost and high immediacy, particularly in location-based services (LBS). On the other hand, many problems in crowdsourcing remain to be solved. For example, quality control for crowdsourcing systems has been identified as a significant challenge, including how to handle massive data efficiently and how to identify poor-quality content in workers' submissions. In this paper, we put forward an approach to control crowdsourcing quality by evaluating workers' performance according to their submitted content. Our experiments demonstrate the effectiveness and efficiency of the approach.

Gang Zhang, Haopeng Chen

International Workshop on Big Data Computing (BDC 2013)

Frontmatter
Towards Automatic Generation of Hardware Classifiers

Nowadays, in a broad range of application areas, daily data production has reached unprecedented levels. This data originates from multiple sources, such as sensors, social media posts, and digital pictures and videos. The technical and scientific issues related to this data boom have been designated the "Big Data" challenges. To deal with big data analysis, innovative algorithms and data mining tools are needed in order to extract information and discover knowledge from the continuously growing data. In most data mining methods, data volume and variety directly impact the computational load. In this paper we illustrate a hardware architecture for the decision tree predictor, a widely adopted machine learning algorithm. In particular, we show how to automatically generate a hardware implementation of the predictor module that provides better throughput than available software solutions.

Flora Amato, Mario Barbareschi, Valentina Casola, Antonino Mazzeo, Sara Romano
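
The essence of turning a trained decision tree into hardware is unrolling it into combinational logic. The sketch below emits a Verilog-style nested conditional from a tree encoded as plain dictionaries; the paper's actual generator and its input/output formats are not described in the abstract, so the encoding here is an illustrative assumption.

```python
def tree_to_verilog(node, indent="  "):
    """Emit a Verilog-style nested conditional from a decision tree.

    node: either a class label (leaf) or a dict
    {"feat": index, "thr": threshold, "lo": subtree, "hi": subtree}.
    """
    if not isinstance(node, dict):                      # leaf: assign class
        return f"{indent}out = {node};\n"
    code = f"{indent}if (x[{node['feat']}] < {node['thr']}) begin\n"
    code += tree_to_verilog(node["lo"], indent + "  ")
    code += f"{indent}end else begin\n"
    code += tree_to_verilog(node["hi"], indent + "  ")
    code += f"{indent}end\n"
    return code

tree = {"feat": 0, "thr": 10,
        "lo": 0,
        "hi": {"feat": 2, "thr": 7, "lo": 1, "hi": 0}}
print(tree_to_verilog(tree))
```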
PSIS: Parallel Semantic Indexing System - Preliminary Experiments

In this paper, we address the problem of defining a semantic indexing technique based on RDF triples. In particular, we define algorithms for: i) clustering semantically similar RDF triples; ii) inserting, deleting, and searching in a k-d-tree-based semantic tree built on such clusterings; and iii) a parallel implementation of the search algorithms. Preliminary experiments run on a grid-based parallel machine are presented and discussed, showing the performance of the proposed system.

Flora Amato, Francesco Gargiulo, Vincenzo Moscato, Fabio Persia, Antonio Picariello
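
A k-d tree over numeric feature vectors, the classic structure the semantic tree builds on, can be sketched briefly. The vector encoding of clustered RDF triples is assumed here, not taken from the paper; the independent subtree probes in the search are what lends itself to the parallelization the authors describe.

```python
def build_kdtree(points, depth=0):
    """Classic k-d tree over equal-length numeric tuples."""
    if not points:
        return None
    axis = depth % len(points[0])                # cycle through dimensions
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid],
            "left": build_kdtree(points[:mid], depth + 1),
            "right": build_kdtree(points[mid + 1:], depth + 1)}

def nearest(node, query, depth=0, best=None):
    """Nearest-neighbour search returning (squared distance, point)."""
    if node is None:
        return best
    d = sum((a - b) ** 2 for a, b in zip(node["point"], query))
    if best is None or d < best[0]:
        best = (d, node["point"])
    axis = depth % len(query)
    diff = query[axis] - node["point"][axis]
    near, far = ("left", "right") if diff < 0 else ("right", "left")
    best = nearest(node[near], query, depth + 1, best)
    if diff ** 2 < best[0]:                      # splitting plane within radius
        best = nearest(node[far], query, depth + 1, best)
    return best

tree = build_kdtree([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
print(nearest(tree, (9, 2)))                     # (2, (8, 1))
```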
Network Traffic Analysis Using Android on a Hybrid Computing Architecture

Nowadays more and more smartphone applications use an Internet connection, generating traffic that is complex and voluminous from the analysis point of view. Due to mobility and resource limitations, classical approaches to traffic analysis are no longer suitable. Furthermore, the most widespread mobile operating systems, such as Android, do not provide facilities for this task. Novel approaches have been presented in the literature in which traffic analysis is executed in hardware using the Decision Tree classification algorithm. Although they have proven effective in accelerating the classification process, they typically lack integration with the rest of the system. In order to address this issue, we propose a hybrid computing architecture that enables communication between the Android OS and a traffic analysis hardware accelerator coexisting on the same chip. To this aim, we provide an Android OS port to the Xilinx Zynq architecture, composed of a dual-core ARM-based processor integrated with FPGA cells, and define a technique to realize the connection with programmable logic components.

Mario Barbareschi, Antonino Mazzeo, Antonino Vespoli
Online Data Analysis of Fetal Growth Curves

Fetal growth curves are considered a critical instrument in prenatal medicine for an appropriate evaluation of fetal well-being. Many factors affect fetal growth, including physiological and pathological variables; therefore each particular population should have its own reference charts in order to provide the most accurate fetal assessment. A large variety of reference charts is described in the literature, but they are up to five decades old and consider hospital-based samples, so they are not suitable for the current population; furthermore, they do not address ethnicity, an important aspect to take into account. Starting from a detailed analysis of the limitations that characterize the currently adopted reference charts, the paper presents a new method, based on multidimensional analysis, for creating dynamic and customized fetal growth curves. A preliminary implementation based on open-source software shows why Big Data techniques are essential to solve the problem.

Mario A. Bochicchio, Antonella Longo, Lucia Vaira, Sergio Ramazzina
A Practical Approach for Finding Small {Independent, Distance} Dominating Sets in Large-Scale Graphs

Suppose that in a network, a node can dominate (or cover, monitor, etc.) its neighbor nodes. An interesting question asks for a minimum set of nodes that dominate all the other nodes. This is known as the minimum dominating set problem. A natural generalization assumes that a node can dominate nodes within a distance R ≥ 1, called the minimum distance dominating set problem. On the other hand, if the distance between any two nodes in the dominating set must be at least z ≥ 1, then the problem is known as the minimum independent dominating set problem. This paper considers finding a minimum distance-R independence-z dominating set for arbitrary R and z, which has applications in facility location, Internet monitoring, and others. We show a practical approach. Empirical studies show that it is usually very fast and quite accurate, thus suitable for Big Data analysis. Generalizations to directed graphs, edge lengths, and multi-domination are also discussed.

Liang Zhao, Hiroshi Kadowaki, Dorothea Wagner
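
As a baseline in the spirit of this practical approach, a greedy heuristic repeatedly picks the node covering the most still-uncovered nodes within R hops. The sketch below handles only the distance-R part; the paper's algorithm also enforces the independence-z constraint and is engineered for much larger graphs.

```python
from collections import deque

def greedy_distance_dominating_set(adj, R=1):
    """Greedily pick nodes covering the most uncovered nodes within R hops."""
    def ball(v):                                  # nodes within R hops of v
        seen, queue = {v}, deque([(v, 0)])
        while queue:
            u, d = queue.popleft()
            if d < R:
                for w in adj[u]:
                    if w not in seen:
                        seen.add(w)
                        queue.append((w, d + 1))
        return seen

    uncovered, dom = set(adj), set()
    while uncovered:
        best = max(adj, key=lambda v: len(ball(v) & uncovered))
        dom.add(best)
        uncovered -= ball(best)
    return dom

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}  # a path graph
print(greedy_distance_dominating_set(adj, R=1))           # {1, 3}
```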

International Workshop on Trusted Information in Big Data (TIBiDa 2013)

Frontmatter
Robust Fingerprinting Codes for Database

The purchasing of customer databases, which is becoming more and more common, has led to a big problem: illegal distribution of purchased databases. An essential tool for identifying distributors is database fingerprinting. There are two basic problems in fingerprinting a database: designing the fingerprint and embedding it. For the first problem, we have proven that Non-Adaptive Group Testing, which is used to identify specific items in a large population, can be used for fingerprinting and is efficiently secure against collusion attacks. For the second problem, we have developed a solution that supports up to 262,144 fingerprints for 4,032 attributes and is secure against three types of attacks: attribute, collusion, and complementary. Moreover, an illegal distributor can be identified within 0.15 seconds.

Thach V. Bui, Binh Q. Nguyen, Thuc D. Nguyen, Noboru Sonehara, Isao Echizen
Heterogeneous Computing vs. Big Data: The Case of Cryptanalytical Applications

This work discusses the key opportunities introduced by Heterogeneous Computing for large-scale processing in the security and cryptography domain. Addressing the cryptanalysis of SHA-1 as a case-study, the paper analyzes and compares three different approaches based on Heterogeneous Computing, namely a hybrid multi-core platform, a computing facility based on a GPU architecture, and a custom hardware-accelerated platform based on reconfigurable devices. The case-study application provides important insights into the potential of the emerging Heterogeneous Computing trends, enabling unprecedented levels of computing power per used resource.

Alessandro Cilardo
Trusted Information and Security in Smart Mobility Scenarios: The Case of S2-Move Project

Smart cities and smart mobility represent two of the most significant real use-case scenarios in which there is an increasing demand for collecting, elaborating, and storing large amounts of heterogeneous data. In urban and mobility scenarios, issues like data trustworthiness and data and network security are of paramount importance when considering smart mobility services such as real-time traffic status, event reporting, fleet management, smart parking, etc. In this architectural paper, we present the main issues related to trustworthiness and security in the S2-Move project, whose contribution is to design and implement a complete architecture for providing soft real-time information exchange among citizens, public administrations, and transportation systems. We first describe the S2-Move architecture, the actors involved in the urban scenario, the communication among devices and the core platform, and a set of mobility services that will be used as a proof of the potential of the proposed approach. Then, considering both the architecture and the considered mobility services, we discuss the main trustworthiness and security issues that should be taken into account in the design of a secure and trusted S2-Move architecture.

Pietro Marchetta, Eduard Natale, Alessandro Salvi, Antonio Tirri, Manuela Tufo, Davide De Pasquale
A Linguistic-Based Method for Automatically Extracting Spatial Relations from Large Non-Structured Data

This paper presents a Lexicon-Grammar based method for the automatic extraction of spatial relations from Italian non-structured data. We used the software Nooj to build sophisticated local grammars and electronic dictionaries associated with the lexicon-grammar classes of the Italian intransitive spatial verbs (i.e. 234 verbal entries), and we applied them to the Italian text Il Codice da Vinci ('The Da Vinci Code', by Dan Brown) in order to parse the spatial predicate-argument structures. In addition, Nooj allowed us to automatically annotate (in XML format) the words (or sequences of words) that in each sentence (S) of the text play the 'spatial roles' of Figure (F), Motion (M) and Ground (G). Finally, the results of the experiment and the evaluation of this method are discussed.

Annibale Elia, Daniela Guglielmo, Alessandro Maisto, Serena Pelosi
IDES Project: A New Effective Tool for Safety and Security in the Environment

In the region of Campania in south-west Italy there is growing evidence, including a World Health Organization (WHO) study of the region, that the accumulation of waste, illegal and legal, urban and industrial, has contaminated soil, water, and air with a range of toxic pollutants including dioxins. An effective environmental monitoring system is an important tool for early detection of environmental violations. The IDES Project is a geo-environmental intelligence system developed by CIRA with the contribution of universities and other government bodies; it aims at implementing an advanced software and hardware platform for image, data, and document analysis in order to support law enforcement investigations. The main IDES modules are: an Imagery Analysis Module, to monitor land use and anthropogenic changes; an Environmental GIS Module, to fuse geographical and administrative information; an Epidemiological Domain Module; and a Semantic Search Module, to discover information in public sources such as blogs, social networks, forums, and newspapers. This paper focuses on the Semantic Search Module and aims to provide the greatest support to the detection of possible environmental crimes by collecting and analyzing documents from online public sources. People rarely denounce criminal activity to the authorities; on the other hand, many people expose the status of land degradation every day through blogs, forums, and social networks. In addition, journalists, driven by public interest, often document critical environmental issues. All this unstructured information is often lost due to the difficulty of collecting and analyzing it. The IDES Semantic Search Module is an innovative solution for aggregating the common uneasiness and thoughts of the people, able to transform and objectify public opinion into human sensors for environmental safety monitoring. In this paper we introduce the methods and technologies used in some case studies and, finally, we report some representative results, highlighting the innovative aspects of this applied research.

Francesco Gargiulo, G. Persechino, M. Lega, A. Errico
Impact of Biometric Data Quality on Rank-Level Fusion Schemes

Recent research has established the benefits of rank-level fusion in identification systems; however, these studies have not compared the advantages, if any, of rank-level fusion schemes over classical score-level fusion schemes. In the presence of low-quality biometric data, the genuine match score is expected to be low and thus an unreliable individual output. Conversely, the rank assigned to the genuine identity is believed to remain stable even with low-quality biometric data. However, to the best of our knowledge, the stability of ranks has not been deeply investigated. In this paper, we analyze changes in the rank assigned to the genuine identity in multi-modal scenarios when using actual low-quality data. The performance is evaluated on a subset of the Face and Ocular Challenge Series (FOCS) collection (the Good, Bad and Ugly database), composed of three frontal faces per subject for 407 subjects. Results show that a variant of the highest rank fusion scheme, which is robust to ties, performs better than the other non-learning-based rank-level fusion methods explored in this work. However, experiments demonstrate that score-level fusion results in better identification accuracy than existing rank-level fusion schemes.

Emanuela Marasco, Ayman Abaza, Luca Lugini, Bojan Cukic
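
Highest rank fusion itself is compact: each identity keeps its best (minimum) rank across matchers. The variant below breaks the resulting ties by rank sum, which is one plausible reading of a tie-robust variant, not necessarily the paper's rule.

```python
def highest_rank_fusion(rank_lists):
    """Fuse per-matcher ranks: best rank wins, rank sum breaks ties.

    rank_lists: one dict per matcher mapping identity -> rank (1 = best).
    Returns identities ordered from most to least likely match.
    """
    identities = set().union(*rank_lists)
    key = {i: (min(r[i] for r in rank_lists),   # highest (minimum) rank
               sum(r[i] for r in rank_lists))   # tie-breaker across matchers
           for i in identities}
    return sorted(identities, key=key.get)

face = {"alice": 1, "bob": 2, "carol": 3}
ocular = {"alice": 3, "bob": 1, "carol": 2}
print(highest_rank_fusion([face, ocular]))  # bob wins the tie on rank sum
```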
A Secure OsiriX Plug-In for Detecting Suspicious Lesions in Breast DCE-MRI

Up-to-date medical image processing is based on very sophisticated algorithms that often require a computational load not always available on conventional workstations. Moreover, algorithms are in continuous evolution, and hence clinicians are typically required to update their workstations periodically. The main objective of this paper is to propose a secure and versatile client-server architecture for providing these services at low cost. In particular, we developed a plug-in allowing OsiriX - a widespread medical image processing application dedicated to DICOM images coming from several kinds of equipment - to interact with a system for the automatic detection of suspicious lesions in breast DCE-MRI. The large amount of data and the privacy of the information flowing through the network require a flexible but comprehensive security approach. Following NIST guidelines, in our proposal data are transmitted over an SSL/TLS channel after an authentication and authorization procedure based on X.509 standard digital certificates associated with a 3072-bit RSA key pair. The authentication and authorization procedure is implemented through the services offered by the Java JAAS classes.

Gabriele Piantadosi, Stefano Marrone, Mario Sansone, Carlo Sansone
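
For the transport layer described (SSL/TLS with X.509 certificates), a minimal Python `ssl` sketch of a certificate-authenticated client connection looks as follows; the host and file names are hypothetical, and the actual plug-in uses Java JAAS rather than Python.

```python
import socket
import ssl

# Hypothetical host and file names, for illustration only.
HOST = "dicom-server.example.org"

context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile="ca.pem")
context.load_cert_chain(certfile="client.pem", keyfile="client.key")  # X.509 + RSA
context.minimum_version = ssl.TLSVersion.TLSv1_2

with socket.create_connection((HOST, 4443)) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        print(tls.getpeercert()["subject"])   # server's certificate identity
        tls.sendall(b"...application payload...")
```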
A Patient Centric Approach for Modeling Access Control in EHR Systems

In EHR systems, most of the data are confidential, concerning the health of patients. Therefore, it is necessary to provide a mechanism for access control. This has not only to ensure the confidentiality and integrity of the data, but also to allow the definition of security policies which reflect the need for privacy of the patient to whom the documents refer. In this paper we define a new Access Control (AC) model for EHR systems that allows the patient to define access policies based on her/his need for privacy. Our model starts from the RBAC model and extends it by adding characteristics and components to manage access policies in a simple and dynamic manner. It ensures patient privacy, and for this reason we refer to it as a patient-centric AC model.

Angelo Esposito, Mario Sicuranza, Mario Ciampi
A Privacy Preserving Matchmaking Scheme for Multiple Mobile Social Networks

Mobile social networks (MSNs) enable users to discover and interact with existing and potential friends both in cyberspace and in the real world. Although mobile social network applications bring us much convenience, privacy concerns have become the key security issue affecting their wide adoption. In this paper, we propose a novel hybrid privacy-preserving matchmaking scheme that can help users find their friends across multiple MSNs without disclosing their private information. Specifically, a user (called the initiator) can find his best matches among the candidates and exchange common attributes with them, while other candidates learn only the number of attributes they have in common with the initiator. Simulation results indicate that our scheme has good performance and scalability.

Yong Wang, Hong-zong Li, Ting-Ting Zhang, Jie Hou
Measuring Trust in Big Data

The huge technological progress we have witnessed in the last decade has enabled us to generate data at an unprecedented rate, leading to what has become the era of big data. However, big data is not just about generating, storing, and retrieving massive amounts of data. The focus should rather be on new analytical approaches that would enable us to extract actionable intelligence from this ocean of data. From a security standpoint, one of the main issues that need to be addressed is the trustworthiness of each source or piece of information. In this paper, we propose an approach to assess and quantify the trust level of both information sources and information items. Our approach leverages the vast literature on citation ranking, and we clearly show the benefits of adapting citation ranking mechanisms to this new domain, both in terms of scalability and in terms of quality of the results.

Massimiliano Albanese
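
Citation-ranking mechanisms for trust typically rely on mutual reinforcement: sources are trusted if they report trusted items, and items are trusted if trusted sources report them. The HITS-style fixed point below conveys that shape; the paper's precise formulas and normalizations are not reproduced.

```python
import numpy as np

def source_item_trust(reports, iters=50):
    """reports[s][i] = 1 if source s asserted item i; returns trust scores."""
    R = np.asarray(reports, dtype=float)
    src = np.ones(R.shape[0])
    for _ in range(iters):
        item = R.T @ src                  # items vouched for by trusted sources
        item /= item.max()
        src = R @ item                    # sources that report trusted items
        src /= src.max()
    return src, item

src, item = source_item_trust([[1, 1, 0],
                               [1, 0, 1],
                               [0, 0, 1]])
print(src.round(2), item.round(2))
```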

Cloud-assisted Smart Cyber-Physical Systems (C-SmartCPS 2013)

Frontmatter
Agent-Based Decision Support for Smart Market Using Big Data

In goal-oriented problems, decision-making is a crucial aspect, aiming at enhancing the user's ability to make decisions. The application of agent-based decision aids in the e-commerce field should help customers make the right choice, while also giving vendors the possibility to predict the purchasing behavior of consumers. The capability of extracting value from data is a relevant issue when evaluating decision criteria, and it becomes more difficult as the volume and velocity of data increase. In this paper, agents are enabled to make decisions by accessing, in the Cloud, huge amounts of data collected from pervasive devices.

Alba Amato, Beniamino Di Martino, Salvatore Venticinque
Congestion Control for Vehicular Environments by Adjusting IEEE 802.11 Contention Window Size

Medium access control protocols should manage the highly dynamic nature of Vehicular Ad Hoc Networks (VANETs) and the variety of application requirements. Therefore, achieving a well-designed MAC protocol in VANETs is a challenging issue. The contention window is a critical element for handling medium access collisions in IEEE 802.11, and it strongly affects communication performance. This paper proposes a new contention window control scheme, called DBM-ACW, for VANET environments. Analysis and simulation results using OMNeT++ in urban scenarios show that DBM-ACW provides better overall performance than previous proposals, even with high network densities.

Ali Balador, Carlos T. Calafate, Juan-Carlos Cano, Pietro Manzoni
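
The abstract does not spell out the DBM-ACW rule, but contention window adaptation generally follows a multiplicative-increase pattern with a controlled decrease. The sketch below is a hypothetical density-aware stand-in, for illustration only.

```python
CW_MIN, CW_MAX = 15, 1023  # IEEE 802.11 contention window bounds (slots)

def adjust_cw(cw, collided, density):
    """Multiplicative increase on collision, density-aware decrease otherwise.

    The density-scaled floor is a made-up stand-in: denser networks keep a
    larger window instead of snapping back to CW_MIN as plain 802.11 does.
    """
    if collided:
        return min(2 * cw + 1, CW_MAX)
    target = min(CW_MIN * max(1, density // 10), CW_MAX)
    return max((cw + target) // 2, CW_MIN)

cw = CW_MIN
for collided in (True, True, True, False, False):
    cw = adjust_cw(cw, collided, density=40)
    print(cw)  # grows to 127, then eases toward the density-aware floor of 60
```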
QL-MAC: A Q-Learning Based MAC for Wireless Sensor Networks

WSNs are becoming an increasingly attractive technology thanks to the significant benefits they can offer to a wide range of application domains. Extending the system lifetime while preserving good network performance is one of the main challenges in WSNs. In this paper, a novel MAC protocol (QL-MAC) based on Q-learning is proposed. Thanks to a distributed learning approach, the radio sleep-wakeup schedule adapts to the network traffic load. Simulation results show that QL-MAC provides significant improvements in terms of network lifetime and packet delivery ratio with respect to standard MAC protocols. Moreover, the proposed protocol has a moderate computational complexity, making it suitable for practical deployment on currently available WSNs.

Stefano Galzarano, Antonio Liotta, Giancarlo Fortino
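
QL-MAC's learning core is the standard Q-learning update applied to sleep/wake decisions. A minimal sketch follows, with the state and reward encodings as stated assumptions rather than the protocol's actual design.

```python
import random
from collections import defaultdict

ACTIONS = ("sleep", "wake")
Q = defaultdict(lambda: {a: 0.0 for a in ACTIONS})  # state -> action values

def choose_action(state, epsilon=0.1):
    """Epsilon-greedy choice over the sleep/wake schedule."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[state][a])

def q_update(state, action, reward, next_state, lr=0.1, gamma=0.9):
    """Standard Q-learning step; reward trades energy saved vs. packets lost.

    A state could encode recent traffic observed in a node's neighbourhood;
    that encoding is hypothetical here.
    """
    best_next = max(Q[next_state].values())
    Q[state][action] += lr * (reward + gamma * best_next - Q[state][action])
```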
Predicting Battery Depletion of Neighboring Wireless Sensor Nodes

With a view to prolonging the duration of the wireless sensor network, many battery lifetime prediction algorithms run on individual nodes. If not properly designed, this approach may be detrimental and even accelerate battery depletion. Herein, we provide a comparative analysis of various machine-learning algorithms for offloading the energy-inference task to the most energy-rich nodes, relieving the nodes that are entering the critical state. Taken to its extreme, our approach may be used to divert the energy-intensive tasks to a monitoring station, enabling a cloud-based approach to sensor network management. Experiments conducted in a controlled environment with real hardware show that RSSI can be used to infer the state of a remote wireless node once it approaches the cutoff point. The ADWIN algorithm was used to smooth the input data and to help a variety of machine learning algorithms speed up and improve their prediction accuracy.

Roshan Kotian, Georgios Exarchakos, Decebal Constantin Mocanu, Antonio Liotta
TuCSoN on Cloud: An Event-Driven Architecture for Embodied / Disembodied Coordination

The next generation of computational systems is going to mix pervasive scenarios with cloud computing, with both intelligent and non-intelligent agents working as the reference component abstractions. A uniform set of MAS abstractions expressive enough to deal with both embodied and disembodied computation is required, in particular when dealing with the complexity of interaction. Along this line, in this paper we define an event-driven coordination architecture, along with a coherent event model, and test it upon the TuCSoN model and technology for MAS coordination.

Stefano Mariani, Andrea Omicini
Integrating Cloud Services in Behaviour Programming for Autonomous Robots

This paper introduces CLEPTA, an extension to the PROFETA robotic programming framework for the integration of cloud services in developing the software for autonomous robots. CLEPTA provides a set of basic classes, together with a software architecture, that helps the programmer specify the invocation of cloud services in the programs handling the robot's behaviour; this feature allows designers (i) to execute computation-intensive algorithms and (ii) to include, in the robot's behaviour, additional features made available in the Cloud.

Fabrizio Messina, Giuseppe Pappalardo, Corrado Santoro
RFID Based Real-Time Manufacturing Information Perception and Processing

The timeliness, accuracy, and effectiveness of manufacturing information in manufacturing and business process management have become important constraints on business growth. RFID (Radio Frequency Identification) technology alone, with its inherent uncertainty, causes great difficulties for application systems. This paper focuses on the real-time perception and processing of manufacturing information. It achieves real-time manufacturing information acquisition and processing by combining RFID and sensor technology, using a Complex Event Processing (CEP) mechanism to realize sensor and RFID data fusion. First, an event processing framework that integrates sensors and RFID is given. Then, real-time acquisition and intelligent information processing models are introduced, including primitive event handling and complex event processing methods. Finally, the practicality of our method is verified by applying it to the management of a mold manufacturing enterprise.

Wei Song, Wenfeng Li, Xiuwen Fu, Yulian Cao, Lin Yang
Backmatter
Metadata
Title
Algorithms and Architectures for Parallel Processing
Edited by
Rocco Aversa
Joanna Kołodziej
Jun Zhang
Flora Amato
Giancarlo Fortino
Copyright year
2013
Publisher
Springer International Publishing
Electronic ISBN
978-3-319-03889-6
Print ISBN
978-3-319-03888-9
DOI
https://doi.org/10.1007/978-3-319-03889-6
