2010 | Book

Computational Science and Its Applications – ICCSA 2010

International Conference, Fukuoka, Japan, March 23-26, 2010, Proceedings, Part II

Edited by: David Taniar, Osvaldo Gervasi, Beniamino Murgante, Eric Pardede, Bernady O. Apduhan

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science

Table of Contents

Frontmatter

Workshop on Numerical Methods and Modeling/Simulations in Computational Science and Engineering (NMMS 2010)

The Use of Shadow Regions in Multi Region FDM: High Precision Cylindrically Symmetric Electrostatics

A previously reported multi-region FDM process, while producing very accurate solutions to the cylindrically symmetric electrostatic problem for points distant from region boundaries, has been found to have abnormally high errors near the region boundaries themselves. A solution to this problem has been found that extends the definition of a region boundary line to a two-line array. The resulting regions are called shadow regions, and it is shown that their use in the multi-region FDM process reduces the previous errors by roughly two orders of magnitude, thus significantly extending the precision attainable with the multi-region FDM method.

David Edwards Jr.
Fast Forth Power and Its Application in Inversion Computation for a Special Class of Trinomials

This contribution is concerned with an improvement of Itoh and Tsujii's algorithm for inversion in the finite field $GF(2^m)$ using a polynomial basis. Unlike the standard version of this algorithm, the proposed algorithm uses the fourth power and multiplication as its main operations. When the field is generated with a special class of irreducible trinomials, an analytical form for a fast bit-parallel fourth-power operation is presented. The proposal saves 1 $T_X$ compared with the classic approach, where $T_X$ is the delay of one 2-input XOR gate. Based on this result, the proposed algorithm for inversion achieves even faster performance, roughly improving the delay by $\frac{m}{2}T_X$, at the cost of a slight increase in space complexity compared with the standard version. To the best of our knowledge, this is the first work that proposes the use of the fourth power in the computation of multiplicative inverses using a polynomial basis and shows that it can be efficient.
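For context, the classic Itoh–Tsujii approach rests on the standard finite-field identity below (general background only, not the paper's fourth-power variant):

$a^{-1} = a^{2^m-2} = \left(a^{2^{m-1}-1}\right)^{2}, \qquad a \in GF(2^m),\ a \neq 0$

The factor $a^{2^{m-1}-1}$ is built from a short chain of repeated squarings and multiplications; according to the abstract, the proposed variant instead uses fourth powers and multiplications as its main operations.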

Yin Li, Gong-liang Chen, Jian-hua Li
Computational Study of Compressive Loading of Carbon Nanotubes

A reduced-order general continuum method is used to examine the mechanical behavior of single-walled carbon nanotubes (CNTs) under compressive loading and unloading conditions. Quasi-static solutions are sought in which the total energy of the system is minimized with respect to the spatial degrees of freedom. We provide detailed buckled configurations for four different types of CNTs and show that, among the cases studied, the armchair CNT has the strongest resistance to compressive loading. It is also shown that a buckled CNT with the zigzag lattice structure loses its structural strength significantly. The post-buckling unloading of the CNT demonstrates that, even after buckling occurs, the CNT can still return to its original state, making its use desirable in fields such as synthetic biomaterials, electromagnetic devices, or polymer composites.

Yang Yang, William W. Liou
plrint5d: A Five-Dimensional Automatic Cubature Routine Designed for a Multi-core Platform

plrint5d is an automatic cubature routine for evaluating five-dimensional integrals over a range of domains, including infinite domains. The routine is written in C++ and has been constructed by interfacing a three-dimensional routine with a two-dimensional routine. It incorporates an adaptive error control mechanism for monitoring the tolerance parameter used in calls to the inner routine, as well as multi-threading to maximize performance on modern multi-core platforms. Numerical results are presented that demonstrate the applicability of the routine across a wide range of integrand types and the effectiveness of the multi-threading strategy in achieving excellent speed-up.

Tiancheng Li, Ian Robinson
Development of Quadruple Precision Arithmetic Toolbox QuPAT on Scilab

When floating point arithmetic is used in numerical computation, cancellation of significant digits, round-off errors and information loss cannot be avoided. In some cases it becomes necessary to use multiple precision arithmetic; however some operations of this arithmetic are difficult to implement within conventional computing environments. In this paper we consider implementation of a quadruple precision arithmetic environment QuPAT (Quadruple Precision Arithmetic Toolbox) using the interactive numerical software package Scilab as a toolbox. Based on Double-Double (DD) arithmetic, QuPAT uses only a combination of double precision arithmetic operations. QuPAT has three main characteristics: (1) the same operator is used for both double and quadruple precision arithmetic; (2) both double and quadruple precision arithmetic can be used at the same time, and also mixed precision arithmetic is available; (3) QuPAT is independent of which hardware and operating systems are used. Finally we show the effectiveness of QuPAT in the case of analyzing a convergence property of the GCR($m$) method for a system of linear equations.
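As a brief illustration of the Double-Double idea on which QuPAT is based, the sketch below shows the standard error-free two-sum transformation and a simple DD addition built from it. This is generic background code; the function names are illustrative and are not QuPAT's Scilab API.

def two_sum(a, b):
    # Error-free transformation: s = fl(a + b) and a + b = s + e exactly,
    # assuming IEEE double precision arithmetic (Knuth's TwoSum).
    s = a + b
    b_virtual = s - a
    e = (a - (s - b_virtual)) + (b - b_virtual)
    return s, e

def dd_add(x_hi, x_lo, y_hi, y_lo):
    # Add two double-double numbers (x_hi + x_lo) and (y_hi + y_lo)
    # using only double precision operations.
    s, e = two_sum(x_hi, y_hi)
    e += x_lo + y_lo
    hi, lo = two_sum(s, e)  # renormalize into a (hi, lo) pair
    return hi, lo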

Tsubasa Saito, Emiko Ishiwata, Hidehiko Hasegawa
Agent Based Evacuation Model with Car-Following Parameters by Means of Cellular Automata

An agent-based evacuation model with car-following parameters for micro traffic, implemented by means of cellular automata, is proposed. The model features smart drivers who are concerned about the distance between their own car and the surrounding cars. Such drivers are called agents. Agents lead the other cars not only under ordinary conditions but also during evacuations. We incorporate the smart drivers and the agent drivers into the car-following parameters of the traffic model and apply it to an evacuation scenario. Experimental simulation results show that the agents are effective in reducing the evacuation time, and that this effectiveness increases as the number of agents increases.

Kohei Arai, Tri Harsono
A Cellular Automata Based Approach for Prediction of Hot Mudflow Disaster Area

A novel cellular-automata-based approach for prediction of hot mudflow disaster areas is proposed. A prediction model for hot mudflow based on fluid dynamics is proposed because hot mudflow spreads like a fluid, with velocity, viscosity and thermal flow parameters. We use a much simpler cellular automata approach, with some probabilistic parameters added, because it is relatively simple and performs well enough for visualizing fluid dynamics. We add new rules to represent hot mudflow movement, namely a moving rule, a precipitation rule, and an absorption rule.

The prediction results show high accuracy for elevation changes at the predicted points and their surrounding areas. We compare the predicted results with the digital elevation map derived from ASTER/DEM over several time periods to evaluate the prediction accuracy of the proposed method.

Kohei Arai, Achmad Basuki
Stochastic Optimization Approaches to Image Reconstruction in Electrical Impedance Tomography

In electrical impedance tomography (EIT), various image reconstruction algorithms have been used to compute the internal resistivity distribution of an unknown object from its electric potential data at the boundary. Mathematically, EIT image reconstruction is a nonlinear ill-posed inverse problem. This paper presents two stochastic optimization techniques, namely particle swarm optimization (PSO) and simultaneous perturbation stochastic approximation (SPSA), for solving the static EIT inverse problem. We summarize the simulation results for three algorithm forms: modified Newton-Raphson, particle swarm optimization, and simultaneous perturbation stochastic approximation.
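For reference, the canonical particle swarm update is given below as general background (the paper's exact formulation for the resistivity parameters may differ):

$v_i \leftarrow w\,v_i + c_1 r_1 (p_i - x_i) + c_2 r_2 (g - x_i), \qquad x_i \leftarrow x_i + v_i$

where $p_i$ is the best position found so far by particle $i$, $g$ is the best position found by the swarm, and $r_1, r_2$ are uniform random numbers in $[0,1]$.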

Chang-Jin Boo, Ho-Chan Kim, Min-Jae Kang, Kwang Y. Lee
Estimating Soil Parameters Using the Kernel Function

In this paper, a fast algorithm for estimating soil parameters is presented: after the kernel function is obtained, the soil parameters of the multilayer earth structure can be computed by analyzing it. The soil parameters estimated using the proposed method are in good agreement with the given earth structure.

Min-Jae Kang, Chang-Jin Boo, Ho-Chan Kim, Jacek M. Zurada
A Genetic Algorithm for Efficient Delivery Vehicle Operation Planning Considering Traffic Conditions

To ensure customer satisfaction, companies must deliver their products safely and within a fixed time. However, it is difficult to determine an inexpensive delivery route when there are many options. Therefore, an efficient vehicle delivery plan is necessary. Until now, studies of vehicle routes have generally focused on determining the shortest distance; however, vehicle capacity and traffic conditions are also important constraints. We propose a modified genetic algorithm that treats traffic conditions as the most important constraint, in order to establish an efficient delivery policy for companies. Our algorithm was tested on fourteen problems and produced efficient results.

Yoong-Seok Yoo, Jae-Yearn Kim
On the Possibilities of Multi-core Processor Use for Real-Time Forecast of Dangerous Convective Phenomena

We discuss the possibilities of using the new generation of desktop computers for the solution of one of the most important problems of weather forecasting: real-time prediction of thunderstorms, hail and rain storms. These phenomena are associated with the development of intensive convection and are considered the most dangerous weather conditions. The most promising approach to forecasting these phenomena is computer modeling. Low-dimensional models (1-D and 1.5-D) are the only ones that can be used effectively in local weather centers and airports for real-time forecasting. We have developed one such model, a 1.5-D convective cloud model with a detailed description of microphysical processes, and have investigated the possibilities of parallelizing it on multi-core processors with different numbers of cores. The results of the investigation show that the speed-up of the cloud evolution calculation can reach a value of 3 when 4 parallel threads are used.

Nikita Raba, Elena Stankova
Transformation, Reduction and Extrapolation Techniques for Feynman Loop Integrals

We address the computation of Feynman loop integrals, which are required for perturbation calculations in high energy physics, as they contribute corrections to the scattering amplitude for the collision of elementary particles. Results in this field can be used in the verification of theoretical models by comparison with data measured at colliders.

We have made numerical computation feasible for various types of one- and two-loop Feynman integrals by parametrizing the integral to be computed and extrapolating to the limit as the parameter introduced in the denominator of the integrand tends to zero. In order to handle additional singularities at the boundaries of the integration domain, the extrapolation can be preceded by a transformation and/or by a sector decomposition. With the goal of demonstrating the applicability of the combined integration and extrapolation methods to a wide range of problems, we give a survey of earlier work and present additional applications with new results. We aim for an automatic or semi-automatic approach, in order to greatly reduce the amount of analytic manipulation required before the numerical approximation.
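Schematically, the regularization-and-extrapolation strategy described above can be written as follows (an illustrative form only; the actual parametrization used by the authors may differ in detail):

$I(\varepsilon) = \int_{\mathcal{S}} \frac{f(x)}{(D(x) - i\varepsilon)^{\nu}}\,dx, \qquad I = \lim_{\varepsilon \to 0^{+}} I(\varepsilon)$

where the limit is estimated numerically by extrapolating a sequence of values $I(\varepsilon_k)$ with $\varepsilon_k \to 0$.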

Elise de Doncker, Junpei Fujimoto, Nobuyuki Hamaguchi, Tadashi Ishikawa, Yoshimasa Kurihara, Yoshimitsu Shimizu, Fukuko Yuasa

Workshop on PULSES V - Logical, Scientific and Computational Aspects of Pulse Phenomena in Transitions (PULSES 2010)

Application of Wavelet-Basis for Solution of the Fredholm Type Integral Equations

The paper deals with the application of periodic wavelets as basis functions for the solution of Fredholm-type integral equations. We examine a special case with a degenerate kernel and show a multiscale solution of an integral equation with a non-degenerate kernel. The benefits of applying periodic harmonic wavelets are discussed. The approximation error of the projection of the solution onto the space of periodized wavelets is analytically estimated.
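For readers unfamiliar with the terminology, a Fredholm integral equation of the second kind has the general form below (standard background; a degenerate kernel is one that separates into a finite sum):

$u(x) = f(x) + \lambda \int_a^b K(x,t)\,u(t)\,dt, \qquad K(x,t) = \sum_{i=1}^{n} \alpha_i(x)\,\beta_i(t) \ \text{(degenerate case)}$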

Carlo Cattani, Aleksey Kudreyko
Fractal Patterns in Prime Numbers Distribution

One of the main tasks in the analysis of the distribution of prime numbers is to single out hidden rules and regular features such as periodicity, typical patterns, trends, etc. The existence of fractal shapes, patterns and symmetries in the distribution of prime numbers is discussed.

Carlo Cattani
Efficient Energy Supply from Ground Coupled Heat Transfer Source

The increasing demand for energy from industrial production and urban facilities calls for new strategies for energy sources. In recent years an important problem has been to achieve energy storage, energy production and energy consumption that fulfill environmentally friendly expectations. Much attention has recently been devoted to renewable energies [1]. Among them, energy production from geothermal sources has become one of the most attractive topics for engineering applications. Ground-coupled heat transfer can provide an efficient energy supply for well-built construction. A few meters below the earth's surface, the underground maintains an approximately constant temperature throughout the year, allowing heat to be withdrawn in winter to warm the habitat and released in summer to cool it. Exploiting this principle, heat exchange is carried out with heat pumps coupled to vertical ground heat exchanger tubes, which allow heating and cooling of buildings with a single plant installation. This procedure ensures a high degree of productivity, with a moderate electric power requirement compared to its performance. In geographical areas characterized by specific geological conformations, such as the Viterbo area, which comprises active volcanic basins, it is difficult to use conventional geothermal plants: the area presents, at shallow depths, thermal groundwater aquifers with temperatures varying from 40 to 60 °C, so conventional geothermal heat pumps cannot be utilized [2]. In these areas the thermal aquifer can be exploited directly as a hot source, using vertical steel heat exchanger tubes, without altering the natural balance of the basin. Through the heat exchange that occurs between the water in the wells and the fluid that circulates inside the heat exchanger, the heat necessary to meet the needs can be extracted. The target of the project is to analyze in detail the plant for the exchange of heat with the thermal basin, defining the technical-scientific elements and verifying the exploitation of the heat in the building trade for housing and agricultural applications.

Maurizio Carlini, Sonia Castellucci
A Study of Nonlinear Time–Varying Spectral Analysis Based on HHT, MODWPT and Multitaper Time–Frequency Reassignment

Numerous approaches have been explored to improve the performance of time–frequency analysis and to provide a sufficiently clear time–frequency representation. Among them, three methods are noteworthy: the empirical mode decomposition (EMD) with Hilbert transform (HT), also termed the Hilbert–Huang Transform (HHT); the Hilbert spectrum based on the maximal overlap discrete wavelet packet transform (MODWPT); and the multitaper time–frequency reassignment proposed by Xiao and Flandrin. This study evaluates the performance of the three transforms mentioned above in estimating single- and multicomponent chirp signals, both with and without noise. The Rényi entropy is used to measure the effectiveness of each algorithm. The paper demonstrates that under these conditions MODWPT offers better time–frequency resolution and statistical stability than the HHT. The multitaper time–frequency reassigned spectrogram makes an excellent trade-off between time–frequency localization and local stationarity.
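The Rényi entropy used as the effectiveness measure has the standard form below (general definition; the order $\alpha$ chosen by the authors is not specified here):

$H_{\alpha} = \frac{1}{1-\alpha}\,\log_2 \sum_{i} p_i^{\alpha}, \qquad \alpha > 0,\ \alpha \neq 1$

applied to a normalized time–frequency distribution; lower entropy indicates a more concentrated, and hence sharper, representation.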

Pei-Wei Shan, Ming Li
Performance Analysis of Greenhouses with Integrated Photovoltaic Modules

Thanks to the DM 19.02.2007, the Italian government supported the development and expansion of solar photovoltaics in Italy. The feed-in tariff was a great success, and, as in Spain and Germany, large photovoltaic plants have been built, especially in the south of the country. The south of Italy has high irradiation (up to 1,700 equivalent hours), and agriculture is an important local economic resource. This aspect led to the concept of the solar greenhouse, a way to match the electricity production of PV modules with an improvement of the cultivation possibilities. A solar greenhouse includes integrated PV modules mounted on the south-facing roof; its design is new and still has to be evaluated in detail. In particular, important parameters such as luminance, the type of cultivation, and the temperature of the PV modules must be carefully analyzed to achieve a good match between the agricultural and electricity production purposes. In this paper, TRNSYS 16 has been used to simulate temperatures and humidity in the structure. The goal of the simulation was to determine the performance of the solar greenhouse over the year and to identify important construction parameters for the realization of a greenhouse that is efficient from all points of view.

Maurizio Carlini, Mauro Villarini, Stefano Esposto, Milena Bernardi
Modelling Cutaneous Senescence Process

In recent years, skin aging has become an area of increasing research interest. High-frequency ultrasound allows the in vivo appreciation of certain histological parameters and offers new characteristic markers which may quantify the severity of the cutaneous senescence process. This paper focuses on measuring the changes in skin thickness and dermis echogenicity, as part of the complex ageing process, over different age intervals. In particular, using a multiscale approach, we compute some parameters connected with the complexity (fractal structure) of skin ageing.

Maria Crisan, Carlo Cattani, Radu Badea, Paulina Mitrea, Mira Florea, Diana Crisan, Delia Mitrea, Razvan Bucur, Gabriela Checiches
Self-similar Hierarchical Regular Lattices

This paper deals with the topological-metric structure of a network made by a family of self-similar hierarchical regular lattices. We derive the basic properties and give a suitable definition of self-similarity on lattices. This concept of self-similarity is shown on some classical models (homothety) and more recent ones (Sierpinski tessellations and Husimi cacti). Both the metric and the geometric properties of the lattice are intrinsically defined.

Carlo Cattani, Ettore Laserra

Workshop on Software Engineering Processes and Applications (SEPA 2010)

Model Driven Software Development of Applications Based on Web Services

One of the main success factors of a business IT infrastructure is its capacity to cope with change. Many companies are defining their IT infrastructure based on Service-Oriented Architecture (SOA), which promises flexibility and efficiency in facing change by reusing and composing loosely coupled services. Because the current technological platforms used to build SOA systems were not originally designed for this kind of system, the majority of existing tools for service composition demand that the programmer know many technical implementation details. In this article we propose a conceptual modeling solution to both problems based on Model-Driven Architecture. Our solution proposes the specification of services and their reuse in terms of platform-independent conceptual models. These models are then transformed into platform-specific models by a set of model-to-model transformation rules, and finally the source code is generated by a set of model-to-text transformation rules. Our proposal has been realized in a tool built on the Eclipse Modeling Framework using the QVT and MOFScript model transformation languages.

Ricardo Rafael Quintero Meza, Leopoldo Zenaido Zepeda Sánchez, Liliana Vega Zazueta
An Aspect-Oriented Approach for Mobile Embedded Software Modeling

Embedded software development has recently become one of the most challenging fields in software engineering, since the advancement of embedded technologies has made our lives increasingly dependent on embedded systems and has increased the size and complexity of embedded software. Embedded software developers must pay attention not only to performance and size but also to extensibility and modifiability, in view of the rising complexity of embedded software. Besides, one of the characteristics of mobile embedded software is its context dependence with crosscutting concerns. Therefore, how to provide a systematic approach to modeling mobile embedded software, especially the crosscutting between sensors, context and reactive behavior, has become an emerging issue in current research. In this paper, we propose an aspect-oriented modeling process and notations extended from UML for mobile embedded software modeling, to deal with the context dependence among sensors and their corresponding reactive functionalities. For the aspect-oriented modeling process, an aspect modeling process is provided to separate the concerns of the mobile embedded software. Meanwhile, extended notations with a meta-model framework for class diagrams, sequence diagrams and state machine diagrams are introduced to facilitate aspect modeling from structural and behavioral perspectives, respectively. Moreover, a Female Anti-Robbery System is used as an illustrative example to demonstrate our proposed approach.

Yong-Yi FanJiang, Jong-Yih Kuo, Shang-Pin Ma, Wong-Rong Huang
Multi-variate Principal Component Analysis of Software Maintenance Effort Drivers

The global IT industry has already attained maturity, and the number of software systems entering the maintenance stage is steadily increasing. Further, the industry is also facing a definite shift from the traditional environment of legacy software to newer software. Software maintenance (SM) effort estimation has become one of the most challenging tasks owing to the wide variety of projects and the dynamics of the SM environment. Thus the real challenge lies in understanding the role of a large number of SM effort drivers. This work presents a multi-variate analysis of the effect of various drivers on maintenance effort using the Principal Component Analysis (PCA) approach. PCA allows reduction of the data into a smaller number of components and its alternative interpretation by analysing the data covariance. The analysis is based on an available real-life dataset of 14 drivers influencing the effort of 36 SM projects, as estimated by 6 experts.

Ruchi Shukla, A. K. Misra
Process Model Based Incremental Project Planning

It is widely accepted that process models can significantly increase the likelihood that a project will finish successfully. A very basic aspect of actually using a process model is to derive a project plan from it and keep it up to date. However, this is a tedious task, especially when each document might undergo numerous revisions. Thus, automation is needed. Current approaches based on workflows or change management systems do not provide incremental update mechanisms: process engineers have to define them by themselves, especially when they develop an organization-specific process model. On the other hand, incremental model transformations, known from the model-driven development domain, are too low level to be of practical use. In fact, suitable high-level model transformation languages are still a subject of research. In this paper we present a process language which integrates both process modeling languages and incremental model transformations.

Edward Fischer
A Software Metric for Python Language

There are many metrics for evaluating the quality of code written in different programming languages. However, no effort has been made to propose metrics for Python, which is an important and useful language, especially for software development for embedded systems. In the present work, we investigate the factors that are responsible for increasing the complexity of code written in the Python language. Accordingly, we propose a unified metric for this language. The practical applicability of the metric is demonstrated on a case study.

Sanjay Misra, Ferid Cafer
Certifying Algorithms for the Path Cover and Related Problems on Interval Graphs

A certifying algorithm for a problem is an algorithm that provides a certificate with each answer that it produces. The certificate is a piece of evidence that proves the answer has not been compromised by a bug in the implementation. A Hamiltonian cycle in a graph is a simple cycle in which each vertex of the graph appears exactly once. The Hamiltonian cycle problem is to test whether a graph has a Hamiltonian cycle. A path cover of a graph is a family of vertex-disjoint paths that covers all vertices of the graph. The path cover problem is to find a path cover of a graph with minimum cardinality. The scattering number of a noncomplete connected graph $G = (V, E)$ is defined by $s(G) = \max\{\omega(G-S) - |S| : S \subseteq V \text{ and } \omega(G-S) \geqslant 1\}$, in which $\omega(G-S)$ denotes the number of components of the graph $G-S$. The scattering number problem is to determine the scattering number of a graph. A recognition problem for graphs is to decide whether a given input graph has a certain property. To the best of our knowledge, most published certifying algorithms solve recognition problems for special classes of graphs. This paper presents $O(n)$-time certifying algorithms for the above three problems, namely the Hamiltonian cycle problem, the path cover problem, and the scattering number problem, on interval graphs, given a set of $n$ intervals with endpoints sorted. The certificates provided by our algorithms can be authenticated in $O(n)$ time.
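As a small worked example of the scattering number defined above (our illustration, not taken from the paper): for the path $P_4$ on vertices $v_1 v_2 v_3 v_4$, taking $S = \{v_2\}$ gives $\omega(P_4 - S) - |S| = 2 - 1 = 1$, and no other choice of $S$ yields a larger value, so $s(P_4) = 1$.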

Ruo-Wei Hung, Maw-Shang Chang
Design and Overhead Estimation of Device Driver Process

Conventional operating systems have device drivers as kernel modules or embedded objects. Therefore, the maturity of a device driver influences the reliability of the entire system. One method for improving reliability is to construct the device driver as a user process. A device driver process enhances the reliability of the operating system; however, it incurs a large overhead. In this paper, we propose a method for constructing device driver processes and for evaluating their overhead. This paper also shows that the overhead of a device driver process can be estimated.

Yusuke Nomura, Kouta Okamoto, Yusuke Gotoh, Yoshinari Nomura, Hideo Taniguchi, Kazutoshi Yokoyama, Katsumi Maruyama
Improvement of Gaze Estimation Robustness Using Pupil Knowledge

This paper presents an eye gaze estimation system that is robust across different users. Our method utilizes an IR camera mounted on glasses to allow the user to move. Pupil knowledge such as shape, size, location and motion is used, applied in order of priority. Pupil appearance (size, color and shape) is used as the first priority; when this step fails, the pupil is estimated based on its location as the second priority; when all previous steps fail, the pupil is estimated based on its motion as the last priority. The aim of the proposed method is to make the system usable by a wide range of users as well as to overcome problems associated with illumination changes and user movement. The proposed system is tested with several users of different races and nationalities, and the experimental results are compared with the well-known adaptive threshold and template matching methods. The proposed method shows good performance, robustness, accuracy and stability against illumination changes without any prior calibration.

Kohei Arai, Ronny Mardiyanto

Workshop on WEB 2.0 and Social Networks (Web2.0 2010)

Behaviour-Based Web Spambot Detection by Utilising Action Time and Action Frequency

Web spam is an escalating problem that wastes valuable resources, misleads people and can manipulate search engines into granting undeserved search rankings to promote spam content. Spammers have extensively used Web robots to distribute spam content within Web 2.0 platforms. We refer to these web robots as spambots; they are capable of performing human tasks such as registering user accounts as well as browsing and posting content. Conventional content-based and link-based techniques are not effective in detecting and preventing web spambots, as their focus is on spam content identification rather than spambot detection. We extend our previous research by proposing two action-based feature sets, known as action time and action frequency, for spambot detection. We evaluate our new framework against a real dataset containing spambots and human users and achieve an average classification accuracy of 94.70%.

Pedram Hayati, Kevin Chai, Vidyasagar Potdar, Alex Talevski
Towards the Definition of a Metamodel for the Conceptual Specification of Web Applications Based on Social Networks

The present work is done within the framework of Model-Driven Software Development. It proposes an initial strategy for obtaining a metamodel that captures the main elements (objects, actors, activities, subjects, relations, etc.) that characterize Social Network Web applications. It also includes the definition of a tool that allows the graphical editing of models for the mentioned applications, which serves as the basis for capturing the requirements of the main elements of the Social Network application. With these models, a general automatic code generation strategy for a Web 2.0 application is presented.

Constanza Hernández, Ricardo Quintero, Leopoldo Z. Sánchez
Semi-automatic Information Extraction from Discussion Boards with Applications for Anti-Spam Technology

Forums (or discussion boards) represent a huge information collection structured into different boards, threads and posts. The actual information entity of a forum is a post, which carries information about the author, the date and time of the post, the actual content, etc. This information is significant for a number of applications, such as gathering market intelligence and analyzing customer perceptions. However, automatically extracting this information from a forum is an extremely challenging task. There are several customized parsers designed for extracting information from a particular forum platform with a specific template (e.g. SMF or phpBB); however, the problem with this approach is that these parsers depend on the forum platform and the template used, which makes them unrealistic to use in practical situations. Hence, in this paper we propose a semi-automatic rule-based solution for extracting forum post information and inserting the extracted information into a database for the purpose of analysis. The key challenge with this solution is identifying extraction rules, which are normally specific to the forum platform and template. We therefore analyzed 72 forums to derive these rules and test the performance of the algorithm. The results indicate that we were able to extract all the required information from the SMF and phpBB forum platforms, which represent the majority of forums on the web.

Saeed Sarencheh, Vidyasagar Potdar, Elham Afsari Yeganeh, Nazanin Firoozeh
Factors Involved in Estimating Cost of Email Spam

This paper analyses existing research to identify all possible factors involved in estimating the cost of spam. The main motivation of this paper is to provide an unbiased estimation of spam costs. To that end, we first study the email spam lifecycle and identify all possible stakeholders. We then categorise the costs and study the impact on each stakeholder. This initial study will form the backbone of the real-time spam cost calculating engine that we are developing for Australia.

Farida Ridzuan, Vidyasagar Potdar, Alex Talevski
Spam 2.0: The Problem Ahead

Webspam is one of the most challenging problems faced by major search engines in the social computing arena. Spammers exploit weaknesses of major search engine algorithms to get their websites into the top 10 search results, which results in higher traffic and increased revenue. The development of web applications where users can contribute content has also increased spam, since many web applications such as blogging tools, CMSs, etc., are vulnerable to spam. Spammers have developed targeted bots that can create accounts on such applications, add content and even leave comments automatically. In this paper we introduce the field of webspam: what it refers to, how spambots are designed and propagated, and why webspam is becoming a big problem. We then present an experiment showing how spambots can be identified without using CAPTCHA. We aim to increase the general understanding of the webspam problem, which will assist web developers, software engineers and web engineers.

Vidyasagar Potdar, Farida Ridzuan, Pedram Hayati, Alex Talevski, Elham Afsari Yeganeh, Nazanin Firuzeh, Saeed Sarencheh

General Track on Information Systems and Technologies

Tag Recommendation for Flickr Using Web Browsing Behavior

Sharing photos with other people easily and effectively is enjoyable. To support this, a tag recommendation system for Flickr is developed in this paper. We assume that there are relations between a photo whose tags a user attempts to update and the webpages that the user has browsed. In our system, the Web browsing behavior of a user is exploited to suggest to the user not only candidate tags to be added to a photo in Flickr but also candidate tags to be deleted from it. We discuss how to implement the system and report experimental results that show its effectiveness.

Taiki Takashita, Tsuyoshi Itokawa, Teruaki Kitasuka, Masayoshi Aritsugi
USABAGILE_Web: A Web Agile Usability Approach for Web Site Design

This paper presents an agile methodology designed for the Web, used to design or re-engineer a software product by analyzing the product's architecture, developing new user interface prototypes, and testing them using detailed usability criteria. The proposed methodology represents an innovative approach to providing end-users with a product that is easy to understand and easy to use, based on modern user-machine interfaces. For this reason, the approach represents an important step towards understanding the inner mechanisms on which human-computer interaction is based.

Gladys Benigni, Osvaldo Gervasi, Francesco Luca Passeri, Tai-Hoon Kim
Combining CSP and Constraint-Based Mining for Pattern Discovery

A well-known limitation of many data mining methods is the huge number of patterns that are discovered: these large outputs hamper the individual and global analysis performed by the end-users of the data. That is why discovering higher-level patterns is an active research field. In this paper, we investigate the relationship between local constraint-based mining and constraint satisfaction problems, and we propose an approach to model and mine patterns combining several local patterns, i.e., patterns defined by n-ary constraints. The user specifies a set of n-ary constraints and a constraint solver generates the whole set of solutions. Our approach benefits from recent progress in mining local patterns by pushing to a solver on local patterns all local constraints that can be inferred from the n-ary ones. This approach enables us to model in a flexible way any set of constraints combining several local patterns. Experiments show the feasibility of our approach.

Mehdi Khiari, Patrice Boizumault, Bruno Crémilleux
Experience in Developing an Open Source Scalable Software Infrastructure in Japan

The Scalable Software Infrastructure for Scientific Computing (SSI) Project was initiated in November 2002, as a five year national project in Japan, for the purpose of constructing a scalable software infrastructure to replace the existing implementations of parallel algorithms in individual scientific fields. The project covered the following four areas: iterative solvers for linear systems, fast integral transforms, their effective implementation for high performance computers of various types, and joint studies with institutes and computer vendors, in order to evaluate the developed libraries for advanced computing environments.

An object-oriented programming model was adopted to enable users to write their parallel codes by just combining elementary mathematical operations. Implemented algorithms are selected from the viewpoint of scalability on massively parallel computing environments. The libraries are freely available via the Internet and are intended to be improved through feedback from users. Since the first announcement in September 2005, the codes have been downloaded and evaluated by thousands of users at more than 140 organizations around the world.

Akira Nishida
A New Formal Test Method for Networked Software Integration Testing

This paper considers integration testing for networked software that is built by assembling several distributed components in an interoperable manner. Using traditional single-automaton-based test approaches, we suffer from the state combinatorial explosion problem; moreover, several generated test cases may not be executable. This paper proposes a test method based on an automata net, which is an extension of communicating automata. The state/transition path (S/T-Path) is defined to describe the execution of the software under test. The test cases are constructed by combining atomic S/T-Paths and are all executable. The test cases are calculated from the local transition structures and the interaction procedure between components, so the state combinatorial explosion problem is not encountered. The generation of test cases for a particular piece of software and the benefits for the above problems are discussed. Results show that our method has better properties.

Shuai Wang, Yindong Ji, Wei Dong, Shiyuan Yang

General Track on Computational Methods, Algorithms and Scientific Application

Automatic History Matching in Petroleum Reservoirs Using the TSVD Method

History matching is an important inverse problem extensively used to estimate petrophysical properties of an oil reservoir by matching a numerical simulation to the reservoir's history of oil production. In this work, we present a method for the resolution of a history matching problem that aims to estimate the permeability field of a reservoir using the pressure and the flow rate observed in the wells. The reservoir simulation is based on a two-phase incompressible flow model. The method combines the truncated singular value decomposition (TSVD) and Gauss-Newton algorithms. The number of parameters to estimate depends on how many gridblocks are used to discretize the reservoir. In general, this number is large and the inverse problem is ill-posed. The TSVD method regularizes the problem and considerably decreases the computational effort necessary to solve it. To compute the TSVD we used the Lanczos method combined with numerical implementations of the derivative and of the adjoint formulation of the problem.
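As general background on the TSVD regularization mentioned above (the standard truncated-SVD solution of a linearized least-squares step; the paper's Lanczos-based computation is not reproduced here): given the SVD $J = \sum_i \sigma_i u_i v_i^{T}$ of the sensitivity matrix, the truncated solution retains only the $k$ largest singular values,

$\delta m_k = \sum_{i=1}^{k} \frac{u_i^{T} r}{\sigma_i}\, v_i$

where $r$ is the data residual; discarding the small $\sigma_i$ suppresses the noise amplification responsible for the ill-posedness.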

Elisa Portes dos Santos Amorim, Paulo Goldfeld, Flavio Dickstein, Rodrigo Weber dos Santos, Carolina Ribeiro Xavier
An Efficient Hardware Architecture from C Program with Memory Access to Hardware

To improve the performance and power consumption of a system-on-chip (SoC), software processes are often converted to hardware. However, to extract as much performance from the hardware as possible, memory access must be improved. In addition, the development period of the hardware has to be reduced because the life cycle of an SoC is commonly short. This paper proposes a design-level hardware architecture (semi-programmable hardware: SPHW) which is inserted into the path from C to hardware. On the SPHW, memory accesses and buffers are realized by software programming and by parameters, respectively. By using the SPHW, one can easily develop data processing hardware containing an efficient memory access controller at C-level abstraction. Compared with conventional approaches, the SPHW can reduce the development time significantly. The experimental results also show that the SPHW can be employed as the final product if the memory access latency is sufficiently hidden.

Akira Yamawaki, Seiichi Serikawa, Masahiko Iwane
A New Approach: Component-Based Multi-physics Coupling through CCA-LISI

A new problem in scientific computing is the merging of existing simulation models to create new, higher-fidelity combined models. This has been a driving force in climate modeling for nearly a decade, and fusion energy and space weather modeling are starting to integrate different sub-physics into single models. Through component-based software engineering, an interface supporting this coupling process provides a way to invoke a sub-model through the common interface that the top model uses, so that the coupled model becomes a higher-level model. In addition to allowing applications to switch among linear solvers, a linear solver interface is also needed for model coupling. A linear solver interface helps in creating solvers for an integrated multi-physics simulation that combines separate codes, and it allows each code's native, specialized solver to be used for the sub-problem corresponding to its physics sub-model. This paper presents a new approach to coupling multi-physics codes through a coupled solver and demonstrates it successfully on a coupled simulation solved implicitly.

Fang Liu, Masha Sosonkina, Randall Bramley
Efficient Algorithms for the 2-Center Problems

This paper achieves $O(n^3 \log\log n / \log n)$ time for the 2-center problems on a directed graph with non-negative edge costs under the conventional RAM model, where only arithmetic operations, branching operations, and random accessibility with $O(\log n)$ bits are allowed. Here $n$ is the number of vertices. This is a slight improvement on the best known complexity of those problems, which is $O(n^3)$. We further show that when the graph has unit edge costs, one of the 2-center problems can be solved in $O(n^{2.575})$ time.

Tadao Takaoka
A Genetic Algorithm for Integration Planning of Assembly Operation and Machine Layout

This study examines the integration of assembly operations and machine layouts. For this problem, we consider a scenario with multiple orders and build an optimization mathematical model for synchronous planning. In the study, we develop an optimization model that uses a genetic algorithm (GA) to find solutions and identify, by means of experimental design, the parameter values that best fit the GA. Finally, we apply the methodology to a case, and the result shows that the GA can effectively integrate assembly operations and machine layouts to find solutions.

C. J. Chiang, Z. H. Che, Tzu-An Chiang, Zhen-Guo Che

General Track on Advanced and Emerging Applications

Diagnosis of Learning Styles Based on Active/Reflective Dimension of Felder and Silverman’s Learning Style Model in a Learning Management System

Learner-centered education is important in both face-to-face and Web-based learning. Due to this importance, diagnosing the learning styles of students in web-based or web-enhanced educational settings is important as well. This paper presents the prediction of learning styles by monitoring learner interface interactions. A mathematics course run on a learning management system (Moodle) was monitored, and the learning styles of the learners were analyzed with respect to the active/reflective dimension of the Felder and Silverman Learning Styles Model. The data from learner actions were analyzed through literature-based automatic student modeling. The results from the Index of Learning Styles and the predicted learning styles were compared. For the active/reflective dimension, 79.6% precision was achieved.

Ömer Şimşek, Nilüfer Atman, Mustafa Murat İnceoğlu, Yüksel Deniz Arikan
Exploring the State of the Art in Adaptive Distributed Learning Environments

The one-size-fits-all approach is being replaced by an adaptive, personalized perspective in recently developed learning environments. This study looks at the need for personalization in e-learning systems and at the adaptivity and distribution features of adaptive distributed learning environments. Focusing on how personalization can be achieved in e-learning systems, the technologies used for building adaptive learning environments are briefly explained and evaluated. Some of these technologies are web services, multi-agent systems, the semantic web, and AI techniques such as case-based reasoning, neural networks and Bayesian networks used in intelligent tutoring systems. Finally, by discussing some adaptive distributed learning systems, an overall state of the art of the field is given, together with some future trends.

Birol Ciloglugil, Mustafa Murat Inceoglu
Analysis of an Implicitly Restarted Simpler GMRES Variant of Augmented GMRES

We analyze a Simpler GMRES variant of augmented GMRES with implicit restarting for solving nonsymmetric linear systems with small eigenvalues. The use of a shifted Arnoldi process in the Simpler GMRES variant for computing Arnoldi basis vectors has the advantage of not requiring an upper Hessenberg factorization, and this often leads to cheaper implementations. However, the use of a non-orthogonal basis has been identified as a potential weakness of the Simpler GMRES algorithm. Augmented variants of GMRES also employ non-orthogonal basis vectors, since approximate eigenvectors are added to the Arnoldi basis vectors at the end of a cycle, and if the approximate eigenvectors are ill-conditioned, this may have an adverse effect on the accuracy of the computed solution. This problem is the focus of our paper, where we analyze the shifted Arnoldi implementation of augmented GMRES with implicit restarting and compare its performance and accuracy with that based on the Arnoldi process. We show that augmented Simpler GMRES with implicit restarting involves a transformation matrix which leads to an efficient implementation, and we theoretically show that our implementation generates the same subspace as the corresponding GMRES variant. We describe various numerical tests indicating that, in cases where both variants are successful, our method based on Simpler GMRES maintains accuracy comparable to the augmented GMRES variant. The Simpler GMRES variants also perform better in terms of the computational time required.
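For context, standard GMRES builds an orthonormal Krylov basis via the Arnoldi relation (classical background, not the shifted variant analyzed in the paper):

$A V_m = V_{m+1} \bar{H}_m$

where $\bar{H}_m$ is an $(m+1) \times m$ upper Hessenberg matrix whose least-squares problem GMRES solves at each cycle; as the abstract notes, the Simpler GMRES variant avoids this Hessenberg factorization by using a shifted Arnoldi process, at the price of a non-orthogonal basis.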

Ravindra Boojhawon, Desire Yannick Tangman, Kumar Dookhitram, Muddun Bhuruth
On the Linearity of Cryptographic Sequence Generators

In this paper we show that the output sequences of the generalized self-shrinking generator are particular solutions of a binary homogeneous linear difference equation. In fact, all these sequences are just linear combinations of primary sequences weighted by binary coefficients. We show that, in addition to the output sequences of the generalized self-shrinking generator, the complete class of solutions of the corresponding binary homogeneous linear difference equation also includes other balanced sequences that are very suitable for cryptographic applications, as they have the same period and even greater linear complexity than the generalized self-shrinking sequences. Cryptographic parameters of all the above-mentioned sequences can be analyzed in terms of linear equation solutions.
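The binary homogeneous linear difference equations referred to above have the general form (standard notation; the specific characteristic polynomial associated with the generalized self-shrinking generator is not given here):

$z_{n+r} \oplus c_{r-1} z_{n+r-1} \oplus \cdots \oplus c_1 z_{n+1} \oplus c_0 z_n = 0, \qquad c_i \in GF(2)$

whose solution space over $GF(2)$ is spanned by a basis of primary sequences, so every output sequence is a binary linear combination of them.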

Amparo Fuster-Sabater, Oscar Delgado-Mohatar, Ljiljana Brankovic
Backmatter
Metadata
Title
Computational Science and Its Applications – ICCSA 2010
Edited by
David Taniar
Osvaldo Gervasi
Beniamino Murgante
Eric Pardede
Bernady O. Apduhan
Copyright year
2010
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-12165-4
Print ISBN
978-3-642-12164-7
DOI
https://doi.org/10.1007/978-3-642-12165-4
