
About This Book

The present stage of human civilization is the e-society, which is built on the achievements of information and communication technologies. It affects everyone, from ordinary mobile phone users to designers of high-quality industrial products, and every human activity, from medical care to state governance. The scientific community working in computer science and informatics is therefore under constant challenge; it has to solve newly emerging theoretical problems as well as find new practical solutions.

The fourth ICT Innovations Conference, held in September 2012 in Ohrid, Macedonia, was one of several worldwide forums where academics, professionals and practitioners presented their latest scientific results and development applications in the fields of high-performance and parallel computing, bioinformatics, human-computer interaction, security and cryptography, computer and mobile networks, neural networks, cloud computing, process verification, improving medical care, improving quality of services, web technologies, hardware implementations, and cultural implications. This book presents the 37 best-ranked articles.

Table of Contents

Frontmatter

Controlling Robots Using EEG Signals, Since 1988

The paper considers the emergence of the field of controlling robots using EEG signals. It looks back at the first result in the field, achieved in 1988, which, from the viewpoint of EEG-driven control, was the first result in controlling a physical object using EEG signals. The paper gives details of the development of the research infrastructure which enabled such a result, including a description of the lab setup and algorithms. The paper also describes the scientific context in which the result was achieved, by giving a short overview of the first ten papers in the field of EEG-driven control.

Stevo Bozinovski

Hybrid 2D/1D Blocking as Optimal Matrix-Matrix Multiplication

Multiplication of huge matrices generates more cache misses than multiplication of smaller ones. A 2D block decomposition into blocks that fit in the L1 CPU cache decreases cache misses, since the operations access only data stored in the L1 cache. However, it also requires additional reads, writes, and operations compared to 1D partitioning, since the blocks are read multiple times.

In this paper we propose a new hybrid 2D/1D partitioning to exploit the advantages of both approaches. The idea is first to partition the matrices into 2D blocks and then to multiply each block with 1D partitioning to achieve the minimum number of cache misses. As in 2D block decomposition, we select a block size that fits in the L1 cache, but we use rectangular instead of square blocks in order to reduce both the number of operations and the impact of cache associativity. The experiments show that our proposed algorithm outperforms the 2D blocking algorithm for huge matrices on an AMD Phenom CPU.
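The basic 2D blocking idea that this chapter builds on can be sketched as follows; this is a minimal NumPy illustration of tiled matrix multiplication, not the authors' hybrid 2D/1D implementation, and the block size used is an arbitrary placeholder rather than a cache-tuned value:

```python
import numpy as np

def blocked_matmul(A, B, block=64):
    """Multiply square matrices A and B using 2D block (tile) decomposition.

    Each (block x block) tile is meant to be small enough to fit in the
    L1 cache, so the per-tile products reuse cached data. The block size
    here is illustrative only; a real implementation tunes it to the cache.
    """
    n = A.shape[0]
    C = np.zeros((n, n))
    for i0 in range(0, n, block):
        for j0 in range(0, n, block):
            for k0 in range(0, n, block):
                # Multiply one pair of tiles; NumPy handles the inner loops.
                C[i0:i0+block, j0:j0+block] += (
                    A[i0:i0+block, k0:k0+block] @ B[k0:k0+block, j0:j0+block]
                )
    return C
```

The hybrid scheme in the chapter would further split each tile product into 1D strips and use rectangular tiles; that refinement is not shown here.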

Marjan Gusev, Sasko Ristov, Goran Velkoski

P2P Assisted Streaming for Low Popularity VoD Contents

The Video on Demand (VoD) service is becoming a dominant service in the telecommunications market due to the great convenience regarding the choice of content items and their independent viewing time. However, due to its high traffic demands, VoD streaming systems face the problem of huge amounts of traffic generated in the core of the network, especially when serving requests for content items that are not in the top popularity range. Therefore, we propose a peer-assisted VoD model that takes advantage of the clients' unused uplink and storage capacity to serve requests for less popular items, with the objective of keeping the traffic on the periphery of the network, reducing the transport cost in the core of the network and making the system more scalable.

Sasho Gramatikov, Fernando Jaureguizar, Igor Mishkovski, Julián Cabrera, Narciso García

Cross-Language Acoustic Modeling for Macedonian Speech Technology Applications

This paper presents a cross-language development method for speech recognition and synthesis applications for the Macedonian language. A unified system for speech recognition and synthesis trained on German language data was used for acoustic model bootstrapping and adaptation. Both knowledge-based and data-driven approaches for source and target language phoneme mapping were used for the initial transcription and labeling of a small amount of recorded speech. The recognition experiments with the source language acoustic model on the target language dataset showed significant recognition performance degradation. Acceptable performance was achieved after Maximum a Posteriori (MAP) model adaptation with a limited amount of target language data, allowing suitable use in small to medium vocabulary speech recognition applications. The same unified system was used again to train a new separate acoustic model for HMM-based synthesis. Qualitative analysis showed that, despite the low quality of the available recordings and sub-optimal phoneme mapping, HMM synthesis produces perceptually good and intelligible synthetic speech.

Ivan Kraljevski, Guntram Strecha, Matthias Wolff, Oliver Jokisch, Slavcho Chungurski, Rüdiger Hoffmann

Efficient Classification of Long Time-Series

Time-series classification has gained wide attention within the Machine Learning community, due to its large range of applicability, varying from medical diagnosis and financial markets to shape and trajectory classification. The current state-of-the-art methods in time-series classification rely on detecting similar instances through nearest-neighbor algorithms. Dynamic Time Warping (DTW) is a similarity measure that can identify the similarity of two time-series through the computation of the optimal warping alignment of time point pairs; DTW is therefore immune to patterns shifted in time or distorted in size/shape. Unfortunately, the time complexity of computing the DTW distance of two series is quadratic, so DTW-based nearest-neighbor classification deteriorates to quartic order of time complexity per test set. This high time complexity makes the classification of long time series practically infeasible. In this study we propose a fast method with linear classification complexity. Our method projects the original data to a reduced latent dimensionality using matrix factorization, while the factorization is learned efficiently via stochastic gradient descent with fast convergence rates and early stopping. The latent data dimensionality is set as low as the cardinality of the label variable. Finally, Support Vector Machines with polynomial kernels are applied to classify the reduced-dimensionality data. Experiments on long time-series datasets from the UCR collection demonstrate the superiority of our method, which is orders of magnitude faster than the baselines while being superior even in terms of classification accuracy.
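The quadratic cost that motivates this chapter comes from the standard DTW dynamic-programming table; a minimal sketch of the classic recurrence (not the chapter's proposed method, which avoids DTW entirely):

```python
def dtw_distance(x, y):
    """Classic O(len(x) * len(y)) dynamic-programming DTW distance.

    D[i][j] holds the cost of the best warping alignment of the first
    i points of x with the first j points of y; filling the whole table
    is what makes 1-NN DTW expensive for long series.
    """
    n, m = len(x), len(y)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # Extend the cheapest of the three admissible alignments.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

Identical series have distance 0, and time-shifted patterns score far lower than under a pointwise (Euclidean) comparison, which is the property the abstract refers to.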

Josif Grabocka, Erind Bedalli, Lars Schmidt-Thieme

Towards a Secure Multivariate Identity-Based Encryption

We investigate the possibilities of building a Multivariate Identity-Based Encryption (IBE) scheme, such that for each identity the obtained public key encryption scheme is Multivariate Quadratic (MQ). The biggest problem in creating an IBE scheme with classical MQ properties is the possibility of a collusion of a polynomial number of users against the master key or the keys of other users. We present a solution that makes the collusion of a polynomial number of users computationally infeasible, although still possible. The proposed solution is a general model for a Multivariate IBE scheme with exponentially many public-private keys that are instances of an MQ public key encryption scheme.

Simona Samardjiska, Danilo Gligoroski

Optimal Cache Replacement Policy for Matrix Multiplication

Matrix multiplication is a compute-intensive, memory-demanding and cache-intensive algorithm. It performs O(N³) operations, demands storing O(N²) elements and accesses each element O(N) times, where N is the matrix size. Implementations of cache-intensive algorithms can achieve speedups due to cache memory behavior if the algorithms frequently reuse the data. A block replacement of already stored elements is initiated when the requirements exceed the limitations of the cache size. Cache misses occur when data of a replaced block is to be used again. Several cache replacement policies have been proposed to speed up different program executions.

In this paper we analyze and compare the two most widely implemented cache replacement policies, First-In-First-Out (FIFO) and Least-Recently-Used (LRU). The results of the experiments show the optimal solutions for the sequential and parallel dense matrix multiplication algorithms. As the number of operations does not depend on the cache replacement policy, we define and determine the average number of memory cycles per instruction that the algorithm performs, since this mostly affects the performance.
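The difference between the two policies compared in this chapter can be illustrated with a toy fully-associative cache simulator; this is a didactic sketch, not the chapter's experimental setup:

```python
from collections import OrderedDict

def misses(trace, capacity, policy="LRU"):
    """Count cache misses for a block-access trace under FIFO or LRU.

    Models a fully-associative cache with `capacity` blocks. Under LRU a
    hit refreshes the block's recency; under FIFO the insertion order is
    never changed, so eviction ignores reuse.
    """
    cache = OrderedDict()  # keys in eviction order (oldest first)
    miss = 0
    for block in trace:
        if block in cache:
            if policy == "LRU":
                cache.move_to_end(block)  # a hit makes the block most recent
        else:
            miss += 1
            if len(cache) >= capacity:
                cache.popitem(last=False)  # evict the front of the order
            cache[block] = None
    return miss
```

For a trace such as `[1, 2, 3, 1, 4, 1]` with capacity 3, LRU keeps the frequently reused block 1 resident while FIFO evicts it, so LRU produces fewer misses; reuse-heavy access patterns like blocked matrix multiplication are exactly where the choice of policy matters.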

Nenad Anchev, Marjan Gusev, Sasko Ristov, Blagoj Atanasovski

Multimodal Medical Image Retrieval

Medical image retrieval is one of the crucial tasks in everyday medical practice. This paper investigates three forms of medical image retrieval: text, visual and multimodal. We evaluate different weighting models for text retrieval. For visual retrieval, we focus on extracting low-level features and examining their performance. For multimodal retrieval, we use late fusion to combine the best text and visual results. We found that the choice of weighting model for text retrieval dramatically influences the outcome of the multimodal retrieval. The results from the text and visual retrieval are fused using a linear combination, which is among the simplest and most frequently used methods. Our results clearly show that fusing text and visual retrieval with an appropriate fusion technique improves the retrieval performance.

Ivan Kitanovski, Katarina Trojacanec, Ivica Dimitrovski, Suzana Loskovska

Modeling Lamarckian Evolution: From Structured Genome to a Brain-Like System

The paper addresses the development of a brain-like system based on the Lamarckian view of evolution. It describes the development of an artificial brain from an artificial genome, through a neural stem cell. In the presented design, a modulon level of genetic hierarchical control is used. In order to evolve such a system, two environments are considered: genetic and behavioral. The genome comes from the genetic environment, evolves into an artificial brain, and then updates the memory through interaction with the behavioral environment. The updated genome can then be sent back to the genetic environment. The memory units of the artificial brain are synaptic weights, which in this paper represent achievement motivations of the agent, so they are updated in relation to a particular achievement. A simulation of the process of learning by updating achievement motivations in the behavioral environment is also shown.

Liljana Bozinovska, Nevena Ackovska

Numerical Verifications of Theoretical Results about the Weighted $({\cal W}(b);\gamma)-$ Diaphony of the Generalized Van der Corput Sequence

The weighted $({\cal W}(b);\gamma)-$diaphony is a new quantitative measure for the irregularity of distribution of sequences. In previous works of the authors, the exact order ${\cal O}\left({1 \over N}\right)$ of the weighted $({\cal W}(b);\gamma)-$diaphony of the generalized Van der Corput sequence has been found. Here, we give an upper bound on the weighted $({\cal W}(b);\gamma)-$diaphony, which is an analogue of the classical Erdös-Turán-Koksma inequality with respect to this kind of diaphony. This permits us to make computational simulations of the weighted $({\cal W}(b);\gamma)-$diaphony of the generalized Van der Corput sequence. Different choices of sequences of permutations of the set {0, 1, …, b − 1} are practically realized, and the $({\cal W}(b);\gamma)-$diaphony of the corresponding generalized Van der Corput sequences is numerically calculated and discussed.

Vesna Dimitrievska Ristovska, Vassil Grozdanov

Stability Analysis of Impulsive Stochastic Cohen-Grossberg Neural Networks with Mixed Delays

For impulsive stochastic Cohen-Grossberg neural networks with mixed time delays, we study in the present paper p-th moment (p ≥ 2) stability on a general decay rate. Using the theory of Lyapunov functions, the M-matrix technique and some well-known inequalities, we generalize and improve some known results on exponential stability. The presented theory allows us to study p-th moment stability even when exponential stability cannot be shown. Some examples are presented to support and illustrate the theory.

Biljana Tojtovska

How Lightweight Is the Hardware Implementation of Quasigroup S-Boxes

In this paper, we present a novel method for realizing S-boxes using non-associative algebraic structures, namely quasigroups, which in certain cases leads to more optimized hardware implementations. We aim to give cryptographers an iterative tool for designing cryptographically strong S-boxes (which we denote as Q-S-boxes) with additional flexibility for hardware implementation. The existence of the set of cryptographically strong 4-bit Q-S-boxes depends on the non-linear quasigroups of order 4 and quasigroup string transformations. The Q-S-boxes offer the option not only to iteratively reuse the same circuit to implement several different strong 4-bit S-boxes, but also to serialize the implementation down to the bit level, leading to S-box implementations below 10 GEs. With Q-S-boxes we can achieve over 40% area reduction with respect to a lookup table based implementation, and also over 16% area reduction in a parallel implementation of Present. We plan to generalize our approach to S-boxes of any size in the future.

Hristina Mihajloska, Tolga Yalcin, Danilo Gligoroski

Comparison of Models for Recognition of Old Slavic Letters

This paper compares two methods for the classification of Old Slavic letters. Traditional letter recognition programs cannot be applied to Old Slavic Cyrillic manuscripts because these letters have unique characteristics. The first classification method is based on a decision tree and the second one uses fuzzy techniques. Both methods use the same set of features extracted from the letter bitmaps. Results from the conducted research reveal that the discriminative features for recognition of Church Slavic letters are the number and position of spots in the outer segments, the presence and position of vertical and horizontal lines, compactness and symmetry. The efficiency of the implemented classifiers is tested experimentally.

Mimoza Klekovska, Cveta Martinovska, Igor Nedelkovski, Dragan Kaevski

Emotion-Aware Recommender Systems – A Framework and a Case Study

Recent work has shown an increase of accuracy in recommender systems that use emotive labels. In this paper we propose a framework for emotion-aware recommender systems and present a survey of the results in such recommender systems. We present a consumption-chain-based framework and we compare three labeling methods within a recommender system for images: (i) generic labeling, (ii) explicit affective labeling and (iii) implicit affective labeling.

Marko Tkalčič, Urban Burnik, Ante Odić, Andrej Košir, Jurij Tasič

OGSA-DAI Extension for Executing External Jobs in Workflows

Because the Grid is by nature a heterogeneous and distributed environment, database systems that run on the Grid must support this architecture. OGSA-DAI is an example of such an extensible service-based framework that allows data resources to be incorporated into Grid fabrics. On the other hand, many algorithms (for example, for data mining) are not built in Java, are not open source projects and cannot easily be incorporated into OGSA-DAI workflows. For that reason, we propose an OGSA-DAI extension with new computational resources and activities that allow the execution of external jobs and the return of their data into an OGSA-DAI workflow. In this paper, we introduce heterogeneous and distributed databases, then we discuss our proposed model, and finally we report on our initial implementation.

Ǧorgi Kakaševski, Anastas Mishev, Armey Krause, Solza Grčeva

Quasigroup Representation of Some Feistel and Generalized Feistel Ciphers

There are several block ciphers designed using Feistel networks or their generalizations, and some of them can be represented using quasigroup transformations, for suitably defined quasigroups. We are interested in those Feistel ciphers and Generalized Feistel ciphers whose round functions in their Feistel networks are bijections. In that case we can define the wanted quasigroups using suitable orthomorphisms derived from the corresponding Feistel networks. Quasigroup representations of the block ciphers MISTY1, Camellia, Four-Cell+ and SMS4 are given as examples.

Aleksandra Mileva, Smile Markovski

Telemedical System in the Blood Transfusion Service: Usage Analysis

Partners FE, KROG and ZTM (in alphabetical order), as stated in the acknowledgements, have developed, manufactured and installed a telemedical system in the blood transfusion service of Slovenia. The system was installed in eleven hospitals offering blood transfusion services and two blood transfusion centers, and was in use for nearly seven years. After this period of operation, a snapshot of the system state was taken and analyzed. The analysis focused on per-hospital usage preferences over time. The distribution of patients' ABO RhD blood types was also analyzed. The paper presents the telemedical system and describes the method of data collection and the data analysis methods. The final section presents the results, accompanied by a discussion in which the economic impact of the telemedical system in comparison to manual operation is briefly assessed.

Marko Meža, Jurij Tasič, Urban Burnik

On the Strong and Weak Keys in MQQ-SIG

In this paper we describe a methodology for identifying strong and weak keys in the recently introduced multivariate public-key signature scheme MQQ-SIG. We have conducted a large number of experiments based on Gröbner basis attacks in order to classify the various parameters that determine the keys in MQQ-SIG. Our findings show that there are big differences in the importance of these parameters. The methodology consists of a classification of the different parameters in the scheme, together with the introduction of concrete criteria on which keys to avoid and which to use. Finally, we propose an enhanced key generation algorithm for MQQ-SIG that generates stronger keys and is more efficient than the original key generation method.

Håkon Jacobsen, Simona Samardjiska, Danilo Gligoroski

An Approach to Both Standardized and Platform Independent Augmented Reality Using Web Technologies

Augmented reality has become very popular in the mobile computing world. Many platforms have emerged that offer a wide variety of tools for creating and presenting content to users; however, all of them use proprietary technologies that lack standardization. On the other hand, the latest developments in the suite of web standards offer all the capabilities required for building augmented reality applications. This paper offers a different approach to implementing augmented reality applications using only web technologies, which enables standardization and platform independence.

Marko Ilievski, Vladimir Trajkovik

Dynamics of Global Information System Implementations: A Cultural Perspective

This paper presents the results of an exploration of theoretical views on the role of culture in the dynamics and processes of global information system implementations, as well as the preliminary results of our case study. Several previous studies show that at the intersection of information system implementation and culture, one must address a construct that may exist at one or more organisational levels simultaneously. We look at global information system implementation processes and dynamics from a qualitative perspective, observing a situation in its own context in a case study, in order to gain further insights into key cultural elements as variables which are external to information system technology and seemingly emergent elements of global and multi-sited information systems.

Marielle C. A. van Egmond, Dilip Patel, Shushma Patel

Compute and Memory Intensive Web Service Performance in the Cloud

Migrating web services from a company's on-site premises to the cloud provides the ability to exploit flexible, scalable and dynamic resources paid per usage, and therefore lowers the overall IT costs. However, the additional layer that virtualization adds in the cloud decreases the performance of the web services. Our goal is to test the performance of compute- and memory-intensive web services in both on-premises and cloud environments. We perform a series of experiments to analyze web service performance and to measure the level of degradation when web services are migrated from on-premises to the cloud using the same hardware resources. The results show a performance degradation in the cloud for every test performed, varying the server load by changing the message size and the number of concurrent messages. The cloud decreases the performance to 71.10% of on-premises for the memory-demanding web service, and to 73.86% for the web service that is both memory-demanding and compute-intensive. The cloud shows smaller performance degradation for larger message sizes with the memory-demanding web service, and for larger message sizes and smaller numbers of concurrent messages with the combined memory-demanding and compute-intensive web service.

Sasko Ristov, Goran Velkoski, Marjan Gusev, Kiril Kjiroski

Diatom Indicating Property Discovery with Rule Induction Algorithm

In the relevant literature, the ecological preferences of diatoms are organized using rules that take into account the physical-chemical parameters influencing diatom abundance. This group of influencing parameters typically consists of parameters such as conductivity, saturated oxygen, pH, Secchi disk depth and total phosphorus. In this direction, this paper aims at building diatom classification models using two proposed dissimilarity metrics with predictive clustering rules in order to discover the diatom indicating properties. The proposed metrics play an important role in every aspect of estimating the quality of the rules, from dispersion to prototype distance, and thus lead to increased descriptive/predictive classification accuracy. We compare the proposed metrics by classification and rule quality metrics, and based on the results, several sets of rules for each WQ and TSI category class are presented, discussed and verified against the known ecological references found in the diatom literature.

Andreja Naumoski, Kosta Mitreski

Cryptographic Properties of Parastrophic Quasigroup Transformation

We consider cryptographic properties of the parastrophic quasigroup transformation defined elsewhere. Using this transformation we classify the quasigroups of order 4 into three classes: 1) parastrophic fractal; 2) fractal and parastrophic non-fractal; and 3) non-fractal. We investigate the algebraic properties of the above classes and present a relationship between fractal and algebraic properties of quasigroups of order 4. We also find the number of different parastrophes of each quasigroup of order 4 and use it to divide the set of all quasigroups of order 4 into four classes. Using these classifications, the number of quasigroups of order 4 which are suitable for the design of cryptographic primitives is increased compared to the case where parastrophes are not used.

Vesna Dimitrova, Verica Bakeva, Aleksandra Popovska-Mitrovikj, Aleksandar Krapež

Software Engineering Practices and Principles to Increase Quality of Scientific Applications

The goal of this paper is to propose software engineering practices and principles that could increase the quality of scientific applications. Since standard principles of software engineering cannot fully engage with the development process of such applications, finding the right principles, and the combination of them that will improve quality, is a real challenge for software engineers. In order to provide a more realistic representation of the problems in the field of scientific high-performance computing, we conducted a survey in which developers of scientific applications in the HP-SEE project answered key questions about the testing methods and conventions they used. The analysis of the responses was a major indicator of quality deficiencies in high-performance scientific software development, and it helped us to discern possible improvements to be added in the planning, development and particularly the verification phases of the software life cycle.

Bojana Koteska, Anastas Mishev

Performance Evaluation of Computational Phylogeny Software in Parallel Computing Environment

Computational phylogeny is a challenging application even for the most powerful supercomputers. One significant application in this area is Randomized Axelerated Maximum Likelihood (RAxML), which is used for sequential and parallel Maximum Likelihood based inference of large phylogenetic trees. This paper covers scalability testing results on high-performance computers with up to 256 cores, for coarse- and fine-grained parallelization using MPI, Pthreads and a hybrid version, and a comparison between the results of the traditional and SSE3 versions of RAxML.

Luka Filipović, Danilo Mrdak, Božo Krstajić

Cryptographically Suitable Quasigroups via Functional Equations

The use of quasigroups in cryptography is increasingly popular. One method to find quasigroups suitable for cryptographic purposes is to use identity sieves, i.e. to find appropriate identities and check candidate quasigroups against them. We propose a functional equation approach to this problem. Namely, every identity can be considered as a functional equation, and solutions to these equations as models of the given identities. The identity, i.e. functional equation, can be transformed into a related generalized functional equation which is suitable for algebraic treatment. A new method for the solution of quadratic and parastrophically uncancellable equations is given, using trees and dichotomies (special equivalence relations). The general solution is given by closed formulas. The quasigroups obtained can be further filtered using much simpler conditions.

Aleksandar Krapež

Ontology Supported Patent Search Architecture with Natural Language Analysis and Fuzzy Rules

We have recently witnessed a rapid growth in scientific information retrieval research related to patents. Retrieving relevant information from and about patents is a non-trivial task and poses many technical challenges. In this paper we present a new approach to patent search that combines semantic knowledge and ontologies used to annotate patents processed with natural language processing tools. The architecture uses fuzzy logic rules to organize the annotated patents and achieve more precise retrieval. Our approach to combine proven techniques in a composite architecture showed improved results compared to pure textual based indexing and retrieval. We also showed that results ranked using semantic annotation are better than results based on simple keyword frequencies.

Daniela Boshnakoska, Ivan Chorbev, Danco Davcev

Superlinear Speedup for Matrix Multiplication in GPU Devices

Speedup of parallel execution on SIMD architectures is, according to Amdahl's Law, finite. Furthermore, according to Gustafson's Law, there are algorithms that can achieve almost linear speedup. However, researchers have found examples of superlinear speedup for certain types of algorithms executed on specific multiprocessors.

In this paper we achieve superlinear speedup on GPU devices, which are also categorized as SIMD. We implement a structure-persistent algorithm which efficiently exploits the shared cache memory and avoids cache misses as much as possible. Our theoretical analysis and experimental results show the existence of superlinear speedup for algorithms that run on an existing GPU device.

Leonid Djinevski, Sasko Ristov, Marjan Gusev

Verifying Liveness in Supervised Systems Using UPPAAL and mCRL2

Supervisory control ensures safe coordination of high-level discrete-event system behavior. Supervisory controllers observe discrete-event system behavior, make a decision on allowed activities, and communicate the control signals to the involved parties. Models of such controllers are automatically synthesized from the formal models of the unsupervised system and the specified safety requirements. Traditionally, supervisory controllers do not ensure that intended behavior is preserved, but only that undesired behavior is precluded. Recent work suggested that ensuring liveness properties during the synthesis procedure is a costly undertaking. Therefore, we augment state-of-the-art synthesis tools to provide for efficient post-synthesis verification. To this end, we interface a model-based systems engineering framework with the state-based model checker UPPAAL and the event-based tool suite mCRL2. We demonstrate the framework on an industrial case study involving coordination of maintenance procedures of a high-end printer. Based on our experiences, we discuss the advantages and disadvantages of the tools used, and compare the functionality they offer and the extent to which it is useful in our proposed method.

Jasen Markovski, M. A. Reniers

Recognition of Colorectal Carcinogenic Tissue with Gene Expression Analysis Using Bayesian Probability

According to WHO research from 2008, colorectal cancer caused approximately 8% of all cancer deaths worldwide. Only a particular set of genes is responsible for its occurrence. Their increased or decreased expression levels cause the cells in the colorectal region not to work properly, i.e. the processes they are associated with are disrupted. This research aims to unveil those genes and build a model that determines whether a patient is carcinogenic. We propose a realistic modeling of the gene expression probability distribution and use it to calculate the Bayesian posterior probability for classification. We developed a new methodology for obtaining the best classification results. The gene expression profiling is done using DNA microarray technology. In this research, 24,526 genes were monitored in carcinogenic and healthy tissues equally. We also used SVMs and binary decision trees, which produced very satisfactory results.
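The Bayesian posterior classification step can be illustrated for a single gene with assumed Gaussian class-conditional expression distributions; this is a simplified stand-in for the realistic distributions modeled in the chapter, and the parameter values are hypothetical:

```python
import math

def posterior_carcinogenic(x, mu, sigma, prior=0.5):
    """Posterior P(cancer | expression level x) for one gene via Bayes' rule.

    Assumes Gaussian class-conditional densities with per-class mean and
    standard deviation (mu["cancer"], sigma["cancer"], etc.); the real
    study fits more realistic distributions over many genes.
    """
    def gauss(v, m, s):
        return math.exp(-((v - m) ** 2) / (2 * s ** 2)) / (s * math.sqrt(2 * math.pi))

    like_cancer = gauss(x, mu["cancer"], sigma["cancer"])
    like_healthy = gauss(x, mu["healthy"], sigma["healthy"])
    # Bayes' rule: posterior = prior * likelihood / evidence.
    evidence = prior * like_cancer + (1 - prior) * like_healthy
    return prior * like_cancer / evidence
```

With equal priors and equal spreads, an expression level midway between the two class means yields a posterior of exactly 0.5, and levels near the carcinogenic mean push the posterior toward 1.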

Monika Simjanoska, Ana Madevska Bogdanova, Zaneta Popeska

Mobile Users ECG Signal Processing

In recent years we have witnessed the growth of a number of multimedia, health, cognitive learning and gaming user applications which include monitoring and processing of the users' physiological signals, also termed biosignals or vital signs. The acquisition of the biosignals should be non-invasive and should not affect the activities and arousal of the user, in order to achieve relevant results. Developments in mobile devices, e.g. smartphones and tablet PCs, and in electrode design have enabled unobtrusive acquisition and processing of biosignals, especially for mobile and non-clinical applications. The paper reviews recently developed non-clinical applications that exploit biosignal information. It analyses the challenges for digital signal processing that arise from data acquisition from the mobile user, with a focus on the electrocardiogram (ECG). The influence of the analysed challenges is demonstrated on a selected QRS detection algorithm using signals from the MIT-BIH Noise Stress Test Database (nstdb). Results confirm that algorithms for processing signals of mobile users need a more thorough preprocessing procedure than simple band-pass filtering.

Emil Plesnik, Matej Zajc
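The kind of QRS preprocessing pipeline at issue can be sketched in a heavily simplified, Pan-Tompkins-style form: band-pass filtering, differentiation, squaring and integration. This is an illustrative toy (crude moving-average filters, a synthetic one-spike signal), not the algorithm selected in the paper:

```python
def moving_average(x, w):
    # Causal moving average; shorter windows at the start of the signal.
    return [sum(x[max(0, i - w + 1):i + 1]) / len(x[max(0, i - w + 1):i + 1])
            for i in range(len(x))]

def bandpass(x, short_w=3, long_w=15):
    # Crude band-pass: the difference of a short and a long moving average
    # suppresses both high-frequency noise and baseline wander.
    return [s - l for s, l in zip(moving_average(x, short_w), moving_average(x, long_w))]

def qrs_feature(x):
    # Derivative, squaring and integration emphasize the steep QRS slopes.
    deriv = [x[i] - x[i - 1] for i in range(1, len(x))]
    squared = [d * d for d in deriv]
    return moving_average(squared, 8)

# Synthetic "ECG": flat baseline with one sharp spike standing in for a QRS complex.
signal = [0.0] * 40 + [0.2, 1.0, -0.6, 0.1] + [0.0] * 40
feature = qrs_feature(bandpass(signal))
peak_index = max(range(len(feature)), key=feature.__getitem__)
```

The paper's point is precisely that for mobile users the `bandpass` stage above is not enough: motion artifacts and electrode noise demand a more thorough preprocessing front end.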

Top-Down Approach for Protein Binding Sites Prediction Based on Fuzzy Pattern Trees

Understanding the relation between protein structure and protein function is one of the main research topics in bioinformatics today. Due to the complexity of the methods for determining protein functions, there are many proteins with unknown functions. Hence, many researchers investigate various computational methods for determining protein functions. We focus on methods for predicting protein binding sites, whose characteristics can afterwards be used for annotating protein structures. In order to overcome the problem of sensitivity to data changes, we previously introduced fuzzy theory for protein binding site prediction. In this paper we introduce an approach for detecting protein binding sites using top-down induction of fuzzy pattern trees. This approach outperforms the existing bottom-up approach for inducing fuzzy pattern trees, as well as most of the examined approaches based on classical classification algorithms.

Georgina Mirceva, Andrea Kulakov
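A fuzzy pattern tree aggregates fuzzified feature values at its leaves through fuzzy operators at its inner nodes, yielding a membership degree at the root. The following sketch shows only this evaluation step (the tree shape, operators and leaf values are invented; the induction procedure, top-down or bottom-up, is the paper's actual contribution and is not shown):

```python
# A pattern tree is a nested tuple ("op", child, child); leaves are
# membership degrees in [0, 1] obtained by fuzzifying residue features.
OPS = {
    "min": min,                        # fuzzy AND (a t-norm)
    "max": max,                        # fuzzy OR (a t-conorm)
    "avg": lambda a, b: (a + b) / 2,   # an averaging operator
}

def evaluate(node):
    # Recursively aggregate leaf membership degrees up to the root score.
    if isinstance(node, (int, float)):
        return node
    op, left, right = node
    return OPS[op](evaluate(left), evaluate(right))

# Hypothetical fuzzified surface features of one residue.
tree = ("avg", ("min", 0.8, 0.6), ("max", 0.3, 0.9))
score = evaluate(tree)          # degree to which the residue is a binding site
is_binding = score >= 0.5
```

Because the output is a graded membership rather than a hard label, small perturbations of the inputs shift the score smoothly, which is the robustness-to-data-changes argument behind the fuzzy approach.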

Component-Based Development: A Unified Model of Reusability Metrics

The inability to use standard software reusability metrics when measuring component reusability makes the choice of a reusability metric a challenging problem in software engineering. In this paper, we give a critical review of the existing component reusability metrics and suggest new attributes to be included as additional conditions when evaluating component reusability. Due to the incompleteness of the already proposed metrics and the lack of a universally accepted and transparent model for measuring reusability, we define a unified model that can be adapted to different reusability requirements and various component solutions. In order to improve the process of measuring component reusability, we create a prototype for modeling and combining metrics, in which reusability can be calculated using existing or newly composed formulas. This prototype will facilitate the process of testing component reusability and will allow users to easily select the right component to integrate into their system.

Bojana Koteska, Goran Velinov
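The core of such a unified model is combining normalized per-attribute scores through a configurable formula. A minimal sketch, using a weighted average with attribute names and weights invented for illustration (the paper's actual attribute set and formulas may differ):

```python
def reusability(metrics, weights):
    """Combine normalized metric scores (each in [0, 1]) into one reusability value.

    metrics: {attribute: score}; weights: {attribute: relative importance}.
    """
    total = sum(weights.values())
    return sum(metrics[name] * w for name, w in weights.items()) / total

# Hypothetical component attribute scores and their relative importance.
metrics = {"interface_complexity": 0.7, "documentation": 0.9, "portability": 0.6}
weights = {"interface_complexity": 2.0, "documentation": 1.0, "portability": 1.0}
score = reusability(metrics, weights)
```

Keeping the weights external to the function is what lets the same model be adapted to different reusability requirements, as the abstract proposes.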

Scalability of Gravity Inversion with OpenMP and MPI in Parallel Processing

The geophysical inversion of gravity anomalies is investigated using the GMI application, based on the CLEAR algorithm, run on parallel systems. Parallelization is done using both OpenMP and MPI. The time-domain scalability of the iterative inversion process is analyzed, comparing previously reported OpenMP-based results with recent data from MPI tests. The runtime for small models was not improved by increasing the number of processing cores. The user runtime grew faster with model size for MPI than for OpenMP, so for big models the latter offers better runtime. Walltime scalability in multi-user systems did not improve with an increasing number of processing cores, as a result of time sharing. The results confirm runtime scalability of the order O(N^8) relative to the linear size N of the 3D models, while the impact of increasing the number of involved cores remains disputable when walltime is considered. The walltime upper limit for modest-resolution 3D models with 41*41*21 nodes was 10^5 seconds, suggesting the need for MPI in multi-cluster systems and for GPUs to reach better resolution. The results were obtained in the framework of the FP7 infrastructure project HP-SEE.

Neki Frasheri, Betim Çiço
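The practical weight of an O(N^8) runtime law is easy to make concrete: doubling the linear model size multiplies the runtime by 2^8 = 256. A small sketch, where the reference runtime and size are assumed values for illustration, not measurements from the paper:

```python
def predicted_runtime(t_ref, n_ref, n):
    # Reported scaling: runtime grows as O(N^8) in the linear model size N.
    return t_ref * (n / n_ref) ** 8

# Assumed reference point: a model of linear size 20 taking 100 seconds.
t_same = predicted_runtime(100.0, 20, 20)    # unchanged size
t_double = predicted_runtime(100.0, 20, 40)  # doubled linear size
```

This growth rate is what makes higher-resolution models infeasible on a single shared cluster and motivates the move to multi-cluster MPI and GPUs.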

Getting Beyond Empathy

A Tougher Approach to Emotional Intelligence

Emotional intelligence (EI) is considered to be an essential part of a project manager’s skill set, often meaning the difference between project success and failure. Project managers with naturally high EI have a distinct advantage, employing their interpersonal sensitivity and tempered communication style to convince and persuade even the most difficult team members or stakeholders.

However, when negotiating over resources, budgets and schedules that are critical for the project’s success, an overemphasis on openness, trust and willingness to compromise can leave high EI negotiators vulnerable to exploitation, particularly when dealing with more ruthless lower EI counterparts.

This paper examines emotional resilience and tough tactics in negotiation as important elements of EI in project management that paradoxically can be especially difficult for naturally high EI individuals. It assesses current theory and suggests areas for further research.

Andrew James Miller, Shushma Patel, George Ubakanma

Implementation in FPGA of 3D Discrete Wavelet Transform for Imaging Noise Removal

The Discrete Wavelet Transform (DWT) is one of the most commonly used signal transformations. It uses wavelets as filters, yielding a frequency-time-amplitude representation of the signal. Analyzing the resulting high-frequency coefficients makes it possible to remove noise from the signal.

In this paper, to remove noise from images we perform a 3D DWT (a 1D DWT in three directions) using the biorthogonal Daubechies 9/7 filters. Due to their symmetry, these filters are particularly suitable for image transformation. To reduce the number of multiply-and-accumulate operations arising during signal filtering, the Distributed Arithmetic (DA) technique is used, taking advantage of the symmetry of the 9/7 filters. VHDL is used as the hardware description language and some modules are programmed in MATLAB. The implementation is mapped onto the FPGA Virtex-5 XUPV5-LX110T platform.

Ina Papadhopulli, Betim Çiço
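The wavelet-thresholding idea behind the design can be sketched in software with the much simpler Haar wavelet standing in for the paper's biorthogonal 9/7 filters (the threshold and signal below are invented; the hardware DA formulation is not shown):

```python
import math

def haar_dwt(x):
    # One-level Haar DWT: pairwise averages (approximation) and differences (detail).
    s = math.sqrt(2)
    approx = [(x[i] + x[i + 1]) / s for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / s for i in range(0, len(x), 2)]
    return approx, detail

def haar_idwt(approx, detail):
    # Exact inverse of haar_dwt.
    s = math.sqrt(2)
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) / s, (a - d) / s]
    return out

def denoise(x, threshold):
    # Zero the small high-frequency coefficients, then reconstruct.
    approx, detail = haar_dwt(x)
    detail = [d if abs(d) > threshold else 0.0 for d in detail]
    return haar_idwt(approx, detail)

noisy = [1.0, 1.1, 2.0, 1.9, 3.0, 3.05, 4.0, 4.2]
clean = denoise(noisy, 0.2)   # small pairwise jitter is smoothed out
```

For 3D image data the same one-dimensional transform is applied along rows, columns and depth in turn, which is exactly the "1D-DWT in three directions" structure the abstract describes.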

IPv6 Transition Mechanisms and Deployment of IPv6 at Ss. Cyril and Methodius University in Skopje

Bearing in mind the vast number of IP-enabled devices that are activated each day, it is clear that the deployment of IPv6 is a must. Many companies have already implemented some form of IPv6 network, but implementing IPv6 is not a small task: one cannot simply switch from IPv4 to IPv6. There are several strategies and different deployment models involved in the process, and there is no “one size fits all” solution. However, some methods are more recommended than others. In this paper, we give an overview of IPv6 transition mechanisms and their deployment in a specific case study. Our aim is to investigate the pros and cons of the transition mechanisms and decide on the best solution for the given problem.

Goce Gjorgjijoski, Sonja Filiposka

Backmatter
