2017 | Book

Cybernetics and Mathematics Applications in Intelligent Systems

Proceedings of the 6th Computer Science On-line Conference 2017 (CSOC2017), Vol 2

Editors: Radek Silhavy, Roman Senkerik, Zuzana Kominkova Oplatkova, Zdenka Prokopova, Petr Silhavy

Publisher: Springer International Publishing

Book Series: Advances in Intelligent Systems and Computing

About this book

This book presents new methods for and approaches to real-world problems, as well as exploratory research describing novel applications of mathematics and cybernetics in intelligent systems. It focuses on modern trends in selected fields of technological systems and automation control theory, and introduces new algorithms, methods and applications of intelligent systems in automation and in technological and industrial settings.

This book constitutes the refereed proceedings of the Cybernetics and Mathematics Applications in Intelligent Systems Section of the 6th Computer Science On-line Conference 2017 (CSOC 2017), held in April 2017.

Table of Contents

Frontmatter
Cost-Effective Computational Modeling of Fault Tolerant Optimization of FinFET-Based SRAM Cells

In the area of computational memory management, energy efficiency and proper utilization of memory cell area are under constant investigation. However, published research in this area is scarce compared with other topics in computer science. We reviewed existing techniques for improving the performance of FinFET-based SRAM and found that computational modeling is rarely adopted for optimization. We therefore model leakage power minimization as a linear optimization problem and develop a technique that ensures better fault-tolerant operation of FinFET-based SRAM using an enhanced particle swarm optimization. The algorithm has lower computational complexity than conventional evolutionary techniques and other recent performance-enhancement schemes, and offers better control over convergence rate and energy dissipation while ensuring fault tolerance.

H. Girish, D. R. Shashikumar
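
As a rough illustration of the particle swarm optimization step described above (a plain PSO, not the authors' enhanced variant), the sketch below minimizes a hypothetical linear leakage-cost model over box-constrained sizing variables; the cost coefficients, bounds and swarm parameters are all assumptions.

```python
import numpy as np

def pso_minimize(cost, lo, hi, n_particles=30, n_iter=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain particle swarm optimization over the box [lo, hi]^d."""
    rng = np.random.default_rng(seed)
    dim = lo.size
    x = rng.uniform(lo, hi, size=(n_particles, dim))        # positions
    v = np.zeros_like(x)                                     # velocities
    pbest = x.copy()
    pbest_f = np.apply_along_axis(cost, 1, x)
    g = pbest[np.argmin(pbest_f)].copy()                     # global best
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(cost, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, cost(g)

# Hypothetical linear leakage-cost model: weighted sum of sizing variables.
weights = np.array([0.8, 1.3, 0.5, 2.1])
leakage = lambda z: float(weights @ z)
best, best_cost = pso_minimize(leakage, lo=np.full(4, 0.5), hi=np.full(4, 2.0))
print(best, best_cost)
```
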
Application of Risk Theory Approach to Fuzzy Abduction

In this article, learning under the absence or incompleteness of some facts or premises about the problem domain is considered. This task does not fall under semi-supervised learning in the classical sense, because there it is assumed that the target signals are known and correct. The assumption of incompleteness is, however, natural for pattern recognition, e.g. for medical diagnostics. In such a situation, it is natural to base the learning process on abductive reasoning instead of induction or transduction. It is then important to have a quality criterion for the state of knowledge about the object under study. Previously, a fuzzy logical approach to abductive reasoning was studied for reconstructing missing training data. Here, fuzzy abduction is considered from a risk-theoretical point of view. As a result, in addition to the fuzzy abduction method, a general algorithm is suggested for finding the true state of the object under study when the known hypotheses about its state are mutually distant.

V. N. Tsypyschev
Enhanced TDS Stability Analysis Method via Characteristic Quasipolynomial Polynomization

Time-delay systems have infinite spectra which cannot be simply analyzed or controlled. One way to deal with this is to approximate the characteristic quasipolynomial by a polynomial that can then be handled with conventional tools, which is equivalent to finding a finite-dimensional model of the infinite-dimensional one describing the time-delay system. This contribution presents an improved extrapolation method that transforms a retarded quasipolynomial into a corresponding polynomial. The approximating polynomial is then used to analyze how the delay values affect the exponential stability of the system. Two ideas are adopted and compared: a linear interpolation approach using the Regula Falsi method, and Newton's method with root tendency. The whole procedure is easily implemented with standard software tools; a numerical example in the MATLAB® & Simulink® environment is provided for illustration.

Libor Pekař
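
As an illustration of the linear-interpolation idea mentioned above, the sketch below implements a generic Regula Falsi root search; the bracketed test function is a placeholder, not the paper's characteristic quasipolynomial.

```python
def regula_falsi(f, a, b, tol=1e-10, max_iter=100):
    """Linear-interpolation (Regula Falsi) root search on a bracket [a, b]
    with f(a) and f(b) of opposite sign."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = b - fb * (b - a) / (fb - fa)   # secant of the bracket endpoints
        fc = f(c)
        if abs(fc) < tol:
            return c
        if fa * fc < 0:
            b, fb = c, fc                  # root lies in [a, c]
        else:
            a, fa = c, fc                  # root lies in [c, b]
    return c

# Placeholder example: real root of x^3 - 2x - 5 on the bracket [2, 3].
print(regula_falsi(lambda x: x**3 - 2*x - 5, 2.0, 3.0))
```
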
Dissipativity of Multistep Runge–Kutta Methods for Nonlinear Neutral Delay Integro Differential Equations with Constrained Grid

This paper is concerned with the numerical dissipativity of multistep Runge-Kutta methods for nonlinear neutral delay-integro-differential equations. We investigate the dissipativity properties of $$ (k,l) $$-algebraically stable multistep Runge-Kutta methods with constrained grid. The finite-dimensional and infinite-dimensional dissipativity results of $$ (k,l) $$-algebraically stable multistep Runge-Kutta methods are obtained.

Haiyan Yuan, Cheng Song
Evaluation of Uncertainties of ITS-90 by Monte Carlo Method

The article briefly describes an approach to evaluating calibration uncertainties using the adaptive Monte Carlo method and its subsequent validation against the law of propagation of uncertainty, applied to the primary realization of the temperature scale. Emphasis is placed on measurement with a standard platinum resistance thermometer (SPRT), illustrated over the 0 to 660 °C range of the International Temperature Scale of 1990 (ITS-90).

Peter Sopkuliak, Rudolf Palenčár, Jakub Palenčár, Emil Suroviak, Jaromír Markovič
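
The Monte Carlo evaluation referred to above can be illustrated, in its simplest non-adaptive form, by propagating invented input uncertainties through a made-up measurement model and comparing the result with the first-order law of propagation of uncertainty; the model, values and trial count below are assumptions, not the authors' ITS-90 realization.

```python
import numpy as np

# Hypothetical measurement model: Y = a * (W - 1), where W is a resistance
# ratio read from an SPRT and a is a sensitivity coefficient. All values
# and standard uncertainties below are invented for illustration.
a, u_a = 400.0, 0.05          # K per unit ratio
W, u_W = 2.5, 1.0e-4          # measured ratio

M = 200_000                   # Monte Carlo trials
rng = np.random.default_rng(1)
a_s = rng.normal(a, u_a, M)
W_s = rng.normal(W, u_W, M)
Y_s = a_s * (W_s - 1.0)       # propagate the inputs through the model

y_mc, u_mc = Y_s.mean(), Y_s.std(ddof=1)
lo, hi = np.percentile(Y_s, [2.5, 97.5])   # 95 % coverage interval

# First-order law of propagation of uncertainty, for comparison:
u_lpu = np.hypot((W - 1.0) * u_a, a * u_W)
print(f"MC : y = {y_mc:.3f}, u = {u_mc:.3f}, 95% interval [{lo:.3f}, {hi:.3f}]")
print(f"LPU: u = {u_lpu:.3f}")
```
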
Exploiting Model Continuity in Agent-Based Cyber-Physical Systems

This work develops an agent- and control-based approach to the modeling, analysis and implementation of Cyber-Physical Systems (CPSs). Novel in this software engineering approach is its support for model continuity, that is, the possibility of carrying the same model from simulation-based property analysis down to design, implementation and real-time execution. The paper introduces the basic concepts of the methodology, illustrates some implementation issues and presents a case study on power management in a smart micro-grid.

Domenico L. Carní, Franco Cicirelli, Domenico Grimaldi, Libero Nigro, Paolo F. Sciammarella
Design of Processor in Memory with RISC-modified Memory-Centric Architecture

Technological developments in computer hardware and software have resulted in a wide range of fast and cheap single- and multi-core processors, compilers, operating systems and programming languages, each with its own benefits and drawbacks, but with the common goal of increasing overall computer system performance. Although the number of transistors on a chip continues to double roughly every two years, it is still difficult to improve the performance of sequential processors, and even of parallel multi-core and multi-processor shared-memory systems. The main reason is the ever-increasing gap between processor and memory speeds in the classical von Neumann computer model. In this paper we therefore propose a novel memory-centric approach to computing in a RISC-modified processor core that includes on-chip memory which can be accessed directly, without general-purpose registers (GPRs) or cache memory. Since the proposed RISC-modified core allows high on-chip memory bandwidth and low latency, we examine its performance on applications with different arithmetic intensity (dense matrix multiplication, Fast Fourier Transform - FFT, and Partial Differential Equations - PDEs), according to the Roofline model. The results show that the proposed memory-centric RISC-modified core outperforms the initial RISC-based MIPS processor core for problems with medium or high arithmetic intensity.

Danijela Efnusheva, Aristotel Tentov
CARIC: A Novel Modeling of Combinatorial Approach for Radiological Image Compression

Over the last decade, compression algorithms have played a significant role in reducing the size of radiological images. A closer look at existing work, however, shows a substantial trade-off between compression performance and data quality in the reconstruction process. We review the existing research and outline these problems and trade-offs. This paper presents a framework called CARIC (Combinatorial Approach for Radiological Image Compression) that combines lossy and lossless compression schemes within a single radiological image. Using a large number of images from different radiological modalities, we compare CARIC with recent, relevant compression work and find that it offers a better compression ratio together with a good balance between reconstructed image quality and response time.

M. Lakshminarayana, Mrinal Sarvagya
Torque Characteristics of Antagonistic Pneumatic Muscle Actuator with an Oval Cam

Rotational motion generated by an antagonistic pneumatic muscle actuator is conventionally realized with a circular pulley over which a flexible strip is strung, with its ends connected to the artificial muscles. In this solution, however, the torque and stiffness of the actuator decrease as the rotation of the actuator arm increases, owing to the nonlinear decrease of the muscle forces with their contraction. By applying an oval cam, a smaller decrease in torque, and thus greater and more symmetrical stiffness of the actuator with pneumatic artificial muscles, can be obtained.

Mária Tóthová, Alena Vagaská
Adaptive Control System of a Robot Manipulator Based on a Decentralized Position-Dependent PID Controller

The paper describes an approach to adaptive feedback control of a robot manipulator, based on partitioning of the joint space into segments. Within each segment the robot is controlled as a decoupled linear system by means of conventional PID controllers. To achieve continuity of control variables the segments are represented as fuzzy sets. The controller settings are adapted by online identification from past measurements of position and control signals.

Jan Cvejn, Jiří Tvrdík
Possibilities of Process Modeling in Pedagogical Cybernetics Based on Control-System-Theory Approaches

This paper aims to extend the connection between technical and pedagogical cybernetics. In particular, process-modeling possibilities from control-system theory are applied to pedagogical research. In pedagogy, cybernetic processes are rarely modeled mathematically, because the classical approaches are not widely based on the mathematical background of control-system theory; models are usually presented in schematic form, or the measured variables are described by a set of statistical parameters. The feedback-control model is well established in pedagogical cybernetics. For the purposes of pedagogical research, this paper presents possibilities for process modeling using control-system-theory approaches. The presented approach can be verified using statistical methods.

Tomas Barot
Calibration of Low-Cost Three Axis Magnetometer with Differential Evolution

Magnetometers are used in a wide range of engineering applications. However, the accuracy of magnetometer readings is influenced by many factors, such as sensor errors (scale factors, non-orthogonality and offsets) and magnetic deviations (soft-iron and hard-iron interference); magnetic calibration of the magnetometer is therefore necessary before it is used in a specific application. This paper describes a calibration method for a three-axis, low-cost MEMS (Micro-Electro-Mechanical Systems) magnetometer. The method uses the differential evolution (DE) algorithm to determine the transformation matrix (scale factor, misalignment error and soft-iron interference) and the bias offset (hard-iron interference). Its performance is analyzed in an experiment on the low-cost three-axis magnetometer LSM303DLHC and compared with the traditional least-squares ellipsoid fitting method. The magnetometer readings were obtained while rotating the sensor through arbitrary orientations. The experimental results show that the calibration error is lowest with DE.

Ales Kuncar, Martin Sysel, Tomas Urbanek
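
The calibration idea above (fit a transformation matrix and bias so that calibrated readings lie on a unit sphere) can be sketched with SciPy's differential evolution on synthetic data; the soft-iron matrix, hard-iron offset, parameter bounds and noise level below are invented for illustration and do not reproduce the authors' experiment.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(2)

# Synthetic raw readings (stand-in for logged magnetometer data).
true_field = rng.normal(size=(400, 3))
true_field /= np.linalg.norm(true_field, axis=1, keepdims=True)   # unit sphere
soft_iron = np.array([[1.10, 0.05, 0.00],
                      [0.05, 0.95, 0.02],
                      [0.00, 0.02, 1.20]])
hard_iron = np.array([0.20, -0.10, 0.05])
raw = true_field @ np.linalg.inv(soft_iron).T + hard_iron
raw += 0.005 * rng.normal(size=raw.shape)                          # sensor noise

# Calibration model: m_cal = T (m_raw - b); ||m_cal|| should equal 1.
def unpack(p):
    T = np.array([[p[0], p[1], p[2]],
                  [p[1], p[3], p[4]],
                  [p[2], p[4], p[5]]])      # symmetric scale/soft-iron matrix
    b = p[6:9]                              # hard-iron bias
    return T, b

def cost(p):
    T, b = unpack(p)
    cal = (raw - b) @ T.T
    return np.mean((np.sum(cal**2, axis=1) - 1.0) ** 2)

bounds = [(0.5, 1.5), (-0.3, 0.3), (-0.3, 0.3),
          (0.5, 1.5), (-0.3, 0.3), (0.5, 1.5),
          (-0.5, 0.5), (-0.5, 0.5), (-0.5, 0.5)]
res = differential_evolution(cost, bounds, seed=3, tol=1e-10, maxiter=400)
T_hat, b_hat = unpack(res.x)
print("estimated bias:", b_hat, " residual:", res.fun)
```
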
The Technique of Multi-criteria Decision-Making in the Study of Semi-structured Problems

The article proposes using additional information from the decision maker (DM) to remove criteria uncertainty when making decisions on semi-structured problems, which are characterized by incomplete information and numerous qualitative and quantitative selection criteria. This information is represented by production models and processed using methods from experiment planning theory and parametric fuzzy measures. The essence of the proposed methodology is to combine ideas from verbal decision analysis (simple and complex basic survey situations) with procedures for converting qualitative indicators into quantitative ones, based on the mathematical apparatus of fuzzy sets, relations and measures, and on the theory of experiment planning. A parametric fuzzy measure is constructed to reduce the number of calls to the DM during the expert survey and to control the consistency of the DM's statements over the set of production rules representing the basic survey situations. This parametric fuzzy measure allows the DM's preferences over the criteria for achieving the stated goal to be computed when making management decisions.

Alexander N. Pavlov, Dmitry A. Pavlov, Alexey A. Pavlov, Alexey A. Slin’ko
AnyLogic-Based Discrete Event Simulation Model of Railway Junction

Increasing the competitiveness and effectiveness of the railway transportation network of the Russian Federation is an important problem whose solution requires modernizing system elements such as railway junctions. Modern computer technologies can support correct project decision-making, but appropriate software instruments for analysis must be developed. This paper presents a mathematical model of the Ekaterinburg railway junction developed with the AnyLogic simulation tool. The model was constructed using a discrete event approach and queuing network techniques, which make it possible to estimate railroad operation indices and discover bottlenecks in the structure of the junction. Estimation errors of the output parameters were calculated from the simulation results, and their analysis supports the conclusion that the model is adequate.

Alexander Lyubchenko, Stanislav Bartosh, Evgeny Kopytov, Alexander Shiler, Askar Kildibekov
The Parameters List for Multihop Wireless Networks Cross-Layer Routing Metric

Multihop wireless networks are a promising direction in communication networks. Because of link instability, the main problem in such networks is finding the best route, and different parameters are used for route estimation. This paper attempts to estimate the influence of different parameters on the performance of wireless multihop networks. The parameter list is formed by generalizing the authors' past experience in developing cross-layer routing metrics. We provide the results of Ns-3 experiments estimating the influence of the individual parameters on network performance, and propose a list of parameters to be considered when designing routing metrics.

I. O. Datyev, A. A. Pavlov, M. G. Shishaev
An Improved Active Queue Management Algorithm for Time Fairness in Multirate 802.11 WLAN

In a multirate 802.11 wireless local area network (WLAN), time unfairness is an inherent problem: slow stations occupy more time to transfer data and leave less time for fast stations, which is known as the performance anomaly. This paper proposes an improved active queue management (IAQM) algorithm for fairly sharing network resources among all contending stations. By setting a different queue length and drop rate for each data flow through the access point (AP) according to its transmission rate, each station is guaranteed an equal share of channel usage time; time fairness is thus achieved and aggregate throughput is improved. Both analytical and simulation results validate the effectiveness of the proposed IAQM algorithm, which achieves good time fairness and a 30% improvement in aggregate throughput.

Jianjun Lei, Yingwei Wu, Xu Zhang
Control Theory Application to Complex Technical Objects Scheduling Problem Solving

We present a new model for optimal scheduling of complex technical objects (CTO). CTO is a networked controlled system that is described through differential equations based on a dynamic interpretation of the job execution. The problem is represented as a special case of the job shop scheduling problem with dynamically distributed jobs. The approach is based on a natural dynamic decomposition of the problem and its solution with the help of a modified form of continuous maximum principle blended with combinatorial optimization.

Boris Sokolov, Inna Trofimova, Dmitry Ivanov, Alekcey Krylov
Protective Correction of the Flow in Mechanical Transport System

In this paper, we examine the problem of minimizing the cost of cargo movement in mechanical transport systems. The cost index includes a component reflecting the loss incurred in disaster recovery. We analyze ways to reduce the possibility of accident initiation and consider the features of adaptive routing, defining the conditions under which it can be used to manage the intensity of the flow through separate network segments. We propose an adaptive routing algorithm with protective correction of the flow. The essence of the algorithm is to assign a high transfer cost to selected segments; this value is fixed as a periodic event within a specified time window. We consider the factors that determine the value of the protective correction parameters and identify the problem of finding their best values. We propose a mechanical transport system structure that includes an intelligent module for issuing protective correction instructions, and describe the working principles of this module, which is based on case analysis with a hierarchical storage system of precedents.

Stanislav Belyakov, Marina Savelyeva
Efficient MapReduce Matrix Multiplication with Optimized Mapper Set

The efficiency of matrix multiplication is a popular research topic, given that matrices comprise large volumes of data in computer applications and other fields. The proposed schemes utilize data blocks to balance the processing overhead caused by a small mapper set against the I/O overhead caused by a large mapper set; balancing between these two, however, consumes time and resources. The proposed technique uses a single MapReduce job and a pre-processing step. The pre-processing step reads an element from the first matrix and a block from the second matrix and merges them into one file; the map task then performs the multiplication operations, whereas the reduce task performs the summation. Comparison with existing schemes reveals that the proposed schemes consume less time and memory.

Methaq Kadhum, Mais Haj Qasem, Azzam Sleit, Ahamd Sharieh
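
A local, in-memory emulation of the single-job scheme described above may help fix the idea: pre-processing pairs each element of the first matrix with the matching row block of the second, the map step emits partial products, and the reduce step sums them. The sketch below is illustrative only and does not use an actual Hadoop/MapReduce runtime.

```python
from collections import defaultdict
import numpy as np

def mapreduce_matmul(A, B):
    n, m = A.shape
    _, p = B.shape
    # Pre-processing: pair each element of A with the matching row block of B.
    records = [((i, j, A[i, j]), B[j, :]) for i in range(n) for j in range(m)]
    # Map: emit one partial product per output cell touched by this record.
    emitted = []
    for (i, j, a), b_row in records:
        for k in range(p):
            emitted.append(((i, k), a * b_row[k]))
    # Shuffle: group partial products by output cell.
    groups = defaultdict(list)
    for key, value in emitted:
        groups[key].append(value)
    # Reduce: sum the partial products for each cell.
    C = np.zeros((n, p))
    for (i, k), values in groups.items():
        C[i, k] = sum(values)
    return C

A = np.arange(6, dtype=float).reshape(2, 3)
B = np.arange(12, dtype=float).reshape(3, 4)
print(np.allclose(mapreduce_matmul(A, B), A @ B))   # True
```
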
Control of Time-Delay Systems with Parametric Uncertainty via Two Feedback Controllers

The main goal of this contribution is to present the application of a polynomial-approach-based design of two feedback controllers to time-delay plants with parametric uncertainty. Robust stability of the designed control systems is analyzed via the families of their closed-loop characteristic quasi-polynomials, more specifically by a graphical method which combines the value-set concept with the zero exclusion condition. In the simulation example, a second-order-plus-time-delay plant with uncertain parameters is robustly stabilized (or intentionally brought to the robust stability border).

Radek Matušů, Roman Prokop
Maze Navigation on Ball & Plate Model

Today's CCD and CMOS image sensors are advanced enough to satisfy the need for accurate object detection and tracking, which has led to the adoption of computer vision in industry, transportation, medicine, robotics and other sectors. The aim of this paper is to present the steps needed to determine the correct path through a maze constructed on a plate and to navigate a ball along this path. The image processing techniques used here are simple enough to understand, so students can easily implement them to further extend the educational capabilities of the Ball & Plate model. The paper also shows the use of the watershed transform, which can be extended to similar problems. The added maze thus provides an excellent application for the model and simulates real-world issues in research and development.

Lubos Spacek, Vladimir Bobal, Jiri Vojtesek
AEOC: A Novel Algorithm for Energy Optimization Clustering in Wireless Sensor Network

The area of Wireless Sensor Networks (WSNs) has witnessed various research contributions over the past decade aimed at mitigating energy dissipation to ensure energy-efficient routing and clustering. We find that existing techniques do not adequately address energy efficiency or enhance clustering performance in WSNs. Hence, this paper presents a novel approach in which energy efficiency and clustering optimization are achieved by incorporating a node selection mechanism. The paper discusses the architecture and algorithm of the proposed technique together with the methodology adopted to accomplish the work. Finally, it presents the outcomes of the study and, by benchmarking the technique against existing standards, highlights its better clustering performance.

C. Parvathi, Suresha
Large Networks of Diameter Two Based on Cayley Graphs

In this contribution we present a construction of large networks of diameter two and of order $$\frac{1}{2}d^2$$ for every degree $$d\ge 8$$, based on Cayley graphs with surprisingly simple underlying groups. For several small degrees we construct Cayley graphs of diameter two and of order greater than $$\frac{2}{3}$$ of Moore bound and we show that Cayley graphs of degrees $$d\in \{16,17,18,23,24,31,\dots ,35\}$$ constructed in this paper are the largest currently known vertex-transitive graphs of diameter two.

Marcel Abas
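
As a toy companion to the abstract above (not the authors' construction or groups), the sketch below builds the Cayley graph of the cyclic group Z_13 with the connection set {±1, ±5} and checks its diameter with networkx.

```python
import networkx as nx

def cayley_graph(n, connection_set):
    """Cayley graph of the cyclic group Z_n with a symmetric connection set."""
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for g in range(n):
        for s in connection_set:
            G.add_edge(g, (g + s) % n)
    return G

# Illustrative circulant example only; the paper's underlying groups differ.
G = cayley_graph(13, {1, -1, 5, -5})
print(nx.diameter(G), G.number_of_nodes(), G.degree(0))   # expected: 2 13 4
```
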
Integrated S-AODV and DEL-CMAC Algorithm of Spatio Temporal Cross-Layer in Sensor Network

Cooperative Medium Access Protocol (CMAC) schemes have been found to contribute to energy efficiency among sensor nodes; however, little research demonstrates their applicability when ad hoc routing is adopted. We review existing work on cross-layer approaches for maximizing layer interactivity and enhancing computational efficiency in Wireless Sensor Networks (WSNs). The proposed system addresses this by combining on-demand ad hoc routing with a CMAC scheme over a multihop network, using a cooperative transmission mechanism to bridge the gap between the network layer and the MAC layer. The proposed system is found to achieve better quality-of-service (QoS) performance than MAC protocols commonly used in WSNs.

Shoba Chandra, Suresha Talanki, Kiran Kumari Patil
Robust Constrained Control: Optimization of 1 vs. 2 Closed-Loop Poles

This paper presents an optimization-based technique for designing a robust control system in the presence of control input limitations. The methodology uses an algebraic approach resulting in polynomial equations and a pole-placement problem. The closed-loop poles are optimized numerically with the help of MATLAB and its simulation and optimization toolboxes; suitable performance criteria and a design procedure are suggested for this purpose. The cases of one- and two-parameter optimization are illustrated on a nonlinear servo-system control design using both simulation and real-time experiments, and the results support the proposed methodology.

Frantisek Gazdos
Machine Learning Approaches to Electricity Consumption Forecasting in Automated Metering Infrastructure (AMI) Systems: An Empirical Study

In a smart grid, the implementation of value-added services such as distribution automation (DA) and demand response (DR) [1] relies heavily on the availability of accurate electricity consumption forecasts. Machine learning based forecasting systems, owing to their ability to handle nonlinear patterns, appear promising for this purpose. This study presents an empirical evaluation of eight machine learning based electricity consumption forecasting systems built on Extreme Learning Machines (ELM), Ensemble Regression Trees (ERT), Artificial Neural Networks (ANNs) and regression. The forecasting systems are validated on consumption data collected from 5275 users. The results indicate that ELM-based systems are not only more accurate than the other systems considered but also considerably faster.

A. Jayanth Balaji, D. S. Harish Ram, Binoy B. Nair
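
Of the model families compared above, the extreme learning machine has the simplest training rule: a fixed random hidden layer followed by a least-squares fit of the output weights. The sketch below applies that rule to synthetic, consumption-like data; the lag features, network size and data are assumptions and not the authors' experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for hourly consumption: 24 lagged values -> next value.
t = np.arange(2000)
load = 10 + 3 * np.sin(2 * np.pi * t / 24) + 0.3 * rng.normal(size=t.size)
X = np.column_stack([load[i:i - 24] for i in range(24)])   # lag features
y = load[24:]

split = 1500
X_tr, y_tr, X_te, y_te = X[:split], y[:split], X[split:], y[split:]

# Extreme learning machine: random hidden layer, least-squares output weights.
n_hidden = 100
W = rng.normal(scale=0.3, size=(X.shape[1], n_hidden))
b = rng.normal(scale=0.3, size=n_hidden)
H_tr = np.tanh(X_tr @ W + b)
beta, *_ = np.linalg.lstsq(H_tr, y_tr, rcond=None)          # output weights

pred = np.tanh(X_te @ W + b) @ beta
print("test MAE:", np.mean(np.abs(pred - y_te)))
```
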
Simulation of a Single-Component System Using the Trajectories Method Taking into Account the Scheduling Preventive Maintenance

This article describes the functioning of a single-component system with scheduled preventive maintenance. A semi-Markov model is constructed using the trajectories method under the assumption that the time of preventive maintenance is much less than the time between failures and the time of system recovery; moreover, since preventive maintenance is usually conducted outside the working time fund, it is reasonable to assume its execution is instantaneous. The trajectories method is applicable only to discrete systems, so when modeling the present system, which has a continuous phase space of states, a phase consolidation algorithm is applied to pass to a system with a discrete phase space. Unlike known methods, this approach yields an exact solution for the distribution function in terms of Laplace transforms. To validate the results, the mathematical expectation of the time spent in the subset of operational states obtained from the derived distribution function is compared with that based on the theorem on the average stationary sojourn time of a semi-Markov process in a subset of states.

Mikhail V. Zamoryonov, Vadim Ya. Kopp, Olga V. Chengar, Yuri L. Rapatskiy
Analysis of the IoT WiFi Mesh Network

This paper presents a concept for designing wireless sensor networks in a mesh topology that perform IoT tasks using popular WiFi standards. Cheap IoT modules involve a compromise between reliability and price, and the phenomena that occur in a real wireless sensor network depend on many factors that are sometimes not well defined. Statistical analysis of packet delays and failure rates for different scenario paths in our experimental network helps to identify anomalous nodes.

Piotr Lech, Przemysław Włodarski
The Experience of Building Cognitive User Interfaces of Multidomain Information Systems Based on the Mental Model of Users

The article describes the methodological basis for synthesizing cognitive interfaces for multidomain information systems. A definition of the cognitive user interface and approaches to its formal assessment are given. Special attention is paid to the semantic and perceptual aspects of cognitive interfaces and to the concepts of relevance, pertinence and user stereotypes. The application of user experience to the task of information retrieval is described. One possible way of obtaining and keeping track of user preferences is to construct a model of user interests in the form of a formalized mental model. An approach which makes it possible to increase the relevance of search results is presented.

M. G. Shishaev, V. V. Dikovitsky, L. V. Lapochkina
Implementation of Synthetic Aperture Radar and Geoinformation Technologies in the Complex Monitoring and Managing of the Mining Industry Objects

Design, planning and management of opencast and underground mining require safety control of mining operations. Geodynamic monitoring of mining areas is necessary for operational forecasting and prevention of dangerous deformation processes. The identification of geodynamic active zones and forecasting of geodynamic risks are based on systematic observations of the surface and mining facilities. A promising method of obtaining timely spatial information to solve the problems mentioned is the satellite radar imagery. The integration of radar products and intelligent information systems improves the efficiency and accuracy of data analysis. The paper presents the methods of radar image processing in order to conduct comprehensive monitoring of the Earth’s surface and infrastructure in mining enterprises. For efficient use of thematic processing products the results were placed on the web server in the information-analytical system “RegionView” providing distributed access to spatial data through the web interface and standard protocols.

Maria R. Ponomarenko, Ilya Yu. Pimanov
Lightning Impulse Voltage Evaluation

This research developed software for evaluating lightning impulse voltage parameters. Full lightning impulse voltages from 9 cases in the TDG program were used as references for testing the software. The software creates two types of voltage waveform: a mean curve, obtained with a Kalman filter, and an approximate real curve, obtained with a QR algorithm. The experimental results show that the software can evaluate all lightning impulse parameters in accordance with the IEC 61083-2 standard in every case.

Nopphadon Khodpun, Krisada Vilailak
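
The mean-curve extraction mentioned above can be illustrated with a basic scalar Kalman filter using a random-walk state model applied to a synthetic noisy double-exponential impulse; the waveform constants and noise variances below are invented and the sketch is not the authors' implementation.

```python
import numpy as np

# Synthetic full lightning impulse (double exponential) plus measurement noise.
t = np.linspace(0, 100e-6, 2000)                    # seconds
clean = 1.037 * (np.exp(-t / 68e-6) - np.exp(-t / 0.405e-6))
rng = np.random.default_rng(4)
measured = clean + 0.02 * rng.normal(size=t.size)

# Scalar Kalman filter with a random-walk state model x_k = x_{k-1} + w_k.
Q, R = 1e-4, 4e-4          # process and measurement noise variances (invented)
x, P = 0.0, 1.0
mean_curve = np.empty_like(measured)
for k, z in enumerate(measured):
    P = P + Q                       # predict
    K = P / (P + R)                 # Kalman gain
    x = x + K * (z - x)             # update with measurement z
    P = (1 - K) * P
    mean_curve[k] = x

print("RMS error vs. clean impulse:", np.sqrt(np.mean((mean_curve - clean)**2)))
```
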
Pattern Recognition for Predictive Analysis in Automotive Industry

Predictive maintenance (PdM) techniques are designed to help identify the condition of devices in order to predict when maintenance should be performed. The ultimate goal of PdM is to perform maintenance at a scheduled point in time when the maintenance activity is most cost-effective and before the equipment's performance falls below a threshold. Reducing service costs and losses due to downtime is currently one of the ways to increase a company's profits and market success. We attempted to identify problem messages and failures in an example manufacturing data set from car body work: two different data sets were joined, and we designed a process to identify message and failure alerts preceding errors.

Veronika Simoncicova, Lukas Hrcka, Lukas Spendla, Pavol Tanuska, Pavel Vazan
Methodology and Structure Adaptation Algorithm for Complex Technical Objects Reconfiguration Models

The complex technical object (CTO) is the main object of investigation. The paper shows how the problem of CTO functional reconfiguration can be solved in terms of the proposed CTO structural dynamics control theory. A general formal description of CTO structure-dynamics control (SDC), including functional reconfiguration, is suggested, and a new approach to structure adaptation of CTO functional reconfiguration models is developed. This approach is based on the concept of integrated modeling and simulation.

Anton Pashchenko, Pavel Okhtilev, Semen Potrysaev, Yury Ipatov, Boris Sokolov
Characterization of the Current Conditions of the ITSA Data Centers According to Standards of the Green Data Centers Friendly to the Environment

A data center is a specially conditioned space that houses equipment and systems, typically including air conditioning, stabilized and uninterruptible power supplies, structured cabling, fire prevention systems, access control systems, surveillance cameras, fire alarms, and temperature and humidity control. The purpose of this document is to explain in detail how to improve the condition and use of the devices a data center must contain, so as to make it more environmentally friendly while reducing the company's operating costs. Measurements from the data centers of the University Institution ITSA are shown as an example.

Leonel Hernandez, Genett Jimenez
Game-Based Learning: How to Make Math More Attractive by Using of Serious Game

The dynamics of change in information technology open the door to new methods of education. Communication bandwidth (networks), computers, laptops, tablets and mobile phones (hardware), and new generations of operating systems, programming languages and game engines (software) offer new possibilities; devices and networks keep improving and getting faster while prices fall. These developments started a new area: game-based learning. In this paper we discuss how to make math more attractive through a serious game. Math is essential in many fields, including natural science, engineering and medicine, yet it is generally not very popular with pupils. How this could be changed is not an easy question, but one possibility is serious games for math. This is why we decided to design and develop a serious game focused on math, specifically on attention and math word problems. The multiplatform game engine Unity 3D was used for development of the game, and Blender was used for real-time simulations. We describe the design and development steps, from the graphical environment and the design of the buildings to music and sounds.

Marián Hosťovecký, Martin Novák
Intelligent Telemetry Data Analysis of Small Satellites

The paper presents an intelligent telemetry data analysis software module and methods for the onboard equipment of small satellites. The suggested module consists of feature selection, data preprocessing, clustering and prediction components. These components are based on a genetic-algorithm feature selection method, a dynamic streaming clustering method, a Kohonen self-organizing map, and image-processing-based clustering and prediction methods for telemetry data of the onboard equipment of small satellites. Computational experiments and testing of the developed methods and software tools were performed on processed telemetry data from the navigation device of the onboard equipment of a small satellite and showed sufficiently high efficiency and good results.

Vadim Skobtsov, Natalia Novoselova, Vyacheslav Arhipov, Semyon Potryasaev
A Static Calibration of MEMS Accelerometers

The paper describes micro-electro-mechanical systems (MEMS) accelerometers and their calibration for reliable and accurate measurements. The first part discusses the physics of acceleration and accelerometers. The next part describes one of the calibration techniques, and the final section shows the static calibration of the LSM303D 3D digital linear acceleration sensor.

Martin Sysel
A Survey of Optimization Techniques for Distributed Job Shop Scheduling Problems in Multi-factories

The Distributed Job Shop Scheduling Problem is one of the hardest well-known combinatorial optimization problems. Over the last two decades it has captured the interest of a number of researchers, and various methods have been employed to study it. The scope of this paper is to give an overview of pioneering studies on solving the Distributed Job Shop Scheduling Problem using different techniques aimed at a specified objective function. Resolution approaches used to solve the problem are reviewed, and a classification of the employed techniques is given.

Imen Chaouch, Olfa Belkahla Driss, Khaled Ghedira
Big Data Process Advancement

In this era, information is maintained on a variety of sources, and data is available in different patterns and forms. Combining and processing all the different types of datasets in a heterogeneous database is nearly impossible, especially if the information is moving and changing continuously across many sources. Information is represented in different modules, and processing data from various sources can nowadays lead to critical risk-assessment results. Big Data is a concept introduced to cover the use of different techniques for processing the given information towards the desired goals. Processing a huge amount of data is a big challenge for a single machine; in this paper we discuss this idea and demonstrate a module of clustered machines working as a single entity, in parallel and cohesively, to achieve the desired tasks. The idea is to combine machines of different processing specifications and power into a single cluster and then distribute the input data fairly to the most powerful and best-suited machine in the cluster. The distribution of input data and the storage mechanism depend on machine specification, data processing power, load balancing and data type. We present our suggested solution method using an Event-B based approach; key features of Event-B are the use of set theory as a modelling notation, and we propose using the Rodin modelling tool for Event-B, which integrates modelling and proving.

Roman Jasek, Said Krayem, Petr Zacek
Proving the Effectiveness of Negotiation Protocols KQML in Multi-agent Systems Using Event-B

Multi-agent systems (MAS) provide a good basis for building complex systems, and in MAS negotiation is a key form of interaction that enables agents to arrive at a final agreement. We present an Event-B based approach to reasoning about negotiation protocols in multi-agent systems. Key features of Event-B are the use of set theory as a modeling notation and the fact that it is a formal method applicable to the development of reactive distributed systems; we propose using the Rodin modeling tool for Event-B, which integrates modeling and proving.

Ammar Alhaj Ali, Roman Jasek, Said Krayem, Petr Zacek
Correlation Analysis of Decay Centrality

The decay centrality (DEC) metric for a vertex weighs the distance of the vertex to the rest of the vertices on the basis of a decay parameter (0 < δ < 1). In this paper, we analyze a suite of 48 real-world networks and compute the DEC values for δ values ranging from 0.01 to 0.99 for each of these networks. We explore the presence of a particular or range of δ values within which there is a very strong positive correlation (Pearson’s correlation coefficient of 0.8 or above) between DEC and each of the four commonly studied centrality metrics: degree centrality (DEG), eigenvector centrality (EVC), betweenness centrality (BWC) and closeness centrality (CLC). We observe 0.01 to be the most appropriate δ value for which there exists a very strong positive correlation between DEC and each of DEG, EVC and BWC for at least 50% of the networks.

Natarajan Meghanathan
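
Decay centrality as defined above, DEC(v) = Σ_{u≠v} δ^{d(v,u)}, can be computed directly from single-source shortest path lengths; the sketch below evaluates it on an arbitrary sample graph (not one of the paper's 48 networks) and reports its Pearson correlation with degree centrality for a few δ values.

```python
import networkx as nx
from scipy.stats import pearsonr

def decay_centrality(G, delta):
    """DEC(v) = sum over u != v of delta ** d(v, u)."""
    dec = {}
    for v in G:
        lengths = nx.single_source_shortest_path_length(G, v)
        dec[v] = sum(delta ** d for u, d in lengths.items() if u != v)
    return dec

# Arbitrary sample network, not one of the 48 real-world networks of the paper.
G = nx.karate_club_graph()
deg = nx.degree_centrality(G)
for delta in (0.01, 0.5, 0.99):
    dec = decay_centrality(G, delta)
    r, _ = pearsonr([dec[v] for v in G], [deg[v] for v in G])
    print(f"delta = {delta:4.2f}  Pearson r(DEC, DEG) = {r:.3f}")
```
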
Virtual Lab: An Adequate Multi-modality Learning Channel for Enhancing Students’ Perception in Chemistry

This paper investigates the instructional effectiveness of learning modalities towards enhancing learners’ conceptual understanding of crystal field theory (CFT) using a multimedia rich platform such as Virtual Laboratory. The virtual laboratory in the present work has integrated modalities such as graphics, images, animations, videos and simulations for simultaneous demonstration of concepts related to CFT. This study aims to evaluate the impact of these modalities on the learning outcomes of visual, auditory and kinesthetic learners irrespective of their preferred learning modality. A case study of 524 undergraduate chemistry students from four higher educational institutes was carried out as part of the evaluation. Assessment of knowledge, conceptual understanding, application and analysis with and without the virtual lab platform was done using assessment quizzes. Results showed that students that underwent a combination of visual, auditory and kinesthetic learning modalities within virtual lab environment had significantly improved their understanding resulting in better performance. The study also characterizes the effectiveness of integrated modalities on the enhancement of learning amongst the three types of learners.

Krishnashree Achuthan, Smitha S. Murali
LDPC Binary Vectors Coding Enhances Transmissions and Memories Reliability

The paper deals with the research and implementation of memory-coded information modulated with highly effective concatenated codes, represented by LDPC (Low Density Parity Check) codes. Optimization of the coding parameters is solved with respect to implementation on a semi-custom gate-array integrated circuit and a highly effective ARM processor realized as an SoC (System on Chip); vendors now offer many types of programmable circuits and software environments for this technique. The basic modelling technique is the creation and simulation of special architectures described in the C/C++ and SystemC languages. The basic idea, an instruction set extension of the ARM processor, is realized with freely programmable gates as a special data-flow-controlled execution unit and an additional instruction decoder.

Tomas Knot, Karel Vlcek
Backmatter
Metadata

Title: Cybernetics and Mathematics Applications in Intelligent Systems
Editors: Radek Silhavy, Roman Senkerik, Zuzana Kominkova Oplatkova, Zdenka Prokopova, Petr Silhavy
Copyright Year: 2017
Electronic ISBN: 978-3-319-57264-2
Print ISBN: 978-3-319-57263-5
DOI: https://doi.org/10.1007/978-3-319-57264-2
