
About This Book

This proceedings volume contains selected revised and extended research articles written by researchers who participated in the World Congress on Engineering and Computer Science 2015, held in San Francisco, USA, 21-23 October 2015. Topics covered include engineering mathematics, electrical engineering, circuits, communications systems, computer science, chemical engineering, systems engineering, manufacturing engineering, and industrial applications.

The book offers the reader an overview of the state of the art in engineering technologies, computer science, systems engineering and applications, and will serve as an excellent reference work for researchers and graduate students working in these fields.



Chapter 1. Estimate the Impact of Different Heat Capacity Approximation Methods on the Numerical Results During Computer Simulation of Solidification

The article presents the results of numerical modeling of the solidification process. We focused on comparing the results of calculations for various methods of approximating the effective thermal capacity used in the apparent heat capacity formulation of solidification. The apparent heat capacity formulation is one of the enthalpy formulations of solidification, which allows effective simulation of casting solidification with a one-domain approach. In particular, we have shown that the choice among the four tested approximation methods does not significantly affect the results: differences in the resulting temperature did not exceed a few degrees. However, the choice can affect the time needed to execute the numerical simulations. All presented numerical algorithms were implemented in our in-house software, which is based on very efficient and scalable libraries that ensure applicability to real-world engineering problems.

Robert Dyja, Elzbieta Gawronska, Andrzej Grosser, Piotr Jeruszka, Norbert Sczygiol
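As a rough illustration of the apparent heat capacity idea the chapter builds on, the sketch below smears the latent heat uniformly over the solidification interval; the piecewise form and all numeric values are hypothetical assumptions, not taken from the authors' software.

```python
def apparent_heat_capacity(T, c_solid, c_liquid, T_sol, T_liq, latent_heat):
    """Piecewise apparent heat capacity: the latent heat is smeared
    uniformly over the solidification interval [T_sol, T_liq]."""
    if T < T_sol:
        return c_solid
    if T > T_liq:
        return c_liquid
    # mushy zone: average sensible capacity plus latent-heat contribution
    return 0.5 * (c_solid + c_liquid) + latent_heat / (T_liq - T_sol)

# illustrative aluminium-like values (hypothetical numbers)
print(apparent_heat_capacity(650.0, 900.0, 1100.0, 570.0, 660.0, 3.9e5))
```

Other approximations differ only in how the latent-heat term is distributed over the interval, which is exactly the choice the chapter shows to have little effect on the computed temperatures.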

Chapter 2. Analysis of Systemic Risk: A Dynamic Vine Copula-Based ARMA-EGARCH Model

After the nightmare of the 2008 subprime financial crisis, systemic risk has become a major concern for policymakers and supervisory authorities. In this paper, we use the S&P 500 sector indices and the S&P 500 index as our components and system, respectively. A dynamic model is presented for analyzing each U.S. equity sector's risk contribution (VaR ratio), the ratio of the Value-at-Risk of a sector to the Value-at-Risk of the system (S&P 500 index), with dynamic vine copula-based ARMA-EGARCH(1, 1) modeling. Vine copula modeling not only has the advantage of extending easily to higher dimensions, but also provides a more flexible measure to capture asymmetric dependence among assets. We investigate systemic risk in 10 S&P 500 sector indices in the U.S. stock market by forecasting one-day-ahead VaR and one-day-ahead VaR ratios during the 2008 subprime crisis. Our evidence reveals that the vine copula-based ARMA-EGARCH(1, 1) is an appropriate model to forecast and analyze systemic risk.

Kuan-Heng Chen, Khaldoun Khashanah
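The VaR ratio the chapter analyzes can be illustrated with a plain historical-simulation estimate; the chapter itself uses dynamic vine copula ARMA-EGARCH forecasts, so the simple quantile estimator and the synthetic returns below are stand-in assumptions for illustration only.

```python
import random

def var_historical(returns, alpha=0.99):
    """One-day Value-at-Risk as the empirical lower-tail quantile of
    returns, reported as a positive loss."""
    r = sorted(returns)
    idx = int((1.0 - alpha) * len(r))
    return -r[idx]

def var_ratio(sector_returns, system_returns, alpha=0.99):
    """Sector risk contribution: VaR of the sector over VaR of the system."""
    return var_historical(sector_returns, alpha) / var_historical(system_returns, alpha)

rng = random.Random(0)
system = [rng.gauss(0.0, 0.010) for _ in range(2500)]   # stand-in for index returns
sector = [rng.gauss(0.0, 0.015) for _ in range(2500)]   # a more volatile sector
print(var_ratio(sector, system))
```

A ratio above 1 flags a sector whose tail risk exceeds that of the system, which is the quantity the chapter tracks dynamically through the crisis period.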

Chapter 3. MOESP_AOKI_VAR: Algorithm for Space State Identification of Non-stationary Multivariable Noisy Linear Systems

The main objective of this work is to develop a recursive algorithm for state-space identification of linear stochastic discrete multivariable non-stationary systems; a computational process called MOESP_AOKI_VAR is proposed and implemented to achieve this. The proposed algorithm is based on two subspace methods: Multivariable Output-Error State Space (MOESP), used for computational modelling of systems, and an algorithm developed by Masanao Aoki for computational modelling of time series, which we call the Aoki algorithm.

Johanna B. Tobar, Celso P. Bottura

Chapter 4. Comparative the Performance of Control Charts Based on Copulas

Control charts are quality control procedures to detect a change in a manufactured product. They are used with both univariate and multivariate quality characteristics that are random, independent and identically distributed (i.i.d.). The copula approach can be applied to dependent and non-normal variables with given marginal distributions when the joint distribution cannot be found in multivariate cases other than the normal distribution. Therefore, the objective of this work is to compare control charts based on three copulas, when observations follow an exponential distribution, via the average run length (ARL). Dependence between the random variables is specified and measured by Kendall's tau through Monte Carlo simulation. The numerical results show that the normal-copula multivariate exponentially weighted moving average (MEWMA) control chart is inferior to the other control charts for all magnitudes of shifts.

Sasigarn Kuvattana, Saowanit Sukparungsee

Chapter 5. Mamdani Fuzzy Decision Model for GIS-Based Landslide Hazard Mapping

Small-scale mining (SSM) has been a part of the industry of Surigao del Norte, an area with potential landslide hazards for the community, particularly for small-scale miners. Mapping and assessing landslide hazard in SSM areas are therefore a necessity. Thus, this study employed a fuzzy decision model (FDM) using Mamdani-type runs with multiple controllers to assess landslide hazard using eight causative landslide factors, with the integration of a Geographic Information System (GIS) for mapping. The identified landslide hazard zones show active landslide areas vulnerable to very high-risk landslide catastrophes. Hence, the model can predict and map areas vulnerable to landslide disaster.

Monalee A. dela Cerna, Elmer A. Maravillas

Chapter 6. Laser Scanning as a Tool for the Analysis of Historical Buildings

The paper contains an analysis of the application of 3D scanning in the process of creating a very precise numerical model of historical buildings. Documentation of historical structures is very important for the conservation of cultural heritage. In comparison with other documentation methods, photogrammetry is faster and more precise. The article presents both the technology of measurement and the stages of implementing specific parts of a project with the use of 3D laser scanning, as well as the possibilities of using the acquired data.

Jerzy Szolomicki

Chapter 7. A Simulation Study on Performance Validation of Indoor Location Estimation Based on the Radial Positive Distribution

We study the problem of indoor location estimation using a statistical radial distribution model. In this study, we model the observed distance data between transmitter and receiver with a statistical radial distribution. The proposed method is based on the marginal likelihoods of radial distributions generated by a positive distribution among the several transmitter radio sites placed in the room. In this paper we compare the performance of the radial Weibull distribution and the radial log-normal distribution for indoor location estimation. To compare the performance of each distribution-based approach, we conducted a simulation study of location estimation. As a result, the radial Weibull distribution based method shows high accuracy for indoor spatial location estimation.

Kosuke Okusa, Toshinari Kamakura

Chapter 8. Semi-supervised Classification with Modified Kernel Partial Least Squares

The aim of this paper is to present a new semi-supervised classification method based on a modified Partial Least Squares algorithm and Gaussian Mixture Models. Combining the information contained in unlabeled samples with the available labeled training samples can increase classification performance. Our method relies on combining two kernel functions: the standard kernel calculated on data from labeled samples and a generative kernel learned directly by clustering the data. Economic datasets are used to compare the classification performance.

Paweł Błaszczyk

Chapter 9. 2D-DOAE Improvement Using ESPRIT with Doppler Correction

In this chapter we focus on reducing the target maneuver effect on the 2D direction-of-arrival estimation (2D-DOAE) process, side by side with reducing the computational cost. First, a temporal subspace approach in the ESPRIT method (T-ESPRIT) is introduced to decrease errors caused by the model nonlinearity effect and to reduce the computational load. Second, multiple spatial subspaces in the ESPRIT method (S-ESPRIT) are applied in order to enable multiple phase-difference measurements with different resolutions. Additionally, T-ESPRIT is combined with S-ESPRIT using the time difference of arrival (TDOA) method in order to form the TS-ESPRIT method. Third, a new method to estimate the Doppler shift is introduced using the subspace concept. Finally, the effect of the Doppler frequency on the TS-ESPRIT method is derived in order to curb the target maneuver effect on the DOAE process, which consequently achieves high performance at low SNR with low computational complexity.

Youssef Fayad, Caiyun Wang, Qunsheng Cao

Chapter 10. Pathologies Segmentation in Eye Fundus Images Based on Frequency Domain Filters

This chapter presents a novel method based on discrete frequency transforms to segment various pathologies in eye fundus color images, such as exudates, blood vessels, and aneurysms. Non-uniformly illuminated eye fundus images are corrected by applying a homomorphic high-pass frequency filter. Then, a super-Gaussian band-pass filter defined in the frequency transform domain is used to distinguish between background and foreground objects. The filtering step works with the green channel, which usually contains the most relevant information for segmenting different pathologies. Specifically, exudate detection after transform inversion of the filtered image requires a gamma correction to enhance foreground objects. Otsu's thresholding method is applied to the enhanced image and masked over the effective area to get the segmented exudates. For blood vessels and aneurysms, back in the spatial domain, the negative of the filtered image is required. Then a median filter is applied to reduce noise or artifacts, followed by gamma contrast enhancement. Again, Otsu's thresholding method is used for image binarization. Next, a morphological closing operation is applied, and masking the effective image area gives the segmented blood vessels or aneurysms. Illustrative examples using retinographies from a free public-domain clinical database are included to demonstrate the capability of the frequency filter approach.

Gonzalo Urcid, Luis David Lara-R, Elizabeth López-M
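The Otsu thresholding step used in the pipeline can be sketched directly from its definition, choosing the gray level that maximizes the between-class variance of the background/foreground split. This is a minimal single-channel sketch with made-up pixel data, not the authors' implementation.

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method: return the gray level t that maximizes the
    between-class variance of the split (background = levels <= t)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_bg = sum_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mu_bg = sum_bg / w_bg
        mu_fg = (sum_all - sum_bg) / w_fg
        between = w_bg * w_fg * (mu_bg - mu_fg) ** 2
        if between > best_var:
            best_var, best_t = between, t
    return best_t

# two well-separated gray-level clusters -> threshold lands between them
print(otsu_threshold([20] * 50 + [30] * 50 + [200] * 40 + [220] * 40))
```

In the chapter's pipeline this step runs on the gamma-corrected green channel and is followed by masking over the effective retinal area.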

Chapter 11. The Distribution of HIV/AIDS Commodities: NPOs and Sustainability

Since the diagnosis of the first HIV infection in Zimbabwe, the epidemic has grown to millions of people infected out of the country's total population of 12 million. An estimated 34 % of sexually active adults aged 15–49 years are infected with HIV. It is estimated that about 600,000 people have full-blown AIDS, and more than 3,800 people are dying every week due to this epidemic, which has become the top killer disease in the country. The research focused on the distribution mechanisms of the supply chains, so as to strengthen the much needed service delivery and to ensure that goods arrive where they are needed and in time. Effective and efficient humanitarian aid distribution is recommended in this study.

Tatenda Talent Chingono, Sebonkile Cynthia Thaba, Charles Mbohwa

Chapter 12. Representation of a DNA Sequence by a Substring of Its Genetic Information

Owing to the technological advancements of recent years, biologists are now able to extract, examine and store the genetic information of living beings. As a consequence, database sizes grew exponentially, containing a large amount of redundant or poorly analyzed information. This increase in size represents a great challenge for data storage, as it becomes more difficult to properly analyze, rank and save all the data. As a solution to this problem, we propose, through this article, an algorithm for determining the optimal local alignment, by which we offer the possibility of representing a DNA sequence by a substring of its genetic information and thereby reduce the amount of information stored in data banks.

Bacem Saada, Jing Zhang

Chapter 13. Fatal Flaws in Norwich’s “Mystery of Loudness Adaptation”

Sixty years' worth of publications show that there is no loudness change of a tone given to only one ear. But when, additionally, a same-frequency tone is presented intermittently to the other ear ("Simultaneous Dichotic Loudness Balance"), its amplitude must be progressively lowered in order to maintain equal loudnesses at the two ears. Why? Professor K.H. Norwich calls this the "mystery of loudness adaptation", and claims to solve it (Bull Math Biol 2010) through "mathematical exploration" of monaural (single-ear) versus binaural (two-ear) tone presentation. Norwich's model is carefully scrutinized here. It proves to be riddled with arbitrary claims, contradictions, and incongruities. There is an alternative, a model by Nizami, which explains ten known idiosyncrasies of Simultaneous Dichotic Loudness Balance.

Lance Nizami

Chapter 14. Wrong Limits and Wrong Assumptions: Kenny Norwich and Willy Wong Fail to Derive Equal-Loudness Contours

Using the Entropy Equation from their "Entropy Theory of Perception", Norwich and Wong modelled loudness L versus intensity I using four unknowns. Norwich and Wong then quantified the Weber fraction by taking the differential with respect to intensity, and then replacing differentials by deltas. Subsequent re-defining of terms produced a Weber fraction equation resembling that of the late Professor Riesz. Dr. Riesz's own equation had three parameters, which Norwich and Wong (using re-defined terms) substituted into their equation for L. They then equated the theoretical loudnesses of a comparison tone and a reference tone, generating theoretical equal-loudness contours – a previously unaccomplished feat in the literature – after also replacing one of Riesz's parameters, initially identified with an exponent of the Entropy Equation, by Stevens' exponent "x". But the Norwich and Wong derivation contains fatal flaws. First, the loudness equation L cannot deal with the respective intensity limits of threshold intensity and zero intensity. Then, there are unproven relations between various parameters. To examine those relations, the present author inferred 37 values of each of x, n, and a third parameter, by the only method available: curve-fitting of the Entropy Equation and of Stevens' Law to 37 loudness-growth plots. Results clearly show that the third parameter is not constant with tone frequency, but that it does vary with maximum loudness, in a relation not noted by Norwich and Wong. And n does not equal x. In sum, Norwich and Wong fail to derive equal-loudness contours. Their mistakes exemplify what happens in mathematical biology under inappropriate limits and untested assumptions.

Lance Nizami

Chapter 15. High Assurance Asynchronous Messaging Methods

Asynchronous messaging is the delivery of a message without waiting for the intended recipient to respond or acknowledge the message. This solution works well for distributed systems communication, in which different systems may or may not be available at the same time. Asynchronous messaging solutions often use a message queue that holds messages to be picked up by the recipient. Although communication with the queue can be secured using lower-layer protocols, such as Transport Layer Security (TLS), this does not provide high assurance end-to-end security for the sender and receiver. The queuing system acts as a man-in-the-middle, negating authentication, integrity, and confidentiality guarantees. End-to-end security for asynchronous messaging must be provided by the asynchronous messaging layer itself. This paper discusses high assurance issues and current asynchronous messaging models, and proposes methods for providing end-to-end asynchronous messaging security in a high assurance environment.

William R. Simpson, Kevin Foltz

Chapter 16. Regional Differences in Iwate Prefecture Road Usage Recovery Following the 2011 Tohoku Earthquake

In a previous study, we calculated the usable road distance in the Southern Coastal area of Iwate Prefecture following the 2011 Tohoku Earthquake. In this context, a usable road is one on which at least one vehicle was detected during the observation period. If the cumulative usable distance percentage up to September 30, 2011 is taken as 100 %, then the percentage of usable road distance determined by April 8, 2011 and April 29, 2011 was 80 % and 90 %, respectively. In this study, we calculated the regional differences in road recovery in Iwate Prefecture following the 2011 Tohoku Earthquake. To do so, we divided Iwate Prefecture into four areas: the Northern Inland, Southern Inland, Northern Coastal, and Southern Coastal areas. The primary results of our study are as follows. First, we determined that for both the Northern and Southern Inland areas, 80 % of the road distance was usable by April 15, 2011 and 90 % was usable by May 27, 2011, which indicates that the recovery speed in both inland areas was slightly slower than that in the Southern Coastal area. Second, we found that for the Northern Coastal area, 80 % of the road distance was usable by April 29, 2011 and 90 % was usable by June 24, 2011, which implies that the recovery speed in the Northern Coastal area was considerably slower than that in the Southern Coastal area. We assume that the recovery difference between the Northern and Southern Coastal areas was caused by differences in their geographical features.

Noriaki Endo, Hayato Komori

Chapter 17. Computational Complexity Study of RSA and Elgamal Algorithms

The Elgamal and RSA algorithms are two cryptographic techniques actively in use for securing data confidentiality and authentication. The energy usage of the two algorithms has been investigated previously, and it was established that RSA is more energy efficient than Elgamal. The goal of this study is to carry out a computational speed analysis of the two algorithms. The methodology employed involves implementing the algorithms using the same programming language, programming style and skill, and programming environment. The implementation is tested with text data of varying sizes. The results obtained reveal that, holistically, RSA is superior to Elgamal in terms of computational speed; however, the study concludes that a hybrid of the RSA and Elgamal algorithms would most likely outperform either one alone. It is therefore recommended that efforts at designing a new algorithm from the study of these two algorithms be considered.

A. E. Okeyinka
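The kind of speed measurement the study performs can be sketched around modular exponentiation, the core primitive of both RSA and Elgamal. The timing harness below is a hypothetical illustration of the methodology (same language, same style, varying input sizes), not the study's actual test setup.

```python
import time

def modexp(base, exp, mod):
    """Square-and-multiply modular exponentiation, the operation that
    dominates the running time of both RSA and Elgamal."""
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:
            result = (result * base) % mod
        base = (base * base) % mod
        exp >>= 1
    return result

def time_op(fn, repeats=100):
    """Average wall-clock time of fn() over a number of repeats."""
    t0 = time.perf_counter()
    for _ in range(repeats):
        fn()
    return (time.perf_counter() - t0) / repeats

# sanity check against Python's built-in three-argument pow
print(modexp(4, 13, 497), pow(4, 13, 497))
```

RSA encryption needs one such exponentiation while Elgamal needs two (plus a multiplication), which is one plausible source of the speed gap the study reports.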

Chapter 18. Ontology-Based Context Modeling and Reasoning for a Smart Space

The recent development of pervasive computing, sensor networks and smart appliances has motivated the appearance of a new technological domain called smart spaces. Such a space is defined as a physical space rich in equipment and software services that is capable of interacting with people in order to provide intelligent services to the user for improved comfort (quality of life), energy saving, security, and tremendous benefits for an elderly person living alone. Providing intelligent services requires the equipment to be sensitive to the context of use, i.e., context-aware. The concept of context is a key enabling factor in such environments, and understanding it, establishing its components and modeling it are basic and important steps for the development of smart spaces. Previous works in such environments were unable to deal efficiently with context-awareness. In this paper we present both an ontology-based context modeling approach and context reasoning for a smart living room.

Moeiz Miraoui

Chapter 19. Performance Analysis of Heed Over Leach and Pegasis in Wireless Sensor Networks

Wireless sensor networks are an active research domain in sensing and communication. Routing data in a wireless sensor network suffers from several challenges, viz. reliable transmission, scalability, packet loss, and energy constraints. This paper simulates and analyses three hierarchical routing protocols, LEACH, PEGASIS and HEED, on the basis of critical factors such as end-to-end delay, energy of the network nodes, packet delivery ratio (PDR), and throughput. We conduct simulations of these routing protocols for certain parameters and analyse the results to get a better idea of the suitability of the protocols for given requirements. We use the remaining amount of energy in a node as the criterion for selecting cluster heads. The LEACH, PEGASIS and HEED routing algorithms are compared using MATLAB simulation, and the analysis is based on the simulation outputs. The results demonstrate that HEED is impressive, as it increases the network lifetime and also overcomes the drawbacks of both LEACH and PEGASIS.

Gaurav Kumar Nigam, Chetna Dabas
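The residual-energy criterion for cluster-head selection described above can be sketched in a few lines; the node names and energy values are made-up illustration data, and real HEED additionally weighs intra-cluster communication cost.

```python
def select_cluster_heads(node_energy, k):
    """Pick the k nodes with the highest residual energy as cluster heads."""
    ranked = sorted(node_energy, key=node_energy.get, reverse=True)
    return ranked[:k]

# hypothetical residual energies (fraction of initial battery) per node
energies = {"n1": 0.8, "n2": 0.3, "n3": 0.95, "n4": 0.6}
print(select_cluster_heads(energies, 2))
```

Rotating this selection each round spreads the energy drain across nodes, which is the mechanism behind the longer network lifetime reported for HEED.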

Chapter 20. Spectral Properties and Error Rate Performance of Digital Chirp Communication System

In the first part, new and easy-to-compute closed-form expressions are derived for the average symbol error probability of a digital M-ary chirp communication system impaired by additive white Gaussian noise and fading. Three fading environments, Rayleigh, Nakagami-m, and generalized-K, that represent most practical wireless channels are considered. The closed-form expressions derived are then used to illustrate the performance of 2-, 4-, and 8-ary chirp systems as a function of average received signal-to-noise ratio (SNR), modulation and fading environment parameters. The proposed mathematical analysis can easily be used to design an efficient and reliable M-ary chirp communication system for application over fading channels. In addition, a general method is introduced for the calculation of the power spectra of M-ary chirp signals. This method can handle arbitrary M-ary data and works for an arbitrary set (h, w) of modulation parameters, where h is the modulation index and w is the sweep width parameter. Numerical results for the power spectrum of M-ary chirp signals are presented and illustrated as a function of the set of modulation parameters. It is shown that the M-ary chirp system offers advantages over other conventional modulation techniques, making it an attractive wide-band modulation technique from the point of view of the power and bandwidth trade-off.

Mohammad Alsharef, Abdulbaset M. Hamed, Raveendra K. Rao

Chapter 21. Power Factor Control Mechanism for Optimum Efficiency in Wind Generators and Industrial Applications

Power factor control mechanisms play a critical role in determining the ability and effectiveness of the operation of doubly-fed induction machines in wind generators and industrial applications. This work presents vector control (VC) of an emerging brushless doubly-fed reluctance generator (BDFRG) technology for large wind turbine and industrial applications. The BDFRG has been receiving increasing attention due to its low operation and maintenance costs, afforded by the use of partially-rated power electronics and the high reliability of its brushless assembly, while offering performance competitive with its traditional slip-ring counterpart, the doubly-fed induction generator (DFIG). A robust VC strategy has been developed for a custom-designed BDFRG fed from a conventional 'back-to-back' IGBT converter. Simulation studies and experimental verification have validated the algorithm under optimum power factor control (OPFC) conditions, which allows improved efficiency of the generator-converter set and the entire wind energy conversion system (WECS).

Jude K. Obichere, Milutin Jovanovic, Sul Ademi

Chapter 22. TBSC Compensator

This work deals with the analysis, design and implementation of Thyristor Binary Switched Capacitor (TBSC) banks. The performance of various TBSC topologies for reactive power compensation, suitable for fast dynamic loads in closed-loop systems, is investigated by simulation. TBSC is based on a chain of Thyristor Switched Capacitor (TSC) banks arranged in a binary sequential manner. Frequent switching of capacitors reduces the life of a switched capacitor bank; hence, control circuitry has been proposed such that transient-free switching of the TBSCs takes place. The proposed topology allows almost step-less reactive power compensation for fast-varying dynamic loads in closed loop, with a compensation error equal to half of the lowest step size of the capacitor bank. A suitable chain of binary switched capacitor banks is proposed. The proposed scheme can achieve reactive power compensation on a cycle-to-cycle basis, and the harmonic content of the source is maintained at insignificant levels due to the filtering action of the TBSC as well as transient-free switching of the capacitor banks. The proposed TBSC scheme compensates fast-varying dynamic reactive loads. It can also be used for direct online starting of induction motors with voltage sag mitigation at starting, which helps improve the stability of the system and the power factor (P.F.) in steady state.

Swapnil Patil, Khushal Shende, Dadgonda Patil, Anwar Mulla

Chapter 23. Sensorless Speed Estimation by Using Rotor Slot Harmonics for Squirrel Cage Induction Motor

Speed sensors have traditionally been used to obtain the motor speed. Such sensors generally generate a square-wave output signal, which is analyzed by techniques such as period measurement, pulse counting or spectral analysis. In addition to these, sensorless speed estimation methods have been the focus of motor speed estimation in recent years. The method considered here is based on motor current signal analysis: the frequency spectrum of the current contains components representing rotor slot harmonics, and if the number of rotor slots is known, the motor speed can be estimated from these frequency components. In the literature, individual algorithms have been used to calculate the speed from the rotor slot harmonics. Unlike the literature, this paper presents a method with a genetic algorithm to extract the motor speed from the rotor slot harmonic components.

Hayri Arabaci
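The slot-harmonic relation the method exploits can be illustrated with the standard formula for the principal (upper-sideband) slot harmonic, f_sh = f_1(R(1-s)/p + 1), where R is the number of rotor slots, s the slip and p the pole-pair count; solving for the mechanical speed gives n = 60(f_sh - f_1)/R rpm. The numeric values below are hypothetical, and the chapter's contribution is locating f_sh in the spectrum with a genetic algorithm rather than this final conversion.

```python
def speed_from_slot_harmonic(f_sh, f_supply, rotor_slots):
    """Rotor speed (rpm) from the principal upper-sideband slot harmonic:
    f_sh = f_supply * (R*(1-s)/p + 1)  =>  n = 60*(f_sh - f_supply)/R."""
    return 60.0 * (f_sh - f_supply) / rotor_slots

# e.g. 28 rotor slots, 50 Hz supply, slot harmonic detected at 741 Hz
print(speed_from_slot_harmonic(741.0, 50.0, 28))
```

For a 4-pole 50 Hz machine this yields a speed just under the 1500 rpm synchronous speed, as expected under load.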

Chapter 24. Selective Harmonic Elimination Technique Based on Genetic Algorithm for Multilevel Inverters

Multilevel inverters have been used at medium and high voltage levels in power applications to obtain low total harmonic distortion (THD). Several techniques have emerged to decrease the THD of the output voltage of a multilevel inverter. One of these techniques is Selective Harmonic Elimination (SHE), which has an extensive research area in the field of power electronics. It is an alternative to usual PWM techniques and involves nonlinear equations of the stepped voltage waveform. In this paper, conventional multilevel inverter structures have been analyzed, and it is concluded that the cascaded H-bridge topology requires fewer device components. Based on this, a multilevel inverter topology with a reduced number of switches, which resembles the cascaded H-bridge topology and enables a reduction in system cost, has been proposed, and the solution of the SHE equations has been optimized using a Genetic Algorithm (GA). The analysis and simulation results obtained have clearly shown that the proposed GA-based SHE technique eliminates the desired harmonic orders.

Hulusi Karaca, Enes Bektas
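The nonlinear SHE equations a GA would optimize follow from the Fourier series of the stepped output voltage, whose n-th odd harmonic has amplitude b_n = (4 V_dc / nπ) Σ_i cos(n θ_i) for switching angles θ_i. The sketch below evaluates that series and a simple GA fitness; the choice of target harmonics (5th, 7th) and the fitness form are illustrative assumptions, not the paper's exact formulation.

```python
import math

def harmonic_amplitude(n, angles, v_dc=1.0):
    """Amplitude of the n-th odd harmonic of a cascaded H-bridge stepped
    waveform with switching angles (radians, 0 <= a1 < ... < pi/2):
    b_n = (4*v_dc/(n*pi)) * sum(cos(n*a_i))."""
    return (4.0 * v_dc / (n * math.pi)) * sum(math.cos(n * a) for a in angles)

def she_fitness(angles, target_fundamental, eliminate=(5, 7)):
    """Fitness a GA would minimize: fundamental tracking error plus the
    magnitudes of the harmonics selected for elimination."""
    err = abs(harmonic_amplitude(1, angles) - target_fundamental)
    return err + sum(abs(harmonic_amplitude(n, angles)) for n in eliminate)

# single angle at zero degenerates to a square wave: fundamental = 4/pi
print(harmonic_amplitude(1, [0.0]))
```

A GA then searches the angle vector that drives the selected harmonic terms to zero while holding b_1 at the commanded fundamental voltage.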

Chapter 25. Proposal of a Modulated Extended Cumulative Exposure Model for the Step-Up Voltage Test

The extended cumulative exposure model (ECEM) includes features of both the cumulative exposure model (CEM) and the memoryless model (MM). These models are often used to express the failure probability in step-stress accelerated life tests (SSALT). The CEM is widely accepted in reliability fields because accumulation of fatigue is considered reasonable, while the MM is also used in electrical engineering because accumulation of fatigue is not observed in some cases. In this paper, we propose a modulated ECEM based on the time-scale. We show the estimability of the parameters using a simulation study. In addition, we apply the proposed model to a step-stress test result as an experimental case.

Takenori Sakumura, Toshinari Kamakura

Chapter 26. Gesture Recognition Sensor: Development of a Tool with Playful Applications to Evaluate the Physical Skills and Cognitive Skills of Children Through the Use of Bootstrap Algorithm

This article presents an innovative tool based on gesture recognition technology, aimed at recreational games both for children in general and for children with early attention deficit. It covers the implementation of different games with a variety of activities that help the child learn through play, serving as a teaching aid for children's learning and progress in activities such as coordination, laterality and motor skills, and addressing physical inactivity at the ages most important for development. Finally, results are presented for children aged 6 to 8 years who show little attention in their daily activities and find it difficult to concentrate; based on a bootstrap algorithm, results are given to quantify the percentage by which the playful games help the children.

Anthony Bryan Freire Conrado, Vinicio David Pazmiño Moya, Danni De La Cruz, Johanna Tobar, Paúl Mejía

Chapter 27. Design of Learning Ontology and Service Model Through Achievement Standards Profile Modeling

In Korea, the national education curriculum has provided achievement standards for all subjects in K-12 schools since 2009. Achievement standards provide practical guidelines about what has to be taught and assessed by teachers and what has to be studied and achieved by students. The educational jurisdictions, however, provide achievement standards as unstructured, textual documents, which are inappropriate and inefficient for the selective searching, sharing, and use of achievement statements in education. In this paper, we design an ontological semantic model of the curriculum, achievement standards, and syllabus. We also define mapping rules to formalize the semantic model in an RDF/OWL specification, which is based on linked open data. Our proposed semantic model supports statement searching and browsing, sharing, modification history tracing, learning resource linking, and the mapping and integration of heterogeneous standards.

Hyunsook Chung, Jeongmin Kim

Chapter 28. Web Retrieval with Query Expansion: A Parallel Retrieving Technique

Most people consider the World Wide Web (WWW) a mine of information. The explosive growth of the WWW, not only in the amount of information but also in the contents of Web pages, makes traditional search engines an inadequate approach to retrieving the documents or web pages most relevant to user needs (degree of relevance) in a short time. To improve the information retrieval process, in both time and degree of relevance to user needs, parallel genetic algorithms can be utilized. In this paper, an island genetic algorithm (IGA) is utilized to achieve parallelism and speed up the web information retrieval process. Four different islands with different selection methods and fitness functions are used to improve the degree of relevance. To achieve parallel behavior, the four islands are executed independently on different servers. A query expansion technique is used to add useful words to the user query and increase the number of retrieved documents. Finally, the results obtained by the four islands are combined and passed to a decision-making phase to select the documents most pertinent to user needs. The cosine similarity measure is used to evaluate the performance of the proposed technique.

Noha Mezyan, Venus W. Samawi
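The cosine similarity measure named in the abstract can be sketched in a few lines. This version uses raw term-frequency vectors and whitespace tokenization as simplifying assumptions; the paper's actual weighting scheme may differ.

```python
# Minimal sketch of cosine similarity over bag-of-words term-frequency
# vectors. Tokenization (lowercased whitespace split) and raw-count
# weighting are simplifications, not the paper's exact setup.
import math
from collections import Counter

def cosine_similarity(doc_a, doc_b):
    a, b = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

query = "parallel genetic algorithm retrieval"
doc = "an island genetic algorithm speeds up parallel web retrieval"
print(round(cosine_similarity(query, doc), 3))  # prints 0.667
```

A score of 1.0 means identical term distributions and 0.0 means no shared terms, which is what makes it a convenient relevance score for ranking retrieved documents.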

Chapter 29. Enhanced Action Based Trust Model: A Multi-facet Model for Social Media

The traceability and linkability of user activities through social media status updates and other information shared on Online Social Networks (OSN) such as Facebook jeopardize the confidentiality and integrity security principles and consequently decrease technology trustworthiness. In this paper, the need for a multi-facet trust management system on OSN is discussed. An enhanced action-based trust model is presented, which integrates action-based trust and a multi-facet model with ranked trust attributes. Overall, the enhanced trust model helps users relate to and understand the effect on their own privacy and confidentiality when personal information is exposed on social media sites.

Manmeet Mahinderjit Singh, Yi Chin Teo

Chapter 30. A Motorized Akpu Milling Machine Design for Self Employment and Wealth Generation in Nigeria

This research work was carried out to develop a motorized Akpu milling machine in the quest to improve poverty eradication in Nigeria through technological innovation, considering the various traditional methods of grinding Akpu (cassava). The machine was fabricated and is able to grind tubers of Akpu while squeezing/extracting the moisture content before frying on a fire or in an oven. The work aimed at eliminating the problems of the traditional method of milling Akpu, including its inherent risk of contamination. The capacity of the developed Akpu milling machine is 158 kg/h. The machine runs on a single-phase, three-horsepower electric motor at a speed of 1450 rpm, and it may be adopted for large-scale industrial applications.

Gbasouzor Austin Ikechukwu, Enemuoh Chioma Lorreta

Chapter 31. Response Behavior Model for Process Deviations in Cyber-Physical Production Systems

Cyber-physical production systems are highly flexible systems that enable adaptive production processes. In these systems, all participants in the production process possess individual information about themselves and are equipped with sensors, actuators, and communication interfaces. They can interact with each other and autonomously develop and execute process-relevant decisions. For each component, a standard process sequence and alternative process sequences are defined. If a deviation from the standard process occurs during the production of an individual component, the process participants can interact with each other, autonomously define an appropriate response strategy, and execute it using the participants' actuators and the intralogistics. Regardless of the deviation, the manufacturing and assembly of individual components in cyber-physical production systems can still proceed. For this purpose, process deviations and the response behavior of cyber-physical production systems are analyzed, modeled, and simulated in order to illustrate the benefits of such systems and to develop a process deviation management system for actual, physical production systems based on cyber-physical systems.

Nadia Galaske, Daniel Strang, Reiner Anderl

Chapter 32. Complete Bid Configuration. Supporting the Process from the Perspective of the General Contractor

This paper summarizes work on the construction of an optimization model for decision-makers implementing projects that involve external contractors. Using the results of our current work on optimization approaches to the contractor selection problem with the use of incentive mechanisms, we propose an outline of a comprehensive decision support system (DSS). The practical aim is to support both project owners and contractors in decisions on sub-contracting at the lower levels of the Work Breakdown Structure (WBS).

Tomasz Błaszczyk, Paweł Błaszczyk

Chapter 33. HIV/AIDS Humanitarian Supply Chain Management: The Case of Zimbabwe

Supply chain management systems in Zimbabwe are not yet systematic and clearly defined. They are constantly being interrupted, hence the supply of antiretrovirals is a major challenge to the nation. This also tends to promote treatment failure and the development of ARV resistance, hence the need to strengthen the current supply chain and logistics by reducing the risk of stock-outs and strengthening the capacity of the government of Zimbabwe to absorb the huge quantities of HIV/AIDS commodities from donors across the world. The research mainly used primary data collection methods, including a survey and interviews. Secondary data from the literature review also helped fill the gaps in the primary data. Solutions and recommendations were then identified to address the identified deficiencies, problems and challenges.

Talent Tatenda Chingono, Charles Mbohwa

Chapter 34. Techno-Econometric Analysis of Membrane Technology for Biogas Upgrading

A parametric study was conducted to investigate the effect of varying process conditions for single stage without recycle (SSWR) and double stage with permeate recycle (DSPR) configurations on product purity, CH4 recovery and compression power requirement. In the study, high CH4 recovery and high product purity could not be attained simultaneously in the SSWR configuration. The DSPR configuration yielded a better result, but with a higher membrane area and compression power. In the economic studies, increases in CO2 content and feed pressure increased the gas processing cost (GPC), while an increase in flow rate decreased the GPC owing to economies of scale. At the optimized condition, the calculated GPC was $0.46/m3 of biomethane. The NPV, IRR and BCR for producing biomethane were R15,240,343, 22.41 % and 2.05 respectively, with a break-even in the 5th year. Using CBG instead of gasoline, the end user saves 34 % of annual fuel cost.

Samson O. Masebinu, Esther Akinlabi, Edison Muzenda, Charles Mbohwa, Akinwale O. Aboyade
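The discounted-cash-flow indicators reported in the abstract (NPV and BCR) can be sketched as below. The cash flows and discount rate are made-up placeholders, not the project's actual figures, so the printed numbers are purely illustrative.

```python
# Illustrative sketch of NPV and benefit-cost ratio (BCR). The cash-flow
# profile and 10 % discount rate are assumed values for demonstration only.

def npv(rate, cashflows):
    """Net present value; cashflows[0] is the year-0 (initial) flow."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def bcr(rate, cashflows):
    """Benefit-cost ratio: PV of inflows over PV of outflows (absolute)."""
    pv_in = sum(cf / (1 + rate) ** t
                for t, cf in enumerate(cashflows) if cf > 0)
    pv_out = -sum(cf / (1 + rate) ** t
                  for t, cf in enumerate(cashflows) if cf < 0)
    return pv_in / pv_out

flows = [-1_000_000] + [300_000] * 8   # hypothetical 8-year project
print(round(npv(0.10, flows)), round(bcr(0.10, flows), 2))
```

A BCR above 1 and a positive NPV both indicate a viable project, which is the same economic reading the abstract draws from its reported values.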

Chapter 35. Validation of a Novel Method for Ethyl Lactate Separation Using Langmuir Hinshelwood Model

In this work, the batch-wise esterification of lactic acid and ethanol was carried out using different cation-exchange resins, including Amberlyst 15, Amberlyst 16, Dowex 50W8x and Amberlyst 36, as catalysts. Scanning electron microscopy/energy dispersive analysis of X-ray (SEM/EDAX) was used to examine the surface morphology of the resin catalysts before and after the esterification process. The SEM of the commercial resin catalysts showed a clear surface with no cracks before esterification. The order of surface integrity from the SEM morphology of the resin catalysts after the esterification process was Amberlyst 36 > Dowex 50W8x. Gas chromatography with a mass spectrometry (GC-MS) detector was used to analyse the esterification products. Ion 45 on the mass spectra confirmed that the esterification product was in accordance with the library mass spectrum of the commercial ethyl lactate solvent. FTIR was used to determine the most adsorbed components on the surface of the catalysts. The rationale for the adsorption strength of the resin catalysts was described and validated using two mechanisms of the Langmuir-Hinshelwood model.

Okon Edidiong, Shehu Habiba, Gobina Edward
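The Langmuir-Hinshelwood model invoked in the abstract has, in its common dual-site surface-reaction-controlled form, a rate expression that can be sketched as below. The functional form is the textbook one; the parameter values in the example are placeholders, not the fitted constants from this study.

```python
# Generic dual-site Langmuir-Hinshelwood rate form for A + E -> products
# when both reactants adsorb on the catalyst surface. Parameter values in
# the example call are placeholders, not fitted constants from the paper.

def lh_rate(k, K_a, K_e, C_a, C_e):
    """Surface-reaction-controlled Langmuir-Hinshelwood rate.

    k        : surface reaction rate constant
    K_a, K_e : adsorption equilibrium constants (e.g. acid, ethanol)
    C_a, C_e : bulk concentrations of the two reactants
    """
    denom = (1 + K_a * C_a + K_e * C_e) ** 2
    return k * K_a * K_e * C_a * C_e / denom

# Example evaluation with placeholder values:
print(lh_rate(1.0, 0.5, 0.5, 2.0, 2.0))
```

The squared denominator encodes competitive adsorption of both species on the same sites, which is what lets the model rationalize the relative adsorption strengths the FTIR measurements probe.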

Chapter 36. Potential Impacts of Leachate From Waste Mixed With CCA-Treated Wood Destined to Landfills

Recycling is considered the best waste management option, as against landfilling. However, landfilling remains the predominant method of waste disposal in South Africa and other developing countries. South Africa disposes of 41,000 tons of solid waste daily, which includes chunks of construction and demolition (C&D) debris, municipal solid waste (MSW) and wood waste. Leachate generation and percolation are expected in these landfills as rain and/or runoff enters the waste body. Arsenic, copper and chromium leach from chromated copper arsenate (CCA)-treated wood when it is in contact with water. As such, laboratory tests using a bespoke device were carried out to explore the environmental risk of different disposal situations in monofills and/or open dumps. The study presents the results for the wood waste and reveals that CCA-treated wood produced hazardous concentration levels of chromium and arsenic which, if not properly contained in real-life scenarios, could have consequential contamination impacts on human and environmental health.

Emmanuel Emem-Obong Agbenyeku, Edison Muzenda, Innocent Mandla Msibi

Chapter 37. The Use of Mathematical Modeling for the Development of a Low Cost Fuzzy Gain Schedule Neutralization Control System

This work focused on the development of a low-cost fuzzy gain schedule neutralization control system. A typical neutralization system with appropriate monitoring, control and data acquisition was successfully implemented, as well as the controller. For the fuzzy controller design, the system dynamics were identified and the empirical model parameters were calculated. The inputs were the auxiliary variable, defined with the linguistic terms Acid, Neutral and Alkaline by three trapezoidal membership functions, as well as the control error and the change in the control error, both defined by five triangular membership functions. The controller outputs for the acid and alkali pumps were defined by 18 triangular membership functions, and a set of 50 fuzzy rules was defined. The control system considered in this paper represents an attractive industrial application for reducing water consumption in industry.

Rodrigo Sislian, Flávio Vasconcelos da Silva, Rubens Gedraite, Heikki Jokinen, Dhanesh Kattipparambil Rajan
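The trapezoidal and triangular membership functions described in the abstract can be sketched as below. The breakpoint values in the example (a "Neutral" pH term centred on 7) are illustrative assumptions, not the tuned values from the paper.

```python
# Minimal sketch of the triangular and trapezoidal membership functions the
# fuzzy controller uses; all breakpoint values below are illustrative.

def triangular(x, a, b, c):
    """Membership rising linearly from a to the peak b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def trapezoidal(x, a, b, c, d):
    """Membership rising a->b, flat at 1 between b and c, falling c->d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# e.g. a hypothetical "Neutral" pH term centred on 7:
print(trapezoidal(7.0, 5.0, 6.5, 7.5, 9.0))   # prints 1.0
print(triangular(6.0, 5.0, 7.0, 9.0))         # prints 0.5
```

At run time the controller evaluates every input against each term's membership function, fires the matching subset of the 50 rules, and defuzzifies the output memberships into pump commands.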

Chapter 38. Recovery of VOC from Offshore and Onshore Shuttle Tankers Using Structured Y-Type Zeolite Membranes

The emissions of volatile organic compounds (VOCs) from onshore and offshore facilities are studied, and an alternative technology for the recovery of methane and propane by the use of membrane technology is explored. Permeation tests were carried out with a zeolite membrane consisting of an α-Al2O3 support. The permeance of nitrogen, carbon dioxide, helium, methane and propane through the membrane at varying pressures was determined. The permeance of CH4 was in the range of 1.44 × 10−6 to 3.41 × 10−6 mol s−1 m−2 Pa−1, and a CH4/C3H8 selectivity of 3.3 at 293 K was obtained. The molar flux of the gases was found to have an average linear regression coefficient value R2 of 0.9892. On the basis of the results obtained, it can be concluded that separation of the hydrocarbon gases can be achieved with the zeolite membrane. The main mechanism governing the flow of gases through the zeolite membrane was molecular sieving, although there is evidence of deviation from this mechanism. To achieve a higher selectivity for the target gas, further modification of the membrane is needed. The morphology of the membrane was determined using a scanning electron microscope, which showed the pore size of the membrane and the support layer.

Shehu Habiba, Adebayo Ajayi, Okon Edidiong, Edward Gobina
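The two quantities reported in the abstract, gas permeance and ideal selectivity, have simple defining relations that can be sketched as below. The flow rate, membrane area, and pressure difference in the example are chosen only to reproduce the reported orders of magnitude; they are not measured data.

```python
# Sketch of gas permeance (mol s^-1 m^-2 Pa^-1) and ideal selectivity as a
# ratio of single-gas permeances. Input numbers are illustrative, chosen to
# match the abstract's orders of magnitude, not measured values.

def permeance(molar_flow, area, dp):
    """Permeance = molar flow rate / (membrane area * pressure difference)."""
    return molar_flow / (area * dp)

def ideal_selectivity(perm_a, perm_b):
    """Ideal selectivity of gas A over gas B from single-gas permeances."""
    return perm_a / perm_b

p_ch4 = permeance(molar_flow=3.0e-3, area=1.0e-2, dp=1.0e5)  # ~3e-6
p_c3h8 = p_ch4 / 3.3  # back-calculated from the reported selectivity of 3.3
print(f"CH4 permeance: {p_ch4:.2e}")
print(f"CH4/C3H8 selectivity: {ideal_selectivity(p_ch4, p_c3h8):.1f}")
```

Normalizing flow by area and driving pressure is what makes permeances from membranes of different sizes directly comparable, and the permeance ratio is the selectivity figure quoted in the abstract.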

Chapter 39. Effect of Operating Pressure on Fischer-Tropsch Synthesis Kinetics over Titania-Supported Cobalt Catalyst

The kinetics of the Fischer-Tropsch (FT) reaction over a titania-supported cobalt catalyst have been measured at 220 °C using a constant space velocity of 20 ml/gCat/min and total operating pressures of 1, 10 and 20 bar respectively. The catalyst was prepared by the incipient wetness impregnation method and the evaluation was carried out in a fixed-bed reactor. The CO conversion increased with an increase in pressure from 1 to 10 bar and decreased when the pressure was further increased to 20 bar. The methane selectivity decreased and the selectivity toward C5+ hydrocarbons increased with increases of the total operating pressure from 1 to 10 and 20 bar. However, operating at 10 bar yielded the highest rate of C5+ hydrocarbon production.

Kalala Jalama

Chapter 40. Characterization of Gamma-Alumina Ceramic Membrane

This paper presents the experimental results of different characterization methods applied to a gamma-alumina ceramic membrane. A commercial gamma-alumina mesoporous membrane was used. The pore size, specific surface area and pore size distribution were determined with a nitrogen adsorption-desorption instrument. Images from scanning electron microscopy (SEM) showed the membrane's morphological structure. Gas permeation tests were carried out on the membrane using a variety of single and mixed gases over a temperature range of 25–200 °C and a gauge pressure range of 0.05–1 bar. Graphs of flow rate versus temperature were obtained, and the results were used to explain the effect of temperature on the flow rate of the various gases. At a pressure drop of 0.5 bar, for example, the flow rate for N2 was relatively constant up to 150 °C before decreasing with a further increase in temperature, while for O2 it decreased continuously with an increase in temperature.

Ifeyinwa Orakwe, Ngozi Nwogu, Edward Gobina

Chapter 41. Investigation of Flue Gas and Natural Gas Separation Using Silica Composite Membranes Formed on Porous Alumina Support

The overall goal of this paper is to foster the development of new membrane technologies to improve manufacturing efficiency and reduce CO2 emissions. Hence, a silica composite ceramic membrane with an extremely low defect concentration has been prepared through three successive dip-coating steps in a silica solution. An asymmetric structure was obtained by depositing a silica layer on top of a combination of titanium and an α-Al2O3 support. The morphology of the three-step homogeneous silica layer was analysed by scanning electron microscopy and an energy-dispersive X-ray analyzer. Transport measurements on the membranes were carried out at room temperature and at pressure differences ranging from 1 to 2 bar. The fabricated membrane has a reproducible high permeance for CO2. Interestingly, an almost equal flow rate was observed for CH4 and N2 at a pressure of 2 bar. The separation factors obtained for CO2/CH4 and CO2/N2 are comparatively higher than the Knudsen separation values.

Ngozi Nwogu, Ifeyinwa Orakwe, Edward Gobina

