
About this book

This book provides a technical approach to a Business Resilience System with its Risk Atom and Processing Data Point, based on fuzzy logic and real-time cloud computation. Its purpose and objectives define a clear set of expectations for organizations and enterprises, so that their network systems and supply chains are resilient and protected against cyber-attacks, man-made threats, and natural disasters. These enterprises include financial, organizational, homeland security, and supply chain operations with multi-point manufacturing across the world. Market share and marketing advantages are expected to result from implementing the system. The collected information and defined objectives form the basis for monitoring and analyzing the data through cloud computation, and help guarantee survivability against unexpected threats. This book will be useful for advanced undergraduate and graduate students in computer engineering, engineers working for manufacturing companies, business analysts in retail and e-commerce, and those working in the defense industry, information security, and information technology.

Table of Contents

Frontmatter

Chapter 1. Resilience and Resilience System

Resilience thinking is inevitably systems thinking, at least as much as sustainable development is. In fact, “when considering systems of humans and nature (social-ecological systems) it is important to consider the system as a whole.” The term “resilience” originated in the 1970s in the field of ecology from the research of C.S. Holling, who defined resilience as “a measure of the persistence of systems and of their ability to absorb change and disturbance and still maintain the same relationships between populations or state variables.” In short, resilience is best defined as “the ability of a system to absorb disturbances and still retain its basic function and structure.”
Bahman Zohuri, Masoud Moghaddam

Chapter 2. Building Intelligent Models from Data Mining

For a Business Resilience System to function and issue advance warnings for proper action, the BRS Risk Atom, and in particular its fourth orbit, has to remain stable by assessing risk elements and responses. Understanding risk assessment is therefore essential and unavoidable, so we need to build an intelligent model that can extract information from the variety of data available to us, at terabyte scale and from around the globe. These data need to be processed by the Processing Data Point at the core of the Risk Atom, either in real time or manually. The feed point for the PDP is structured on fuzzy or Boolean logic, as the authors suggest in this book. This chapter lays the foundation for the Risk Atom by discussing risk assessment, walks through the process of building intelligent models with data mining and expert knowledge, and looks at some fundamental principles that can interact with the Risk Atom.
Bahman Zohuri, Masoud Moghaddam
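The contrast between a Boolean and a fuzzy feed point for the PDP can be sketched in a few lines. This is a minimal illustration, not the book's implementation; the function names and the threshold values are hypothetical.

```python
def boolean_risk(value, threshold=0.7):
    """Crisp rule: the signal either trips the alarm or it does not."""
    return value >= threshold

def fuzzy_risk(value, low=0.4, high=0.9):
    """Graded rule: degree of risk rises linearly from `low` to `high`."""
    if value <= low:
        return 0.0
    if value >= high:
        return 1.0
    return (value - low) / (high - low)

# A reading of 0.65 raises no alarm under the crisp rule,
# yet already carries a risk degree of 0.5 under the fuzzy rule.
```

A graded output lets the system rank and prioritize early-warning signals instead of reacting only after a hard threshold is crossed.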

Chapter 3. Event Management and Best Practice

To set up and configure an efficient Business Resilience System (BRS), we need a deep and broad understanding of events and event management, with a focus on best practice. Such practice allows us to examine event filtering, duplicate detection, correlation, notification, and synchronization. In addition, this chapter discusses trouble-ticket integration and how a trouble ticket, as part of the BRS workflow, can set the triggering point on the BRS dashboard and configure maintenance modes and automation for event management. The chapter explains the importance of event correlation and automation, defines relevant terminology, introduces basic concepts and issues, and discusses general planning considerations for developing and implementing a robust event management system.
Bahman Zohuri, Masoud Moghaddam
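Duplicate detection, one of the practices named above, can be illustrated with a small sketch: events from the same source with the same type, arriving within a time window, are collapsed into one record with a repeat count. The field names and window length are assumptions for illustration, not taken from the book.

```python
from collections import OrderedDict

def deduplicate(events, window=60):
    """Collapse repeated (source, type) events seen within `window` seconds.

    Note: if the window has expired, the old record is simply re-seeded;
    a production system would emit or archive it instead.
    """
    seen = OrderedDict()  # (source, type) -> merged event
    for ev in events:
        key = (ev["source"], ev["type"])
        if key in seen and ev["time"] - seen[key]["time"] <= window:
            seen[key]["count"] += 1          # duplicate: bump the counter
        else:
            seen[key] = dict(ev, count=1)    # first occurrence in this window
    return list(seen.values())

events = [
    {"source": "db01", "type": "disk_full", "time": 0},
    {"source": "db01", "type": "disk_full", "time": 10},
    {"source": "web02", "type": "timeout",  "time": 15},
]
```

Collapsing duplicates keeps a flood of identical alerts from hiding the single new event that actually needs attention on the dashboard.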

Chapter 4. Event Management Categories and Best Practices

Event management issues need to be addressed when an organization begins monitoring an IT environment for the first time, decides to implement a new set of systems management tools, or wants to rectify problems with its current implementation. Often it is the tool implementers who decide the approach to use in handling events. Where multiple tools are implemented by different administrators, inconsistent policies and procedures arise. The purpose of this chapter is to provide best practices for both the general implementation approach an organization uses to monitor its environment and the specific event management concepts defined in Chap. 2, “Introduction to Event Management.”
Bahman Zohuri, Masoud Moghaddam

Chapter 5. Dynamic and Static Content Publication Workflow

Dynamic content publishing is a method of designing publications in which layout templates are created that can contain different content in different publications. Using this method, page designers do not work on finished pages but rather on various layout templates and pieces of content, which can then be combined to create a number of finished pages. In cases where the same content is used in multiple layouts, the same layout is used for several different sets of content, or both, dynamic page publishing can offer significant efficiency advantages over a traditional system of page-by-page design. This technology is often leveraged in Web-to-print solutions for corporate intranets to enable customization and ordering of printed materials, advertising automation workflows inside advertising agencies, catalog generation solutions for retailers, and variable digital print-on-demand solutions for highly personalized one-to-one marketing. A digital printing press often prints the output from these solutions. Dynamic content publishing is a tool that can enhance a Business Resilience System (BRS) by publishing all warnings and events within the BRS through the enterprise content management (ECM) of enterprises or organizations, for stakeholders and decision-makers, as well as by building a knowledge base (KB) database for its infrastructure. A KB is a technology used to store complex structured and unstructured information used by a computer system.
Bahman Zohuri, Masoud Moghaddam

Chapter 6. What Is Boolean Logic and How It Works

If you want to understand the answer to this question down at the very core, the first thing you need to understand is something called Boolean logic. Boolean logic, originally developed by George Boole in the mid-1800s, allows quite a few unexpected things to be mapped into bits and bytes. The great thing about Boolean logic is that, once you get the hang of things, Boolean logic (or at least the parts you need in order to understand the operations of computers) is outrageously simple. In this chapter, we will first discuss simple logic "gates" and then see how to combine them into something useful. A contemporary of Charles Babbage, whom he briefly met, Boole is these days credited as being the "forefather of the information age." An Englishman by birth, in 1849 he became the first professor of mathematics at Ireland's new Queen's College (now University College) Cork. He died at the age of 49 in 1864, and his work might never have had an impact on computer science without somebody like Claude Shannon, who 70 years later recognized the relevance of Boole's symbolic logic for engineering. As a result, Boole's thinking has become the practical foundation of digital circuit design and the theoretical grounding of the digital age.
Bahman Zohuri, Masoud Moghaddam
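The idea of simple gates combined "into something useful" can be sketched directly: the four basic gates, then a half adder built from them, which adds two bits. A minimal sketch, not code from the book.

```python
# The four basic logic gates, over bits 0 and 1.
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a
def XOR(a, b): return a ^ b   # exclusive OR: 1 when inputs differ

def half_adder(a, b):
    """Combine gates into something useful: add two bits.

    Returns (sum, carry), e.g. 1 + 1 = binary 10 -> sum 0, carry 1.
    """
    return XOR(a, b), AND(a, b)
```

Chaining two half adders and an OR gate yields a full adder, and a row of full adders adds whole binary numbers; this is exactly how Boole's logic becomes arithmetic in hardware.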

Chapter 7. What Is Fuzzy Logic and How It Works

The idea of fuzzy logic was first advanced by Dr. Lotfi Zadeh of the University of California at Berkeley in the 1960s. Dr. Zadeh was working on the problem of computer understanding of natural language. Natural language (like most other activities in life and indeed the universe) is not easily translated into the absolute terms of 0 and 1. (Whether everything is ultimately describable in binary terms is a philosophical question worth pursuing, but in practice much data we might want to feed a computer is in some state in between and so, frequently, are the results of computing.) It may help to see fuzzy logic as the way reasoning really works, and binary or Boolean logic is simply a special case of it.
Bahman Zohuri, Masoud Moghaddam
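The "state in between" 0 and 1 is captured by a membership function, which assigns each input a degree of truth. Below is a standard triangular membership function applied to a hypothetical "warm temperature" concept; the 18/25/32 °C profile is an assumption for illustration, not from the book.

```python
def triangular(x, a, b, c):
    """Triangular membership: 0 at `a`, rising to 1 at `b`, back to 0 at `c`."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Degree to which 24 deg C counts as "warm" on an assumed 18-25-32 profile:
warm_24 = triangular(24, 18, 25, 32)   # about 0.86: mostly, but not fully, warm
```

Note that when the inputs are restricted to the crisp endpoints 0 and 1, such graded reasoning collapses back to ordinary Boolean logic, which is the sense in which Boolean logic is a special case.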

Chapter 8. Mathematics and Logic Behind Boolean and Fuzzy Computation

Boolean algebra (BA) was introduced by George Boole in his first book The Mathematical Analysis of Logic (1847) and set forth more fully in his An Investigation of the Laws of Thought (1854). According to Huntington, the term "Boolean algebra" was first suggested by Sheffer in 1913. Fuzzy logic is a form of many-valued logic in which the truth values of variables may be any real number between 0 and 1, considered to be "fuzzy." By contrast, in Boolean logic, the truth values of variables may only be the "crisp" values 0 or 1. Fuzzy logic has been employed to handle the concept of partial truth, where the truth value may range between completely true and completely false. Furthermore, when linguistic variables are used, these degrees may be managed by specific (membership) functions. The term fuzzy logic was introduced with the 1965 proposal of fuzzy set theory by Lotfi Zadeh. Fuzzy logic had, however, been studied since the 1920s as infinite-valued logic, notably by Łukasiewicz and Tarski. In this chapter we look at both of these logics holistically; extensive details are beyond the scope of this book, and we encourage readers to refer to the many books and articles available online.
Bahman Zohuri, Masoud Moghaddam
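The relation between the two logics can be shown computationally: Zadeh's min/max connectives operate on any truth value in [0, 1], yet reduce exactly to Boolean AND/OR/NOT when inputs are the crisp values 0 and 1. A minimal sketch using the standard Zadeh operators.

```python
# Zadeh's fuzzy connectives over truth values in [0, 1].
def f_and(x, y): return min(x, y)   # fuzzy conjunction (min t-norm)
def f_or(x, y):  return max(x, y)   # fuzzy disjunction (max s-norm)
def f_not(x):    return 1 - x       # fuzzy negation

# On crisp 0/1 inputs the fuzzy operators coincide with Boolean logic:
crisp_checks = all(
    f_and(a, b) == (a and b) and f_or(a, b) == (a or b)
    for a in (0, 1) for b in (0, 1)
)
```

Other t-norms exist (e.g., the product t-norm, or Łukasiewicz's max(0, x + y - 1)), all agreeing on crisp inputs and differing only on the partial truths in between.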

Chapter 9. Building Intelligent Models from Data Mining and Expert Knowledge

While the idea of a data warehouse remains the core ideal of most corporate IT shops, the concepts surrounding the organization and architecture and, especially, the delivery mechanisms have changed remarkably. In today's rapidly changing and highly competitive marketplace, the idea of physical centralization has given way to a virtual data warehouse tied together with message-oriented middleware and distributed through application servers, Web servers, and intelligent database systems. The overriding influence in the corporate response to its information assets has been, of course, the dramatic rise of the Internet as a knowledge-bearing framework. From the global reach of the Internet, corporations have carved out their own pieces of this universe—intranets to bind together the information needs of the enterprise, extranets to solidify and control supply chains, and B2B and B2C service nets to give even the smallest corporation an equal footing with corporate giants as well as an essentially low-cost worldwide online presence. The Internet has given corporate decision-makers and knowledge workers vast (and sometimes seemingly infinite) access to raw data—in fact, to "raw" knowledge.
Bahman Zohuri, Masoud Moghaddam

Chapter 10. What Is Data Analysis from Data Warehousing Perspective?

Analysis of data is a process of inspecting, cleansing, transforming, and modeling data with the goal of discovering useful information, suggesting conclusions, and supporting decision-making. Data analysis has multiple facets and approaches, encompassing diverse techniques under a variety of names, in different business, science, and social science domains. Data is collected from a variety of sources. The requirements may be communicated by analysts to custodians of the data, such as information technology personnel within an organization. The data may also be collected from sensors in the environment, such as traffic cameras, satellites, recording devices, etc. It may also be obtained through interviews, downloads from online sources, or reading documentation. Data initially obtained must be processed or organized for analysis. For instance, this may involve placing data into rows and columns in a table format (i.e., structured data) for further analysis, such as within a spreadsheet or statistical software.
Bahman Zohuri, Masoud Moghaddam
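The step of organizing raw data into rows and columns for analysis can be sketched with the standard library alone. The sensor names and readings below are invented for illustration.

```python
import csv
import io

# Raw text as it might arrive from a download or a recording device.
raw = "sensor,reading\ncam01,42\ncam02,17\n"

# Organize it into rows and columns (structured data) ...
rows = list(csv.DictReader(io.StringIO(raw)))

# ... so that analysis, here a simple average, becomes straightforward.
avg = sum(int(r["reading"]) for r in rows) / len(rows)
```

In practice a spreadsheet or a statistical package plays the role of `csv.DictReader` here, but the principle is the same: structure first, then inspect, cleanse, transform, and model.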

Chapter 11. Boolean Computation Versus Fuzzy Logic Computation

Computational intelligence offers an in-depth exploration of the adaptive mechanisms that enable intelligent behavior in complex and changing environments. The main focus of this chapter is the computational modeling of biological and man-made intelligent systems, encompassing swarm intelligence, fuzzy systems, artificial neural networks, artificial immune systems, and evolutionary computation. This chapter briefly provides readers with broad knowledge of computational intelligence (CI) paradigms and algorithms, inviting readers to implement and solve real-world, complex problems within the CI development framework. Man has learned much from studies of natural systems, using what has been learned to develop new algorithmic models to solve complex problems. This book presents an introduction to some of these technological paradigms under the umbrella of computational intelligence (CI). In this context, the chapter to some degree discusses artificial neural networks, evolutionary computation, swarm intelligence, artificial immune systems, and fuzzy systems, which are, respectively, models of the following natural systems: biological neural networks, evolution, swarm behavior of social organisms, natural immune systems, and human thinking processes.
Bahman Zohuri, Masoud Moghaddam

Chapter 12. Defining Threats and Critical Points for Decision-Making

Although humans have been thinking critically since the first Homo habilis picked up a stone tool, critical thinking as a process has only become one of the most valuable business skills in the last century. Many new decision-making strategies relying heavily on critical thinking skills were created over the period between 1950 and 1970, including CATWOE, PEST, and the Cause and Effect Analysis model. In this chapter, we discuss the processes that allow us to do critical thinking and decision-making, based on the threats against day-to-day operations and normal processes in an organization or enterprise. The effectiveness of a leader is proportional to the effectiveness of the decisions the leader makes and their cascading impacts as decisions turn into action, both good and bad.
Bahman Zohuri, Masoud Moghaddam

Chapter 13. A Simple Model of Business Resilience System

In this chapter, we recommend and define the scope of a simple Business Resilience System (BRS) based on a simple infrastructure that one could design. As we said, this is just a simple approach intended to give readers some ideas. A more complex system and infrastructure needs a more sophisticated design and approach to arrive at an appropriate and applicable BRS for the organization or enterprise.
Bahman Zohuri, Masoud Moghaddam

Chapter 14. Business Resilience System Topology of Hardware and Software

Engineers endow artifacts with abilities to cope with expected anomalies. These abilities may make the system robust. They are, however, designed features, which by definition cannot make the system "resilient." Humans at the front end (e.g., operators, maintenance people) are inherently adaptive and productive, which allows them to achieve better performance and sometimes even to exhibit astonishing abilities in unexpected anomalies. However, this admirable human characteristic is a double-edged sword. Normally it works well, but sometimes it may lead to a disastrous end. Hence, a system relying on such human characteristics in an uncontrolled manner should not be called "resilient." A system should only be called "resilient" when it is tuned in such a way that it can utilize its potential abilities, whether engineered features or acquired adaptive abilities, to the utmost extent and in a controlled manner, in both expected and unexpected situations or circumstances.
Bahman Zohuri, Masoud Moghaddam

Chapter 15. Cloud Computing-Driven Business Resilience System

Cloud computing is an emerging commercial infrastructure for cost-efficient, Internet-based computing, where customers can access information from a Web browser according to their requirements. Cloud computing, as a general term, covers anything that involves delivering hosted services over the Internet. It is based on the concept of shared computational, storage, network, and application resources provided by a third party. Knowledge is power; thus, learning from experience is a fundamental way for individuals or organizations to improve and avoid previous mistakes.
Bahman Zohuri, Masoud Moghaddam

Chapter 16. A General Business Resilience System Infrastructure

Knowledge is power; thus, learning from experience is a fundamental way for individuals or organizations to improve and avoid previous mistakes. Accident Investigations (AI) and Operational Safety Reviews (OSR) are valuable for evaluating technical issues, safety management systems, human performance, and environmental conditions, to prevent accidents through a process of continuous organizational learning.
Bahman Zohuri, Masoud Moghaddam

Backmatter
