
About this Book

As e-commerce becomes the norm of business transactions and information becomes an essential commodity, it is vital that extensive efforts be made to examine and rectify the problems with the underlying architectures, processes, methods, and tools, as well as organizational structures, that are involved in providing and utilizing services relating to information technology. Such a holistic view of the relevant structures is required in order to identify all of the key aspects that can affect network security. Unfortunately, today's systems and practices, although they have proved to be useful and become widespread, contain significant unnecessary complexity. This complexity provides many loopholes that make systems and practices vulnerable to malicious attacks by hackers as well as by individual and organized criminals. Further, there are enormous risks due to malfunction of the systems. The holes in the network system cannot simply be plugged up by the use of cryptography and firewalls. While many changes need to be made in operating systems and system software with respect to security, this alone does not solve the problem. The problems cannot be solved by addressing only a single key aspect of network security. A holistic approach is required. Sumit Ghosh has provided in this book such a holistic view of the area of network security. Thus, it is a most welcome contribution.

Table of Contents

Frontmatter

1. Evolution of Network Security and Lessons Learned from History

Abstract
The business of providing security for the transport of information has been evolving for thousands of years, from the Chinese messenger service through the American Civil War to the two world wars and today’s data networks. With the advent of computer networks and, more recently, the increased reliance on networks and associated resources used by the military, government, industry, and academia, the pace of the evolution has greatly accelerated.
Sumit Ghosh

2. A Fundamental Framework for Network Security

Abstract
A natural starting point in implementing a network security system should consist in a comprehensive definition that includes all areas related to network security and applies to all types of users from the military, government, and industry. Extensive search reveals the lack of such a definition or framework in the literature, and the underlying reason may be described as follows. Different classes of users have developed their own unique definitions to encapsulate their own security concerns, and their frameworks are incompatible with one another. While these unique definitions may have been adequate when networks were closed and isolated, they are inappropriate in today’s climate of increased interconnection between networks. Without a common definition for network security, users can no longer protect their data in interconnected networks. The need for a standard definition is genuine, and it must enable a unified and comprehensive view of security among civilian, military, and government networks. It must provide a basis to address, fundamentally, every weakness in a given network. It must also apply to every level of the network, starting at the highest network-of-networks level and descending to the single computing node that maintains connections with other nodes. In essence, the common standard for defining network security will enable the understanding of the security posture of an individual network, comprehensively facilitate the comparative evaluation of the security of two or more networks, and permit the determination of the resulting security of a composite network formed from connecting two or more networks.
It is important to observe that the framework for network security constitutes a methodology for organizing and categorizing actual implementations of network security. The framework does not provide implementations of network security. Instead, it offers a map for organizing and describing mechanisms to achieve practical network security. Consider, for example, a specific encryption device that can both encrypt and decrypt data on a communications link. While the device corresponds to an implementation of network security, the specific security area constitutes communications security. For further details of security devices the reader is referred to Stallings [13], Pfleeger [14], and White, Fisch, and Pooch [15].
Sumit Ghosh

3. User-Level Security on Demand in ATM Networks: A New Paradigm

Abstract
Since World War II, the focus in the security community has been on cryptography that aims to protect written traffic through encoding and decoding. With the proliferation of computers and the birth of IP networks, of which the Internet is a prime example, the role of cryptography has also expanded and has continued to dominate network security. Security in the Internet assumes the form of encoding data packets through cryptographic techniques [63] [64] coupled with peer-level, end-to-end authentication mechanisms [65], such as Kerberos [66], at the transport or higher layers of the OSI model. This is necessitated by a fundamental characteristic of store-and-forward networks: that the actual intermediate nodes through which packets propagate are unknown a priori. A potential weakness of this approach may be described as follows. Conceivably, in the worldwide Internet, a data packet, though encoded, may find itself propagating through a node or a set of nodes in an insecure region of the world where it may be intercepted by a hostile unit. While there is always a finite probability, however small, that the hostile unit may successfully break the cryptographic technique, even if the coding is not compromised, the hostile unit may simply destroy the packet, thereby causing the end systems to trigger retransmissions, which, in effect, slows down the network and constitutes a performance attack. The philosophy underlying the security approach in the Internet may be traced to the end-to-end reasoning in the survey paper by Voydok and Kent [67]. They are cognizant of the need to protect the increasing quantity and value of the information being exchanged through the networks of computers, and they assume a network model in which the two ends of any data path terminate in secure areas, while the remainder may be subject to physical attack. Accordingly, cryptographic communications security, i.e., link encryption, will defeat wiretapping.
Furthermore, to defeat intruders who are otherwise legitimate users of the network, authentication and access-control techniques are essential. Voydok and Kent state a crucial assumption: for successful link encryption, all intermediate nodes—packet switches and gateways—must be physically secure, and the hardware and software components must be certified to isolate the information on each packet of data traffic transported through the node. The difficulty with this assumption in today’s rapidly expanding, worldwide Internet is clear. Increasingly, however, researchers are criticizing the overemphasis on cryptography and are stressing the need to focus on other, equally important, aspects of security, including denial of service and attacks aimed at performance degradation. Power [26] warns of a new kind of threat, information warfare, which consists in disabling or rendering useless the enemy’s key networks, including the command and control, power grid [20], financial, and telecommunications networks. It may be pointed out that the literature of the 1970s and 1980s contains a number of references to many of the noncryptographic security concerns that had been proposed primarily for operating systems. Thompson [68] warns of the danger of compiling malicious code, deliberately or accidentally, into an operating system and labels them Trojan horses. In enumerating the basic principles for information protection, Saltzer and Schroeder [69] warn against the unauthorized denial of use and cite, as examples, the crashing of a computer, the disruption of a scheduling algorithm, and the firing of a bullet into a computer. They also propose extending the use of encipherment and decipherment beyond their usual role in communications security to authenticate users. In stating that concealment is not security, Grampp and Morris [70] reflect the reality that computer systems ought to remain open, and clever techniques must be invented to ensure information security.
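The performance attack described above, in which a hostile node silently discards encoded packets to force retransmissions, can be illustrated with a small back-of-the-envelope model. The sketch below is not from the book; it assumes a simple stop-and-wait retransmission discipline and an invented drop probability, purely to show how goodput collapses as the adversary's drop rate rises.

```python
# Illustrative sketch: effect of deliberate packet drops at one hostile node
# on end-to-end goodput via retransmissions. The stop-and-wait model and the
# drop probabilities below are assumptions, not figures from the text.

def expected_transmissions(p: float) -> float:
    """Mean number of transmissions until one copy survives drop rate p."""
    if not 0.0 <= p < 1.0:
        raise ValueError("p must be in [0, 1)")
    return 1.0 / (1.0 - p)  # geometric distribution mean

def effective_throughput(base_rate: float, p: float) -> float:
    """Goodput after retransmission overhead, in the units of base_rate."""
    return base_rate / expected_transmissions(p)

for p in (0.0, 0.1, 0.5):
    print(f"drop rate {p:.1f}: goodput {effective_throughput(100.0, p):.1f} Mb/s")
```

Note that the attacker never breaks the cipher: the encoded payload is untouched, yet a 50% drop rate halves the usable bandwidth, which is exactly the point of a performance attack.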
Sumit Ghosh

4. The Concept of Node Status Indicator (NSI) and Performance Analysis

Abstract
Traditionally, incorporating security into a system has been an afterthought, and the techniques employed have generally been ad hoc. As a result, performance degradation is frequently observed to accompany secure systems. In the network security literature, Runge [88], Hale [89], and Stacey [90] observe that while the current solutions provide security in specific areas, they give rise to performance degradation, fail to allocate resources optimally, and are expensive. A more serious difficulty has been the lack of detailed reports in the network security literature describing the validation of the proposed security mechanisms prior to developing an actual prototype and their performance analysis, especially for large-scale representative systems. Stevenson, Hillery, and Byrd [76] propose the use of cryptography to achieve data privacy and digital signatures to authenticate end users during the setup procedure. Their claim that a secure call setup must complete within 4 seconds of initiating the request is neither accompanied by any scientific validation nor related to any published scientific standard. In principle, the 4-second mandate, as stated, is difficult to generalize, since the call setup time is a function of the relative locations of the source and destination nodes, i.e., whether they are intra-group, inter-group, etc.
Sumit Ghosh

5. “Mixed-Use” Network

Abstract
The current network security paradigm coupled with the desire to transport classified traffic securely has caused the US Department of Defense to maintain its own isolated networks, distinct from the public ATM network infrastructure. Internally, the DoD maintains four types of completely separate and isolated networks to carry Top Secret, Secret, Confidential, and unclassified traffic. A public ATM network may be viewed as carrying unclassified, or nonsecure, traffic. While the cost of maintaining four separate network types is becoming increasingly prohibitive to the DoD, the inability of the public and DoD to utilize each other’s network resources runs counter to the current atmosphere of dual use and economies of scale. This chapter introduces the concept of a mixed-use network, wherein the four DoD network types and the public ATM network are coalesced into a single unified network that transports all four types of traffic, efficiently and without compromising security. In a mixed-use network the ATM nodes and links that are common to the DoD and public networks are labeled joint-use, and they must necessarily be placed under the jurisdiction of the military for obvious protection of the security assets. This constitutes the first of two key strategies toward the practical acceptance of the notion of mixed-use networks. The control of all other nodes and links remains unchanged. Under the second strategy, although all joint-use links and nodes are subject to military control, the NSI value for a peer node Y recorded at a node X is the result of a new NSI value received from Y through flooding plus other information on the state of Y that X acquires independently through different mechanisms. The concept of mixed-use is the direct result of the user-level security on demand principle that has recently been introduced in the literature, and one that is enabled by the fundamental security framework and the basic characteristic of ATM networks.
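The second strategy, under which node X records an NSI for peer Y by combining Y's flooded value with X's own independent observations, can be sketched as a simple blend. The weighting scheme below is an assumption of mine for illustration; the book does not specify how the two inputs are combined, only that both contribute.

```python
# Sketch (assumed combination rule, not the book's algorithm): node X
# records an NSI for peer Y from two inputs: the value Y floods, and
# state information X gathers about Y independently.

def record_nsi(flooded_nsi: float, local_observation: float,
               trust_in_peer: float = 0.5) -> float:
    """Blend the peer-reported NSI with independently acquired state.
    trust_in_peer weights how much X believes Y's self-report."""
    return trust_in_peer * flooded_nsi + (1.0 - trust_in_peer) * local_observation

# A node that floods a healthy status (0.9) but looks degraded to X (0.3):
print(record_nsi(flooded_nsi=0.9, local_observation=0.3))
```

The design point the chapter makes survives any particular weighting: because X folds in information Y cannot control, a compromised joint-use node cannot fully dictate the NSI that its peers record for it.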
Sumit Ghosh

6. Systematic Analysis of Vulnerabilities and Synthesis of Security Attack Models for ATM Networks

Abstract
As complex systems, networks consist of a number of constituent elements that are geographically dispersed, are semiautonomous in nature, and interact with one another and with users, asynchronously. Given that the network design task is already intrinsically complex, it is natural for the traditional network designer to focus, in order to save time and effort, only on those principles and interactions, say D, that help accomplish the key design objectives of the network. The remainder of the interactions, say U, are viewed as “don’t cares” or passive, bearing no adverse impact under normal operating conditions. In reality, however, both internal and external stress may introduce abnormal operating conditions into the network under which the set U may begin to induce any number of unintended effects, even catastrophic failure. A secure network design must not only protect its internal components from obvious attacks from the external world but, and this is equally important, resist internal attacks from two sources: foreign elements that successfully penetrate the network and attack from within, and internal components that spin out of control and become potentially destructive. This chapter introduces the notion of network vulnerability analysis, conceptually organized into three phases. Phase I focuses on systematically examining every possible interaction from the perspective of its impact on the key design objectives of the network, and constitutes an indispensable element of secure network design. Given that the number of interactions in a typical real-world network is large, to render the effort tractable, phase I must be driven from a comprehensive and total understanding of the fundamental principles that define the network. Phase I is likely to yield a nonempty set of potential scenarios under which the network may become vulnerable.
In phase II, each of these weaknesses is selected, one at a time, and, where possible, a corresponding attack model is synthesized. The purpose of the attack model is to manifest the vulnerability through an induced excitement and guide its effect to an observable output. The attack model assumes the form of a distinct executable code description, encapsulating the abnormal behavior of the network, and assumes an underlying executable code description that emulates the normal network behavior. In phase III, the attack models are simulated, one at a time, on an appropriate test bed, with two objectives. First, the simulation verifies the thinking underlying the attack model, i.e., whether the attack model succeeds in triggering the vulnerability and forcing its manifestation to be detected at an observable output. When the first objective is met, the simulation often reveals the impact of the attack model on network performance. Under the second objective, the extent of the impact is captured through an innovative metric design. The idea of vulnerability analysis closely resembles the techniques of fault simulation and test generation in the discipline of computer-aided design of integrated circuits (ICs). Under fault simulation, to detect the presence of faults in a manufactured IC, first a fault model is proposed, reflecting the type of suspected failures, and second, the IC is “fault simulated” to flush out as many of the internal faults as possible at the observable outputs.
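The three-phase process can be summarized as a minimal control-flow skeleton. This is my own sketch, not code from the book: the interaction names, the suspect predicate, the load-multiplying attack, and the impact metric are all invented stand-ins for the chapter's real weaknesses, attack models, and metrics.

```python
# Skeleton of the three-phase vulnerability analysis (illustrative only).
# Phase I screens interactions for suspects; phase II synthesizes one
# executable attack model per weakness; phase III simulates each model
# against a baseline state and records its impact at an observable output.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class AttackModel:
    name: str
    inject: Callable[[dict], dict]  # perturbs a simulated network state

def phase_one(interactions: List[str],
              is_suspect: Callable[[str], bool]) -> List[str]:
    """Phase I: keep only interactions suspected of harboring a weakness."""
    return [i for i in interactions if is_suspect(i)]

def phase_two(weaknesses: List[str]) -> List[AttackModel]:
    """Phase II: one attack model per weakness. Here every model simply
    floods the target with a 10x load surge (an invented excitement)."""
    return [AttackModel(w, lambda s: {**s, "load": s["load"] * 10.0})
            for w in weaknesses]

def phase_three(models: List[AttackModel], baseline: dict) -> Dict[str, float]:
    """Phase III: simulate each model; the metric is load relative to baseline."""
    impact = {}
    for m in models:
        attacked = m.inject(dict(baseline))
        impact[m.name] = attacked["load"] / baseline["load"]
    return impact

weaknesses = phase_one(["setup", "flooding", "billing"],
                       lambda i: i != "billing")  # hypothetical screen
print(phase_three(phase_two(weaknesses), {"load": 1.0}))
```

The structure mirrors the fault-simulation analogy from the abstract: the attack model plays the role of the fault model, and phase III plays the role of fault-simulating the circuit to force each fault to an observable output.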
Sumit Ghosh

7. Complex Vulnerabilities and Highly Sophisticated Attacks

Abstract
In Chapter 6, vulnerability analysis focused on identifying the weaknesses of a network, starting with the basic principles that underlie the given network. In contrast, this chapter analyzes the key principles and assumptions that define the network itself, in an effort to discover the presence of fundamental weaknesses, if any, and the exact circumstances under which the network may be broken. Clearly, such vulnerabilities are highly complex and are likely to require sophisticated attacks. Consider IP networks, a key strength of which is resilience, stemming from the lack of a priori knowledge of the exact route of any IP packet. A key assumption underlying this resilience is that an IP router, upon intercepting an incident IP packet, forwards it in the general direction of its final destination node toward a subsequent IP node through the least congested link. When a perpetrator intercepts and deliberately discards an IP packet at an intermediate node, the assumption is abruptly broken. Given that the Internet has been in use for some time, the circumstances under which the fundamental assumptions break down are well known. However, for networks that are relatively recent, such circumstances may not be public knowledge, implying that someone, somewhere, may know precisely how to bring a network down.
Sumit Ghosh

8. Future Issues in Information Systems Security

Abstract
Given the incessant proliferation of information system networks into everyday life and the fact that by definition, a network’s resources are shared among its users, network security will continue to play a dominant role. Conceivably, the future will witness networks that encompass very large geographical distances, support enormous increases in the numbers of users, nodes, and links, and offer highly sophisticated services, all of which will impose a greater demand on security.
Sumit Ghosh

Backmatter

Additional Information