
About this Book

ADVANCES IN DIGITAL FORENSICS XIV
Edited by: Gilbert Peterson and Sujeet Shenoi

Digital forensics deals with the acquisition, preservation, examination, analysis and presentation of electronic evidence. Computer networks, cloud computing, smartphones, embedded devices and the Internet of Things have expanded the role of digital forensics beyond traditional computer crime investigations. Practically every crime now involves some aspect of digital evidence; digital forensics provides the techniques and tools to articulate this evidence in legal proceedings. Digital forensics also has myriad intelligence applications; furthermore, it has a vital role in information assurance - investigations of security breaches yield valuable information that can be used to design more secure and resilient systems.

Advances in Digital Forensics XIV describes original research results and innovative applications in the discipline of digital forensics. In addition, it highlights some of the major technical and legal issues related to digital evidence and electronic crime investigations. The areas of coverage include: Themes and Issues; Forensic Techniques; Network Forensics; Cloud Forensics; and Mobile and Embedded Device Forensics.

This book is the fourteenth volume in the annual series produced by the International Federation for Information Processing (IFIP) Working Group 11.9 on Digital Forensics, an international community of scientists, engineers and practitioners dedicated to advancing the state of the art of research and practice in digital forensics. The book contains a selection of nineteen edited papers from the Fourteenth Annual IFIP WG 11.9 International Conference on Digital Forensics, held in New Delhi, India in the winter of 2018.

Advances in Digital Forensics XIV is an important resource for researchers, faculty members and graduate students, as well as for practitioners and individuals engaged in research and development efforts for the law enforcement and intelligence communities.

Gilbert Peterson, Chair, IFIP WG 11.9 on Digital Forensics, is a Professor of Computer Engineering at the Air Force Institute of Technology, Wright-Patterson Air Force Base, Ohio, USA.

Sujeet Shenoi is the F.P. Walter Professor of Computer Science and a Professor of Chemical Engineering at the University of Tulsa, Tulsa, Oklahoma, USA.

Table of Contents

Frontmatter

Themes and Issues

Frontmatter

Measuring Evidential Weight in Digital Forensic Investigations

Abstract
This chapter describes a method for obtaining a quantitative measure of the relative weight of each individual item of evidence in a digital forensic investigation using a Bayesian network. The resulting evidential weights can then be used to determine a near-optimal, cost-effective triage scheme for the investigation in question.
Richard Overill, Kam-Pui Chow
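
The following minimal Python sketch illustrates the general idea of evidential weight as a log-likelihood ratio in a two-hypothesis Bayesian model; it is not the authors' network, and the conditional probabilities are invented for illustration.

    # Hypothesis H: the suspect performed the activity under investigation.
    # Each evidence item E has conditional probabilities P(E|H) and P(E|~H);
    # its evidential weight is the log-likelihood ratio.
    import math

    # Hypothetical conditional probabilities (illustrative values only).
    evidence = {
        "browser_history_hit":  (0.90, 0.10),   # (P(E|H), P(E|~H))
        "usb_insertion_log":    (0.60, 0.20),
        "deleted_file_carved":  (0.70, 0.40),
    }

    def weight(p_given_h: float, p_given_not_h: float) -> float:
        """Evidential weight as a log10 likelihood ratio (bans)."""
        return math.log10(p_given_h / p_given_not_h)

    # Rank evidence items by weight to drive a cost-effective triage order.
    ranked = sorted(evidence.items(), key=lambda kv: -weight(*kv[1]))
    for name, (ph, pnh) in ranked:
        print(f"{name}: weight = {weight(ph, pnh):.2f} bans")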

Challenges, Opportunities and a Framework for Web Environment Forensics

Abstract
The web has evolved into a robust and ubiquitous platform, changing almost every aspect of people's lives. Its unique characteristics pose new challenges to digital forensic investigators. For example, it is much more difficult to gain access to data stored online than to data on the hard drive of a laptop. Although web data is more challenging for forensic investigators to acquire and analyze, web environments continue to store more data than ever on behalf of users.
This chapter discusses five critical challenges related to forensic investigations of web environments and explains their significance from a research perspective. It presents a framework for web environment forensics comprising four components: (i) evidence discovery and acquisition; (ii) analysis space reduction; (iii) timeline reconstruction; and (iv) structured formats. The framework components are non-sequential in nature, enabling forensic investigators to readily incorporate the framework in existing workflows. Each component is discussed in terms of how an investigator might use the component, the challenges that remain for the component, approaches related to the component and opportunities for researchers to enhance the component.
Mike Mabey, Adam Doupé, Ziming Zhao, Gail-Joon Ahn

Internet of Things Forensics – Challenges and a Case Study

Abstract
During this era of the Internet of Things, millions of devices such as automobiles, smoke detectors, watches, glasses and webcams are being connected to the Internet. The number of devices with the ability to monitor and collect data is continuously increasing. The Internet of Things enhances human comfort and convenience, but it raises serious questions related to security and privacy. It also creates significant challenges for digital investigators when they encounter Internet of Things devices at crime scenes. In fact, current research focuses on security and privacy in Internet of Things environments as opposed to forensic acquisition and analysis techniques for Internet of Things devices. This chapter focuses on the major challenges in Internet of Things forensics. A forensic approach for Internet of Things devices is presented using a smartwatch as a case study. Forensic artifacts retrieved from the smartwatch are analyzed and the evidence found is discussed with respect to the challenges facing Internet of Things forensics.
Saad Alabdulsalam, Kevin Schaefer, Tahar Kechadi, Nhien-An Le-Khac

Forensic Techniques

Frontmatter

Recovery of Forensic Artifacts from Deleted Jump Lists

Abstract
Jump lists, which were introduced in the Windows 7 desktop operating system, have attracted the interest of researchers and practitioners in the digital forensics community. The structure and forensic implications of jump lists have been explored widely. However, little attention has focused on anti-forensic activities such as jump list evidence modification and deletion. This chapter proposes a new methodology for identifying and recovering deleted entries in Windows 10 AutoDest jump list files. The proposed methodology is best suited to scenarios where users intentionally delete jump list entries to hide evidence of their activities. The chapter also examines how jump lists are affected when software applications are installed and when the associated files are accessed from external storage devices. In particular, artifacts related to file access, such as the lists of most recently used and most frequently used files, file modification, access and creation timestamps, names of applications used to access files, and the file paths, volume names and serial numbers from where the files were accessed, can be recovered even after entries are removed from the jump lists and the software applications are uninstalled. The results demonstrate that the analysis of jump lists is immensely helpful in constructing timelines of user activities on Windows 10 systems.
Bhupendra Singh, Upasna Singh, Pankaj Sharma, Rajender Nath
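
As a hedged illustration of where such residue lives, the sketch below uses the olefile package to enumerate the streams of a Windows 10 AutomaticDestinations jump list and compares the number of LNK streams against the live entry count in the DestList header. This is not the chapter's methodology; the header field offsets follow published reverse-engineering of the format and should be verified against real evidence.

    import struct
    import olefile  # pip install olefile

    def audit_jumplist(path: str) -> None:
        ole = olefile.OleFileIO(path)
        streams = ["/".join(s) for s in ole.listdir()]
        # Hex-named streams hold LNK data; DestList is the index over them.
        lnk_streams = [s for s in streams if s != "DestList"]
        header = ole.openstream("DestList").read(32)
        # Assumed header layout: uint32 version, uint32 live entry count.
        version, live_entries = struct.unpack_from("<II", header, 0)
        print(f"DestList version {version}: {live_entries} live entries, "
              f"{len(lnk_streams)} LNK streams")
        if len(lnk_streams) > live_entries:
            print(f"{len(lnk_streams) - live_entries} orphaned stream(s) are "
                  f"candidates for deleted-entry recovery")
        ole.close()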

Obtaining Precision-Recall Trade-Offs in Fuzzy Searches of Large Email Corpora

Abstract
Fuzzy search is often used in digital forensic investigations to find words that are stringologically similar to a chosen keyword. However, a common complaint is the high rate of false positives in big data environments. This chapter describes the design and implementation of cedas, a novel constrained edit distance approximate string matching algorithm that provides complete control over the types and numbers of elementary edit operations considered in approximate matches. The unique flexibility of cedas facilitates fine-tuned control of precision-recall trade-offs. Specifically, searches can be constrained to the union of matches resulting from any exact edit combination of insertion, deletion and substitution operations performed on the search term. The flexibility is leveraged in experiments involving fuzzy searches of an inverted index of the Enron corpus, a large English email dataset, which reveal the specific edit operation constraints that should be applied to achieve valuable precision-recall trade-offs. The constraints that produce relatively high combinations of precision and recall are identified, along with the combinations of edit operations that cause precision to drop sharply and the combination of edit operation constraints that maximize recall without sacrificing precision substantially. These edit operation constraints are potentially valuable during the middle stages of a digital forensic investigation because precision has greater value in the early stages of an investigation while recall becomes more valuable in the later stages.
Kyle Porter, Slobodan Petrovic
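
A minimal sketch in the spirit of cedas follows (it is not the authors' implementation): a candidate word matches the search term only if it can be produced with at most a specified number of each elementary edit operation, giving the per-operation control over precision-recall trade-offs described above.

    from functools import lru_cache

    def constrained_match(term: str, word: str,
                          max_ins: int, max_del: int, max_sub: int) -> bool:
        """True if word derives from term within the per-operation budgets."""
        @lru_cache(maxsize=None)
        def go(i: int, j: int, ins: int, dele: int, sub: int) -> bool:
            if ins > max_ins or dele > max_del or sub > max_sub:
                return False
            if i == len(term) and j == len(word):
                return True
            options = []
            if i < len(term) and j < len(word):
                if term[i] == word[j]:
                    options.append(go(i + 1, j + 1, ins, dele, sub))      # match
                else:
                    options.append(go(i + 1, j + 1, ins, dele, sub + 1))  # substitute
            if j < len(word):
                options.append(go(i, j + 1, ins + 1, dele, sub))          # insert
            if i < len(term):
                options.append(go(i + 1, j, ins, dele + 1, sub))          # delete
            return any(options)
        return go(0, 0, 0, 0, 0)

    # Allow one insertion and one substitution, but no deletions.
    print(constrained_match("enron", "ennron", 1, 0, 1))  # True (one insertion)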

Anti-Forensic Capacity and Detection Rating of Hidden Data in the Ext4 Filesystem

Abstract
The rise of cyber crime and the growing number of anti-forensic tools demand more research on combating anti-forensics. A prominent anti-forensic paradigm is the hiding of data at different abstraction layers, including the filesystem layer. This chapter evaluates various techniques for hiding data in the ext4 filesystem, which is commonly used by Android devices. The evaluation uses the capacity and detection rating metrics. Capacity reflects the quantity of data that can be concealed using a hiding technique. Detection rating is the difficulty of finding the concealed artifacts; specifically, the amount of effort required to discover the artifacts. Well-known data hiding techniques as well as new techniques proposed in this chapter are evaluated.
Thomas Göbel, Harald Baier
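
As a small worked example of the capacity metric (not drawn from the chapter), the capacity of the classic file-slack hiding technique is the gap between a file's size and the space its allocated blocks occupy. The 4 KiB block size is an assumption; in practice it is read from the ext4 superblock.

    BLOCK_SIZE = 4096  # bytes; assumption -- read the real value from the superblock

    def slack_capacity(file_size: int, block_size: int = BLOCK_SIZE) -> int:
        """Bytes of slack following the file's data in its last block."""
        if file_size == 0:
            return 0
        allocated = -(-file_size // block_size) * block_size  # ceil to block boundary
        return allocated - file_size

    # A 10,000-byte file occupies three 4 KiB blocks, leaving 2,288 slack bytes.
    print(slack_capacity(10_000))  # 2288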

Detecting Data Leakage from Hard Copy Documents

Abstract
Document fraud has evolved to become a significant threat to individuals and organizations. Data leakage from hard copy documents is a common type of fraud. This chapter proposes a methodology for analyzing printed and photocopied versions of confidential documents to identify the source of a leak. The methodology incorporates a novel font pixel manipulation algorithm that embeds data in the pixels of certain characters of confidential documents in a manner that is imperceptible to the human eye. The embedded data is extracted from a leaked printed or photocopied document to identify the specific document that served as the source. The embedded data is robust in that it can withstand errors introduced by printing, scanning and photocopying documents. Experimental results demonstrate the efficiency, robustness and security of the methodology.
Jijnasa Nayak, Shweta Singh, Saheb Chhabra, Gaurav Gupta, Monika Gupta, Garima Gupta

Network Forensics

Frontmatter

Information-Entropy-Based DNS Tunnel Prediction

Abstract
DNS tunneling techniques are often used for malicious purposes, and network security mechanisms have struggled to detect them. Network forensic analysis has been proposed as a solution, but it is slow, invasive and tedious because forensic analysis tools cope poorly with new and undocumented tunneling techniques.
This chapter presents a method for supporting forensic analysis by automating the inference of tunneled protocols. The internal packet structure of DNS tunneling techniques is analyzed and the information entropy of various network protocols and their DNS tunneled equivalents are characterized. This provides the basis for a protocol prediction method that uses entropy distribution averaging. Experiments demonstrate that the method has a prediction accuracy of 75%. The method also preserves privacy because it only computes the information entropy and does not parse the actual tunneled content.
Irvin Homem, Panagiotis Papapetrou, Spyridon Dosis
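
A minimal sketch of the entropy-distribution idea follows (illustrative only, not the chapter's implementation): compute the Shannon entropy over the bytes of a tunneled payload and predict the protocol whose average entropy is closest. The per-protocol averages below are placeholders that would be learned from training traffic.

    import math
    from collections import Counter

    def shannon_entropy(data: bytes) -> float:
        counts = Counter(data)
        n = len(data)
        return -sum(c / n * math.log2(c / n) for c in counts.values())

    # Hypothetical per-protocol average entropies (bits per byte).
    PROTOCOL_ENTROPY_AVG = {"http": 5.2, "ssh": 7.6, "ftp": 4.8}

    def predict_protocol(payload: bytes) -> str:
        # Privacy-preserving: only the entropy is computed, never the content.
        h = shannon_entropy(payload)
        return min(PROTOCOL_ENTROPY_AVG,
                   key=lambda p: abs(PROTOCOL_ENTROPY_AVG[p] - h))

    print(predict_protocol(b"GET /index.html HTTP/1.1\r\nHost: example.org\r\n\r\n"))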

Collecting Network Evidence Using Constrained Approximate Search Algorithms

Abstract
Intrusion detection systems are defensive tools that identify malicious activities in networks and hosts. In network forensics, investigators often study logs that store alerts generated by intrusion detection systems. This research focuses on Snort, a widely-used, open-source, misuse-based intrusion detection system that detects network intrusions based on a pre-defined set of attack signatures. When a security breach occurs, a forensic investigator typically starts by examining network log files. However, Snort cannot detect unknown attacks (i.e., zero-day attacks) even when they are similar to known attacks; as a result, an investigator may lose evidence in a criminal case.
This chapter demonstrates the ease with which it is possible to defeat the detection of malicious activity by Snort and the possibility of using constrained approximate search algorithms instead of the default Snort search algorithm to collect evidence. Experimental results of the performance of constrained approximate search algorithms demonstrate that they are capable of detecting previously unknown attack attempts that are similar to known attacks. While the algorithms generate additional false positives, the number of false positives can be reduced by the careful choice of constraint values in the algorithms.
Ambika Shrestha Chitrakar, Slobodan Petrovic
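
The sketch below shows a generic Sellers-style approximate substring search of the kind the chapter builds on (not Snort's engine or the authors' exact algorithms): it reports payload positions where a signature occurs within edit distance k, so near-miss variants of a known attack still surface.

    def approx_find(signature: bytes, payload: bytes, k: int):
        m = len(signature)
        prev = list(range(m + 1))           # column for an empty payload prefix
        hits = []
        for j, byte in enumerate(payload):
            curr = [0]                      # a substring match may start anywhere
            for i in range(1, m + 1):
                cost = 0 if signature[i - 1] == byte else 1
                curr.append(min(prev[i] + 1,          # delete from signature
                                curr[i - 1] + 1,      # insert into signature
                                prev[i - 1] + cost))  # match / substitute
            if curr[m] <= k:
                hits.append(j)              # signature ends at payload offset j
            prev = curr
        return hits

    # "/bin/shh" still matches the signature "/bin/sh" within distance 1.
    print(approx_find(b"/bin/sh", b"GET /bin/shh HTTP/1.0", k=1))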

Traffic Classification and Application Identification in Network Forensics

Abstract
Network traffic classification is a necessity for network monitoring, security analyses and digital forensics. Without accurate traffic classification, analyzing all the IP traffic flows in a network imposes enormous computational demands. Classification also reduces the number of flows that need to be examined and prioritized for analysis in forensic investigations.
This chapter presents an automated feature elimination method based on a feature correlation matrix. Additionally, it proposes an enhanced statistical protocol identification method, which is compared against Bayesian network and random forests classification methods that offer high accuracy and acceptable performance. Each classification method is used with a subset of features that best suit the method. The methods are evaluated based on their ability to identify the application layer protocols and the applications themselves. Experiments demonstrate that the random forests classifier yields the most promising results whereas the proposed enhanced statistical protocol identification method provides an interesting trade-off between higher performance and slightly lower accuracy.
Jan Pluskal, Ondrej Lichtner, Ondrej Rysavy
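
A hedged sketch of correlation-matrix-based feature elimination followed by a random forests classifier is shown below, using generic scikit-learn calls rather than the authors' code; the flow features and labels are random placeholders for a real labeled flow dataset.

    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier

    def drop_correlated(df: pd.DataFrame, threshold: float = 0.95) -> pd.DataFrame:
        """Drop one feature from every pair with |correlation| > threshold."""
        corr = df.corr().abs()
        upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
        to_drop = [col for col in upper.columns if (upper[col] > threshold).any()]
        return df.drop(columns=to_drop)

    # Placeholder per-flow statistical features and application-layer labels.
    flows = pd.DataFrame(np.random.rand(200, 6),
                         columns=[f"f{i}" for i in range(6)])
    labels = np.random.choice(["http", "dns", "smtp"], size=200)

    reduced = drop_correlated(flows)
    clf = RandomForestClassifier(n_estimators=100).fit(reduced, labels)
    print(f"kept {reduced.shape[1]} of {flows.shape[1]} features")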

Enabling Non-Expert Analysis of Large Volumes of Intercepted Network Traffic

Abstract
Telecommunications wiretaps are commonly used by law enforcement in criminal investigations. While phone-based wiretapping has seen considerable success, the same cannot be said for Internet taps. Large portions of intercepted Internet traffic are often encrypted, making it difficult to obtain useful information. The advent of the Internet of Things further complicates network wiretapping. In fact, the current level of complexity of intercepted network traffic is almost at the point where data cannot be analyzed without the active involvement of experts. Additionally, investigations typically focus on analyzing traffic in chronological order and predominantly examine the data content of the intercepted traffic. This approach is overly arduous when the amount of data to be analyzed is very large.
This chapter describes a novel approach for analyzing large amounts of intercepted network traffic based on traffic metadata. The approach significantly reduces the analysis time and provides useful insights and information to non-technical investigators. The approach is evaluated using a large sample of network traffic data.
Erwin van de Wiel, Mark Scanlon, Nhien-An Le-Khac

Hashing Incomplete and Unordered Network Streams

Abstract
Deep packet inspection typically uses MD5 whitelists/blacklists or regular expressions to identify viruses, malware and certain internal files in network traffic. Fuzzy hashing, also referred to as context-triggered piecewise hashing, can be used to compare two files and determine their level of similarity. This chapter presents the stream fuzzy hash algorithm that can hash files on the fly regardless of whether the input is unordered, incomplete or has an initially-undetermined length. The algorithm, which can generate a signature of appropriate length using a one-way process, reduces the computational complexity from O(n log n) to O(n). In a typical deep packet inspection scenario, the algorithm hashes files at the rate of 68 MB/s per CPU core and consumes no more than 5 KB of memory per file. The effectiveness of the stream fuzzy hash algorithm is evaluated using a publicly-available dataset. The results demonstrate that, unlike other fuzzy hash algorithms, the precision and recall of the stream fuzzy hash algorithm are not compromised when processing unordered and incomplete inputs.
Chao Zheng, Xiang Li, Qingyun Liu, Yong Sun, Binxing Fang
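
The sketch below illustrates context-triggered piecewise hashing, the family to which the stream fuzzy hash algorithm belongs. It is a toy version (a byte-sum rolling hash and MD5 chunk digests), not the authors' algorithm, but it shows how content-defined chunk boundaries make the signature robust to data appearing at different offsets.

    import hashlib
    import io

    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"
    WINDOW, MODULUS = 7, 64   # trigger roughly every 64 bytes on random data

    def piecewise_hash(stream) -> str:
        window, chunk, sig = bytearray(), bytearray(), []
        for block in iter(lambda: stream.read(4096), b""):
            for byte in block:
                chunk.append(byte)
                window.append(byte)
                if len(window) > WINDOW:
                    window.pop(0)
                rolling = sum(window)                  # toy rolling hash
                if rolling % MODULUS == MODULUS - 1:   # content-defined boundary
                    sig.append(ALPHABET[hashlib.md5(bytes(chunk)).digest()[0] % 64])
                    chunk.clear()
        if chunk:                                      # hash the final partial chunk
            sig.append(ALPHABET[hashlib.md5(bytes(chunk)).digest()[0] % 64])
        return "".join(sig)

    print(piecewise_hash(io.BytesIO(b"forensic network stream " * 200)))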

A Network Forensic Scheme Using Correntropy-Variation for Attack Detection

Abstract
Network forensic techniques help track cyber attacks by monitoring and analyzing network traffic. However, due to the large volumes of data in modern networks and sophisticated attacks that mimic normal behavior and/or erase traces to avoid detection, network attack investigations demand intelligent and efficient network forensic techniques. This chapter proposes a network forensic scheme for monitoring and investigating network-based attacks. The scheme captures and stores network traffic data, selects important network traffic features using the chi-square statistic and detects anomalous events using a novel correntropy-variation technique. An evaluation of the network forensic scheme employing the UNSW-NB15 dataset demonstrates its utility and high performance compared with three state-of-the-art approaches.
Nour Moustafa, Jill Slay
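
For reference, the correntropy statistic at the heart of the technique can be written down in a few lines. The sketch below gives the Gaussian-kernel formula with placeholder data; the variation step and detection thresholds are the chapter's contribution and are not reproduced here.

    import numpy as np

    def correntropy(x: np.ndarray, y: np.ndarray, sigma: float = 1.0) -> float:
        """V_sigma(X, Y): mean of the Gaussian kernel over paired samples."""
        return float(np.mean(np.exp(-((x - y) ** 2) / (2 * sigma ** 2))))

    baseline = np.array([0.25, 0.35, 0.15, 0.3])   # normal traffic profile
    normal   = np.array([0.2, 0.4, 0.1, 0.3])      # ordinary flow (placeholder)
    observed = np.array([0.9, 3.1, 0.2, 2.7])      # suspicious flow (placeholder)

    print(correntropy(baseline, normal))    # close to 1: similar to baseline
    print(correntropy(baseline, observed))  # much lower: candidate anomaly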

Cloud Forensics

Frontmatter

A Taxonomy of Cloud Endpoint Forensic Tools

Abstract
Cloud computing services can be accessed via browsers or client applications on networked devices such as desktop computers, laptops, tablets and smartphones, which are generally referred to as endpoint devices. Data relevant to forensic investigations may be stored on endpoint devices and/or at cloud service providers. When cloud services are accessed from an endpoint device, several files and folders are created on the device; the data can be accessed by a digital forensic investigator using various tools. An investigator may also use an application programming interface made available by a cloud service provider to obtain forensic information from the cloud related to objects, events and file metadata associated with a cloud user. This chapter presents a taxonomy of the forensic tools used to extract data from endpoint devices and from cloud service providers. The tool taxonomy provides investigators with an easily searchable catalog of tools that can meet their technical requirements during cloud forensic investigations.
Anand Kumar Mishra, Emmanuel Pilli, Mahesh Govil

A Layered Graphical Model for Cloud Forensic Mission Attack Impact Analysis

Abstract
Cyber attacks on the systems that support an enterprise's mission can significantly impact its objectives. This chapter describes a layered graphical model designed to support forensic investigations by quantifying the mission impacts of cyber attacks. The model has three layers: (i) an upper layer that models operational tasks and their interdependencies that fulfill mission objectives; (ii) a middle layer that reconstructs attack scenarios based on the interrelationships of the available evidence; and (iii) a lower layer that uses system calls executed by upper-layer tasks to reconstruct attack steps for which evidence is missing. The graphs constructed from the three layers are employed to compute the impacts of attacks on enterprise missions. The National Vulnerability Database – Common Vulnerability Scoring System scores and forensic investigator estimates are used to compute the mission impacts. A case study is presented to demonstrate the utility of the graphical model.
Changwei Liu, Anoop Singhal, Duminda Wijesekera
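
A hedged sketch of the impact-propagation idea follows: impact scores on a 0-10 CVSS-like scale flow upward through a task-dependency graph from compromised components to mission objectives. The tasks, dependencies and scores are invented for illustration; the chapter's layered model is considerably richer.

    from functools import lru_cache

    depends_on = {                      # mission task -> prerequisite tasks
        "mission": ["billing", "reporting"],
        "billing": ["database"],
        "reporting": ["database", "webserver"],
        "database": [], "webserver": [],
    }
    # Direct compromise impacts (placeholder CVSS-style scores, 0-10).
    direct_impact = {"database": 7.5, "webserver": 5.0,
                     "billing": 0.0, "reporting": 0.0, "mission": 0.0}

    @lru_cache(maxsize=None)
    def impact(task: str) -> float:
        """A task's impact: its own plus the worst inherited from prerequisites."""
        inherited = max((impact(d) for d in depends_on[task]), default=0.0)
        return min(10.0, direct_impact[task] + inherited)

    print(f"mission impact: {impact('mission'):.1f}")  # driven by the database CVE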

Mobile and Embedded Device Forensics

Frontmatter

Forensic Analysis of Android Steganography Apps

Abstract
The processing power of smartphones supports steganographic algorithms that were considered to be too computationally intensive for handheld devices. Several steganography apps are now available on mobile phones to support covert communications using digital photographs.
This chapter focuses on two key questions: How effectively can a steganography app be reverse engineered? How can this knowledge help improve the detection of steganographic images and other related files? Two Android steganography apps, PixelKnot and Da Vinci Secret Image, are analyzed. Experiments demonstrate that they are constructed in very different ways and provide different levels of security for hiding messages. The results of detecting steganography files, including images generated by the apps, using three software packages are presented. The results point to an urgent need for further research on reverse engineering steganography apps and detecting images produced by these apps.
Wenhao Chen, Yangxiao Wang, Yong Guan, Jennifer Newman, Li Lin, Stephanie Reinders

Automated Vulnerability Detection in Embedded Devices

Abstract
Embedded devices are widely used today and are rapidly being incorporated in the Internet of Things that will permeate every aspect of society. However, embedded devices have vulnerabilities such as buffer overflows, command injections and backdoors that are often undocumented. Malicious entities who discover these vulnerabilities could exploit them to gain control of embedded devices and conduct a variety of criminal activities.
Due to the large number of embedded devices, non-standard codebases and complex control flows, it is extremely difficult to discover vulnerabilities using manual techniques. Current automated vulnerability detection tools typically use static analysis, but the detection accuracy is not high. Some tools employ code execution; however, this approach is inefficient, detects limited types of vulnerabilities and is restricted to specific architectures. Other tools use symbolic execution, but the level of automation is not high and the types of vulnerabilities they uncover are limited. This chapter evaluates several advanced vulnerability detection techniques used by current tools, especially those involving automated program analysis. These techniques are leveraged in a new automated vulnerability detection methodology for embedded devices.
Danjun Liu, Yong Tang, Baosheng Wang, Wei Xie, Bo Yu

A Forensic Logging System for Siemens Programmable Logic Controllers

Abstract
Critical infrastructure assets are monitored and managed by industrial control systems. In recent years, these systems have evolved to adopt common networking standards that expose them to cyber attacks. Since programmable logic controllers are core components of industrial control systems, forensic examinations of these devices are vital during responses to security incidents. However, programmable logic controller forensics is a challenging task because of the lack of effective logging systems.
This chapter describes the design and implementation of a novel programmable logic controller logging system. Several tools are available for generating programmable logic controller audit logs; these tools monitor and record the values of programmable logic controller memory variables for diagnostic purposes. However, the logged information is inadequate for forensic investigations. To address this limitation, the logging system extracts data from Siemens S7 communications protocol traffic for forensic purposes. The extracted data is saved in an audit log file in an easy-to-read format that enables a forensic investigator to efficiently examine the activity of a programmable logic controller.
Ken Yau, Kam-Pui Chow, Siu-Ming Yiu
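
To illustrate the kind of extraction involved, the sketch below pulls forensic metadata from a single Siemens S7 packet captured on TCP port 102. The TPKT, COTP and S7 header layout follows public protocol documentation and should be verified against real captures before any investigative use.

    import struct

    ROSCTR = {0x01: "Job", 0x02: "Ack", 0x03: "Ack_Data", 0x07: "Userdata"}
    FUNCTION = {0x04: "Read Var", 0x05: "Write Var", 0xF0: "Setup Communication"}

    def parse_s7(tcp_payload: bytes):
        if len(tcp_payload) < 8 or tcp_payload[0] != 0x03:   # TPKT version 3
            return None
        cotp_len = tcp_payload[4]                            # COTP header length
        s7 = tcp_payload[4 + 1 + cotp_len:]                  # skip TPKT + COTP
        if len(s7) < 10 or s7[0] != 0x32:                    # S7 protocol id
            return None
        rosctr = ROSCTR.get(s7[1], f"0x{s7[1]:02x}")
        param_len = struct.unpack_from(">H", s7, 6)[0]
        entry = f"S7 {rosctr}, parameter length {param_len}"
        if rosctr in ("Job", "Ack_Data") and param_len:
            header_len = 10 if rosctr == "Job" else 12       # Ack_Data adds error bytes
            if len(s7) > header_len:
                func = s7[header_len]
                entry += f", function {FUNCTION.get(func, hex(func))}"
        return entry

    # Example Job PDU: TPKT + COTP(DT) + S7 'Setup Communication' request.
    pkt = bytes.fromhex("0300001902f08032010000000000080000f0000001000101e0")
    print(parse_s7(pkt))  # S7 Job, parameter length 8, function Setup Communication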

Enhancing the Security and Forensic Capabilities of Programmable Logic Controllers

Abstract
Industrial control systems are used to monitor and operate critical infrastructures. For decades, the security of industrial control systems was preserved by their use of proprietary hardware and software, and their physical separation from other networks. However, to reduce costs and enhance interconnectivity, modern industrial control systems increasingly use commodity hardware and software, and are connected to vendor and corporate networks, and even the Internet. These trends expose industrial control systems to risks that they were not designed to handle.
This chapter describes a novel approach for enhancing industrial control system security and forensics by adding monitoring and logging mechanisms to programmable logic controllers, key components of industrial control systems. A proof-of-concept implementation is presented using a popular Siemens programmable logic controller. Experiments were conducted to compare the accuracy and performance impact of the proposed method versus the conventional programmable logic controller polling method. The experimental results demonstrate that the new method yields increased anomaly detection coverage and accuracy with only a small performance impact. Additionally, the new method increases the speed of anomaly detection and reduces network overhead, enabling forensic investigations of programmable logic controllers to be conducted more efficiently and effectively.
Chun-Fai Chan, Kam-Pui Chow, Siu-Ming Yiu, Ken Yau