
About this Book

This book constitutes the thoroughly refereed post-proceedings of the 4th International Workshop on Computational Forensics, IWCF 2010, held in Tokyo, Japan in November 2010. The 16 revised full papers presented together with two invited keynote papers were carefully selected during two rounds of reviewing and revision. The papers cover a wide range of current topics in computational forensics including authentication, biometrics, document analysis, multimedia, forensic tool evaluation, character recognition, and forensic verification.

Table of Contents

Frontmatter

Invited Talks

Gestalt Aspects of Security Patterns

In this contribution, we discuss specific aspects of patterns appearing in a humanized context (as, for example, in computational security or forensics) that are not well reflected in a pure feature-classification framework. Gestalt laws are considered a more appropriate way to approach these aspects, but on their own they yield merely an empirical description of the matter, whereas models are needed that could guide engineering procedures. Toward providing such procedures, pure image-processing-based approaches, despite half a century of research, have seemingly made little progress. By contrast, a recently emerging family of neural network architectures based on the model of neural group processing, even in its mostly still premature state (from an engineering point of view), already shows much stronger relevance to the modeling of Gestalt aspects.
Mario Köppen

Physical Security Technologies at Hitachi

Physical security has become one of the most important issues worldwide due to the spreading global use of explosives and illicit drugs. Against this social background, we started developing real-time monitoring technologies based on mass spectrometry for physical security applications. In these technologies, a sample gas is continuously introduced into an ion source and analyzed by a mass spectrometer. We can detect various kinds of organic compounds by analyzing the mass numbers of the observed ions. This technology has been applied to monitoring polychlorinated biphenyls and to detecting explosives, illicit drugs, and chemical weapons. In addition, we developed a small mass spectrometer that can detect human breath. This simple method is useful for preventing drunk driving, since the device can be installed just behind the steering wheel.
Minoru Sakairi

Session 1: Authentication

Exploiting Character Class Information in Forensic Writer Identification

Questioned document examination is extensively used by forensic specialists for criminal identification. This paper presents a writer recognition system based on contour features, operating in identification mode (one-to-many) and working at the level of isolated characters. Individual characters of a writer are manually segmented and labeled by an expert as belonging to one of 62 alphanumeric classes (10 digits and 52 letters, including lowercase and uppercase letters), this being the particular setup used by the forensic laboratory participating in this work. Three different scenarios for identity modeling are proposed, making use, to different degrees, of the class information provided by the alphanumeric samples. Results obtained on a database of 30 writers from real forensic documents show that the character class information given by the manual analysis provides a valuable source of improvement, justifying the significant amount of time spent in manual segmentation and labeling by the forensic specialist.
Fernando Alonso-Fernandez, Julian Fierrez, Javier Galbally, Javier Ortega-Garcia

Toward Forensics by Stroke Order Variation — Performance Evaluation of Stroke Correspondence Methods

We consider personal identification using stroke order variations of online handwritten character patterns, which are written on, e.g., electronic tablets. To extract the stroke order variation of an input character pattern, it is necessary to establish an accurate stroke correspondence between the input pattern and the reference pattern of the same category. In this paper we compare five stroke correspondence methods: individual correspondence decision (ICD), cube search (CS), bipartite weighted matching (BWM), stable marriage (SM), and the deviation-expansion model (DE). After a brief review, the methods are compared experimentally and quantitatively, in terms of both stroke correspondence accuracy and character recognition accuracy. The experimental results showed the superiority of CS and BWM over ICD, SM, and DE.
Wenjie Cai, Seiichi Uchida, Hiroaki Sakoe

A Novel Seal Imprint Verification Method Based on Analysis of Difference Images and Symbolic Representation

This paper proposes a novel seal imprint verification method combining difference-image-based statistical feature extraction with symbolic-representation-based classification. After several image processing procedures, including seal imprint extraction and registration against the model seal imprint, statistical features are extracted from difference images for the pattern classification stage of seal verification. A symbolic representation method, which requires only genuine samples in the learning phase, is used to classify genuine and fake seal imprints. We built a seal imprint image database for training the verification algorithms and testing the proposed system. Experiments showed that the symbolic representation method was superior to a traditional SVM classifier for this task, and that our statistical features are very powerful for the seal verification application.
Xiaoyang Wang, Youbin Chen
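The one-class flavor of the symbolic representation classifier described above, which learns only from genuine samples, can be sketched as an interval-based verifier. This is a minimal illustration; the function names, the margin parameter, and the acceptance threshold are our own assumptions, not the authors' implementation:

```python
import numpy as np

def fit_intervals(genuine, margin=0.1):
    """Learn per-feature [min, max] intervals from genuine samples only."""
    lo = genuine.min(axis=0)
    hi = genuine.max(axis=0)
    pad = margin * (hi - lo)          # widen intervals slightly for tolerance
    return lo - pad, hi + pad

def verify(sample, lo, hi, threshold=0.9):
    """Accept if a large enough fraction of features falls inside the intervals."""
    inside = np.mean((sample >= lo) & (sample <= hi))
    return inside >= threshold

# toy usage: 20 genuine feature vectors, 5 features each
rng = np.random.default_rng(0)
genuine = rng.normal(0.0, 1.0, size=(20, 5))
lo, hi = fit_intervals(genuine)
print(verify(genuine[0], lo, hi))        # a genuine sample is accepted
print(verify(np.full(5, 10.0), lo, hi))  # a far-off forgery is rejected
```

Because only genuine imprints define the intervals, no forged seals are needed at training time, which matches the one-class setting the abstract highlights.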

Session 2: Biometrics I

Extraction of 3D Shape of a Tooth from Dental CT Images with Region Growing Method

Dental information is useful for personal identification. In this paper, a method for automatically extracting the three-dimensional shape of a tooth from dental CT images is proposed. In previous methods, one of the main issues is mis-extraction of adjacent regions, caused by the similarity of features between a tooth and its adjacent teeth or the surrounding alveolar bone. It is important to extract an accurate shape of the target tooth as a part independent of the adjacent regions. In the proposed method, after denoising, the target tooth is segmented into parts, such as the shaft of the tooth or the dental enamel, by mean shift clustering. Then, the segments belonging to the tooth are extracted as a single region by the region growing method. Finally, the contour of the tooth is specified by applying the active contour method, and the shape of the tooth is extracted.
Ryuichi Yanagisawa, Shinichiro Omachi
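The region growing step named in the abstract can be illustrated with a minimal intensity-based sketch. The seed coordinates, tolerance, and raw-pixel formulation here are illustrative assumptions; the actual method grows regions over mean-shift segments of CT slices rather than raw pixels:

```python
from collections import deque
import numpy as np

def region_grow(img, seed, tol=10):
    """Grow a 4-connected region from `seed`, accepting pixels whose
    intensity lies within `tol` of the seed intensity."""
    h, w = img.shape
    seen = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    seen[seed] = True
    base = float(img[seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not seen[ny, nx] \
               and abs(float(img[ny, nx]) - base) <= tol:
                seen[ny, nx] = True
                queue.append((ny, nx))
    return seen

# toy image: a bright 3x3 "tooth" on a dark background
img = np.zeros((8, 8), dtype=np.uint8)
img[2:5, 2:5] = 200
mask = region_grow(img, seed=(3, 3), tol=10)
print(mask.sum())  # → 9: exactly the bright block, nothing adjacent
```

The tolerance plays the role the abstract assigns to feature similarity: too large a tolerance would leak into the adjacent teeth, which is precisely the mis-extraction problem the paper addresses.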

Cancellable Face Biometrics System by Combining Independent Component Analysis Coefficients

A number of biometric characteristics exist for person identity verification. Each biometric has its strengths. However, they also suffer from disadvantages, for example, in the area of privacy protection. Security and privacy issues are becoming more important in the biometrics community. To enhance security and privacy in biometrics, cancellable biometrics have been introduced. In this paper, we propose cancellable biometrics for face recognition using an appearance based approach. Initially, an ICA coefficient vector is extracted from an input face image. Some components of this vector are replaced randomly from a Gaussian distribution which reflects the original mean and variance of the components. Then, the vector, with its components replaced, has its elements scrambled randomly. A new transformed face coefficient vector is generated by choosing the minimum or maximum component of multiple (two or more) differing cases of such transformed coefficient vectors. In our experiments, we compared the performance between the cases when ICA coefficient vectors are used for verification and when the transformed coefficient vectors are used for verification. We also examine the properties of changeability and reproducibility for the proposed method.
MinYi Jeong, Andrew Beng Jin Teoh
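The three-step transform described above (replace some components, scramble, combine variants) can be sketched roughly as follows. The key-seeded generator, parameter names, and the choice of element-wise minimum are illustrative assumptions, not the authors' exact procedure; the key is what makes the template cancellable:

```python
import numpy as np

def transform(coeffs, key, n_replace=3, n_variants=2):
    """Cancellable transform of a coefficient vector (a sketch):
    1. replace some components with draws from a Gaussian matching the
       vector's own mean and variance,
    2. permute (scramble) the element order,
    3. combine several such variants by element-wise minimum.
    Issuing a new `key` yields a new, revocable template."""
    rng = np.random.default_rng(key)
    variants = []
    for _ in range(n_variants):
        v = coeffs.copy()
        idx = rng.choice(v.size, size=n_replace, replace=False)
        v[idx] = rng.normal(coeffs.mean(), coeffs.std(), size=n_replace)
        v = v[rng.permutation(v.size)]     # scramble element order
        variants.append(v)
    return np.minimum.reduce(variants)

coeffs = np.linspace(-1.0, 1.0, 8)         # stand-in ICA coefficient vector
t1 = transform(coeffs, key=42)
t2 = transform(coeffs, key=42)             # same key → same template
t3 = transform(coeffs, key=7)              # new key → revoked, fresh template
print(np.allclose(t1, t2), np.allclose(t1, t3))
```

Reproducibility (same key, same template) and changeability (new key, new template), the two properties the abstract examines, correspond here to the determinism of the seeded generator.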

Footwear Print Retrieval System for Real Crime Scene Marks

Footwear impression evidence has been gaining increasing importance in forensic investigation. The most challenging task for a forensic examiner is to work with highly degraded footwear marks and match them to the most similar footwear print available in the database. The retrieval process over a large database can be made significantly faster if the database footwear prints are clustered beforehand. In this paper we propose a footwear print retrieval system which uses the fundamental shapes in shoe outsoles, such as lines, circles, and ellipses, as features and retrieves the most similar print from a clustered database. Prints in the database are clustered based on outsole patterns. Each footwear print pattern is characterized by a combination of shape features and represented by an Attributed Relational Graph. Similarity between prints is computed using the Footwear Print Distance. The proposed system is invariant to distortions such as scale, rotation, and translation, and works well with partial prints, color prints, and crime scene marks.
Yi Tang, Sargur N. Srihari, Harish Kasiviswanathan, Jason J. Corso

Session 3: Documents

Improvement of Inkjet Printer Spur Gear Teeth Number Estimation by Fixing the Order in Maximum Entropy Spectral Analysis

In this paper, we estimate the number of inkjet printer spur gear teeth from shorter pitch data strings than in our previous study, by fixing the order in the maximum entropy method (MEM). The purpose of this study is to improve the efficiency of inkjet printer model identification based on the spur mark comparison method (SCM) in the field of forensic document analysis. Experiments were performed using two spur gears from different color inkjet printer models. Eight pitch data lengths, ranging from three to ten rotations of the spur gear, were provided for analysis. The experimental results showed that the proper number of teeth was estimated from shorter pitch data strings than with the strategies based on the minimum AIC estimate in our previous study. Estimation succeeded from data lengths close to the Nyquist condition. The proposed method is thus considered to improve the accuracy of printer model identification based on SCM.
Yoshinori Akao, Atsushi Yamamoto, Yoshiyasu Higashikawa
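The core idea of fixing the model order in an autoregressive spectral estimate, rather than selecting it by AIC, can be illustrated with a simple least-squares AR fit. This is a sketch under our own assumptions (synthetic pitch data, a stand-in least-squares fit in place of the paper's MEM formulation, arbitrary order); the peak of the all-pole spectrum recovers the dominant periodicity, which in the paper corresponds to the spur gear tooth pitch:

```python
import numpy as np

def ar_spectrum_peak(x, order, n_freq=2048):
    """Fit a fixed-order AR model by least squares and return the frequency
    (cycles/sample) at which the all-pole spectrum 1/|A(e^jw)|^2 peaks."""
    x = x - x.mean()
    # lagged design matrix: x[t] ≈ sum_k a_k * x[t-k]
    X = np.column_stack([x[order - k - 1:-k - 1] for k in range(order)])
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    freqs = np.linspace(0.0, 0.5, n_freq)
    z = np.exp(-2j * np.pi * np.outer(freqs, np.arange(1, order + 1)))
    denom = np.abs(1.0 - z @ a) ** 2
    return freqs[np.argmax(1.0 / denom)]

# toy pitch sequence: period-8 oscillation plus mild noise
rng = np.random.default_rng(1)
t = np.arange(200)
x = np.sin(2 * np.pi * t / 8) + 0.05 * rng.normal(size=200)
f = ar_spectrum_peak(x, order=4)
print(round(1.0 / f))  # estimated period in samples, → 8
```

Because the order is fixed in advance, the estimate stays stable even for short records, which is the advantage the abstract claims over the minimum-AIC order selection of the earlier study.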

Detecting Indentations on Documents Pressed by Pen Tip Force Using a Near Infrared Light Emitting Diode (NIR LED) and Shape from Shading

We propose a new method for detecting the indentations pressed into paper by pen tip force, using oblique near-infrared (NIR) illumination emitted by light-emitting diodes (LEDs). In conventional practice, indentations are observed by document examiners' eyes through a microscope. However, it is difficult to estimate the depths of the indentations this way, because human eyes can only observe the shades and brightness produced by indentations rather than measure their depths. A confocal laser microscope can measure the depths directly, but this approach takes a long time and the instruments are expensive. Using a NIR LED together with an optical model called shape from shading resolves the issues of time and cost. The proposed method is useful for forensic document examiners to approximately evaluate the depths of handwriting indentations, as it leads to convenient discrimination between forged and genuine handwriting.
Takeshi Furukawa

Session 4: Multimedia

Similar Partial Copy Detection of Line Drawings Using a Cascade Classifier and Feature Matching

Copyright protection of image publications is an important task of forensics. In this paper, we focus on line drawings, which are represented by lines in monochrome. Since partial copies and similar copies are common in plagiarism of line drawings, we propose combining object detection and image retrieval techniques to detect similar partial copies in suspicious images: first, regions of interest (ROIs) are detected by a cascade classifier; then, the corresponding source parts are located in the copyrighted images using a feature matching method. The experimental results prove the effectiveness of the proposed method for detecting similar partial copies against complex backgrounds.
Weihan Sun, Koichi Kise

Detection of Malicious Applications on Android OS

The paper presents a methodology for mobile forensics analysis, to detect “malicious” (or “malware”) applications, i.e., those that deceive users hiding some of their functionalities. This methodology is specifically targeted for the Android mobile operating system, and relies on its security model features, namely the set of permissions exposed by each application. The methodology has been trained on more than 13,000 applications hosted on the Android Market, collected with AppAware. A case study is presented as a preliminary validation of the methodology.
Francesco Di Cerbo, Andrea Girardello, Florian Michahelles, Svetlana Voronkova

JPEG Quantization Tables Forensics: A Statistical Approach

Many digital image forensics techniques use various fingerprints that identify the image source, and they depend on data from digital images of unknown provenance. Since software modifications often leave no appropriate traces in image metadata, critical miscalculations of fingerprints arise. This is the problem addressed in this paper. Modeling information noise, we introduce a statistical approach for noise removal in databases consisting of "unguaranteed" images. The fingerprints employed in this paper are based on JPEG quantization tables.
Babak Mahdian, Stanislav Saic, Radim Nedbal

Session 5: Biometrics II

Discovering Correspondences between Fingerprints Based on the Temporal Dynamics of Eye Movements from Experts

Latent print examinations involve a process by which a latent print, often recovered from a crime scene, is compared against a known standard or sets of standard prints. Despite advances in automatic fingerprint recognition, latent prints are still examined by human experts, primarily due to the poor image quality of latent prints. The aim of the present study is to better understand the perceptual and cognitive processes of fingerprint practice as implicit expertise. Our approach is to collect fine-grained gaze data from fingerprint experts while they conduct a matching task between two prints. We then rely on machine learning techniques to discover meaningful patterns in their eye movement data. As the first steps in this project, we compare gaze patterns from experts with those obtained from novices. Our results show that experts and novices generate similar overall gaze patterns. However, a deeper data analysis using machine translation reveals that experts are able to identify more corresponding areas between two prints within a short period of time.
Chen Yu, Thomas Busey, John Vanderkolk

Latent Fingerprint Rarity Analysis in Madrid Bombing Case

Rarity of latent fingerprints is important to law enforcement agencies in forensic analysis. While tremendous efforts have been made in 10-print individuality studies, latent fingerprint rarity continues to be a difficult problem and has never been solved because of the small finger area and poor impression quality. The proposed method is able to predict the core points of latent prints using Gaussian processes and align the latent prints by overlapping the core points. A novel generative model is also proposed to take into account the dependency on nearby minutiae and the confidence of minutiae in the probability of random correspondence calculation. The new methods are illustrated by experiments on the well-known Madrid bombing case. The results show that the probability that at least one fingerprint in the FBI IAFIS databases (over 470 million fingerprints) matches the bomb site latent is 0.93, which is large enough to lead to misidentification.
Chang Su, Sargur N. Srihari

Fingerprint Directional Image Enhancement

This work presents an efficient algorithm for enhancing the directional image. Orientation, as a global feature of a fingerprint, is very important to the image preprocessing methods used in automatic fingerprint identification systems (AFIS). The proposed algorithm consists of two weighted averaging stages over a neighborhood: in the first stage, a 2D Gaussian kernel is used as the weight, and in the second, a differentiation mask is used. This strategy allows effective smoothing of the orientation in noisy areas without loss of information in areas of highly curved ridges. Experimental results show that the proposed method improves fingerprint identification performance compared with results obtained by the conventional, gradient-based method.
Lukasz Wieclaw
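Weighted Gaussian averaging of an orientation field, the first of the two stages described above, is commonly performed on the doubled-angle representation so that orientations near 0 and π average correctly. The following sketch shows only that first stage, with illustrative kernel parameters; the paper's second, differentiation-mask stage is omitted:

```python
import numpy as np

def smooth_orientation(theta, ksize=5, sigma=1.0):
    """Smooth an orientation field (radians, mod pi) by averaging the
    doubled-angle vector field (cos 2θ, sin 2θ) with a Gaussian kernel.
    Averaging doubled angles avoids the π-wraparound problem."""
    ax = np.arange(ksize) - ksize // 2
    g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    g /= g.sum()                            # normalized 2D Gaussian kernel
    pad = ksize // 2
    c = np.pad(np.cos(2 * theta), pad, mode='edge')
    s = np.pad(np.sin(2 * theta), pad, mode='edge')
    h, w = theta.shape
    cs = np.zeros_like(theta)
    ss = np.zeros_like(theta)
    for i in range(ksize):                  # accumulate weighted shifts
        for j in range(ksize):
            cs += g[i, j] * c[i:i + h, j:j + w]
            ss += g[i, j] * s[i:i + h, j:j + w]
    return (0.5 * np.arctan2(ss, cs)) % np.pi

# toy field: constant 45° orientations with one noisy outlier
theta = np.full((7, 7), np.pi / 4)
theta[3, 3] = 0.0                           # noise spike
sm = smooth_orientation(theta)
print(abs(sm[3, 3] - np.pi / 4) < abs(theta[3, 3] - np.pi / 4))  # → True
```

The outlier is pulled toward its neighbors while uniform regions are left essentially unchanged, which is the noise-robustness the abstract claims for the smoothing stage.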

Session 6: Evaluation

What Kind of Strategies Does a Document Examiner Take in Handwriting Identification?

A document examiner examines handwriting mainly by a qualitative method based on his or her knowledge and experience. The qualitative examination, compared with quantitative examination, possesses less objectivity and is believed to be less reliable. However, an examiner's opinion is, in fact, highly reliable. The knowledge and strategies that a document examiner uses are discussed in this paper. Four kinds of classification experiments, in which 36 diagram-like handwriting samples written by 6 writers were used as stimuli, were performed by visual inspection and cluster analysis. The results suggest that the examiner utilized his knowledge of writing motion even when the classification target was a diagram drawn by reconstruction of a handwriting sample.
Yoko Seki

3LSPG: Forensic Tool Evaluation by Three Layer Stochastic Process-Based Generation of Data

Since organizations cannot in practice prevent all criminal activities of employees by security technology, the application of IT forensic methods for finding traces in data is extremely important. However, new attack variants in occupational crime require new forensic tools, and specific environments may require adaptations of methods and tools. Obviously, the development of tools or their adaptation requires testing with data containing corresponding traces of attacks. Since real-world data are often not available, synthetic data are necessary for testing. With 3LSPG we propose a systematic method to generate synthetic test data containing traces of selected attacks. These data can then be used to evaluate the performance of different forensic tools.
York Yannikos, Frederik Franke, Christian Winter, Markus Schneider

Backmatter
