About This Book

The two-volume set LNCS 6854/6855 constitutes the refereed proceedings of the International Conference on Computer Analysis of Images and Patterns, CAIP 2011, which took place in Seville, Spain, August 29-31, 2011. The 138 papers presented together with 2 invited talks were carefully reviewed and selected from 286 submissions. The papers are organized in topical sections on: motion analysis, image and shape models, segmentation and grouping, shape recovery, kernel methods, medical imaging, structural pattern recognition, biometrics, image and video processing, calibration, and tracking and stereo vision.

Table of Contents

Frontmatter

Invited Lecture

Metric Structures on Datasets: Stability and Classification of Algorithms

Several methods in data and shape analysis can be regarded as transformations between metric spaces. Examples are hierarchical clustering methods, the higher order constructions of computational persistent topology, and several computational techniques that operate within the context of data/shape matching under invariances.

Metric geometry, and in particular different variants of the Gromov-Hausdorff distance provide a point of view which is applicable in different scenarios. The underlying idea is to regard datasets as metric spaces, or metric measure spaces (a.k.a. mm-spaces, which are metric spaces enriched with probability measures), and then, crucially, at the same time regard the collection of all datasets as a metric space in itself. Variations of this point of view give rise to different taxonomies that include several methods for extracting information from datasets.
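As a concrete illustration of this metric viewpoint, the classical Hausdorff distance compares two finite point sets already embedded in a common metric space (the Gromov-Hausdorff distance generalizes it by optimizing over such embeddings). A minimal Python sketch:

```python
def hausdorff(X, Y, d):
    """Hausdorff distance between finite sets X, Y lying in a common
    metric space with metric d: the larger of the two directed distances."""
    def directed(A, B):
        # for each point of A, distance to its nearest point in B
        return max(min(d(a, b) for b in B) for a in A)
    return max(directed(X, Y), directed(Y, X))

# Example: points on the real line with the absolute-value metric.
X = [0.0, 1.0, 2.0]
Y = [0.5, 1.5]
print(hausdorff(X, Y, lambda a, b: abs(a - b)))  # 0.5
```

Regarding each dataset as a point in a "space of datasets" then amounts to applying such a distance at the level of whole metric spaces.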

Imposing metric structures on the collection of all datasets could be regarded as a "soft" construction. The classification of algorithms, or the axiomatic characterization of them, could be achieved by imposing the more "rigid" category structures on the collection of all finite metric spaces and demanding functoriality of the algorithms. In this case, one would hope to single out all the algorithms that satisfy certain natural conditions, which would clarify the landscape of available methods. We describe how using this formalism leads to an axiomatic description of many clustering algorithms, both flat and hierarchical.

Facundo Mémoli

Biometrics

Semi-fragile Watermarking in Biometric Systems: Template Self-Embedding

Embedding biometric templates as image-dependent watermark information in a semi-fragile watermarking scheme is proposed. Experiments in an iris recognition environment show that the embedded templates can be used to verify sample data integrity and may additionally serve to increase robustness in the biometric recognition process.

Reinhard Huber, Herbert Stögner, Andreas Uhl

The Weighted Landmark-Based Algorithm for Skull Identification

Computer-aided craniofacial reconstruction plays an important role in criminal investigation. By comparing the 3D facial model produced by this technology with a picture database of missing persons, the identity of an unknown skull can be determined. In this paper, we propose a method to quantitatively analyze the quality of facial landmarks for skull identification. Based on this quality analysis, a new landmark-based algorithm is proposed that fully takes into account the different weights of the landmarks in recognition. Moreover, an optimal subset of landmarks can be selected to boost the recognition rate according to the recognition quality of the landmarks. Experiments validate the proposed method.

Jingbo Huang, Mingquan Zhou, Fuqing Duan, Qingqong Deng, Zhongke Wu, Yun Tian

Sequential Fusion Using Correlated Decisions for Controlled Verification Errors

Fusion techniques have received considerable attention for achieving lower error rates with biometrics. A fused classifier architecture based on sequential integration of multi-instance and multi-sample fusion schemes allows a controlled trade-off between false alarms and false rejects. Expressions for each type of error for the fused system have previously been derived for the case of statistically independent classifier decisions. It is shown in this paper that the performance of this architecture can be improved by modelling the correlation between classifier decisions. Correlation modelling also enables better tuning of the fusion model parameters 'N', the number of classifiers, and 'M', the number of attempts/samples, and facilitates the determination of error bounds on false rejects and false accepts for each specific user. Error trade-off performance of the architecture is evaluated using HMM-based speaker verification on utterances of individual digits. Results show that performance is improved in the case of favourably correlated decisions. The architecture investigated here is directly applicable to speaker verification from spoken digit strings such as credit card numbers in telephone or voice-over-internet-protocol based applications. It is also applicable to other biometric modalities such as fingerprints and handwriting samples.
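Under the statistical-independence assumption mentioned above, the trade-off governed by N and M can be sketched with a toy computation. The AND/OR composition below (all N classifiers must accept an attempt; the user gets up to M attempts) is an illustrative assumption, not the paper's derived expressions:

```python
def fused_errors(far, frr, N, M):
    """Toy independent-decision model: an attempt is accepted only if all
    N classifiers accept (AND fusion); the claimant is granted up to M
    attempts (OR over attempts). far/frr are per-classifier error rates."""
    far_attempt = far ** N                   # all N falsely accept
    frr_attempt = 1 - (1 - frr) ** N         # any rejection kills the attempt
    far_fused = 1 - (1 - far_attempt) ** M   # falsely accepted on some attempt
    frr_fused = frr_attempt ** M             # rejected on every attempt
    return far_fused, frr_fused

far, frr = fused_errors(0.05, 0.05, N=2, M=2)
print(far, frr)
```

Increasing N drives false accepts down at the cost of more rejections per attempt, while increasing M recovers false rejects; correlated decisions break these product forms, which is the effect the paper models.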

Vishnu Priya Nallagatla, Vinod Chandran

An Online Three-Stage Method for Facial Point Localization

Locating facial features under expression and illumination variations is a difficult problem. One popular solution for improving the performance of facial point localization is to use the spatial relations between facial feature positions. While existing algorithms mostly rely on prior knowledge of facial structure and on a training phase, this paper presents an online approach with no pre-defined constraints on feature distributions. Instead of training specific detectors for each facial feature, a generic method is first used to extract a set of interest points from test images. Using a robust feature descriptor, the Patterns of Oriented Edge Magnitudes (POEM) histogram, a smaller set of these points is picked as candidates. We then apply a game-theoretic technique to select facial points from the candidates while preserving the global geometric properties of the face. The experimental results demonstrate that our method achieves satisfactory performance for face images under expression and lighting variations.

Weiyuan Ni, Ngoc-Son Vu, Alice Caplier

Extraction of Teeth Shapes from Orthopantomograms for Forensic Human Identification

Dental biometrics are commonly used in the process of forensic human identification. In order to automate the identification, a method of extracting and comparing dental features from digital radiograms was developed by the creators of the Automated Dental Identification System (ADIS). In this paper, a novel method of extracting teeth shapes from extraoral radiograms, known as orthopantomograms, is proposed. The method segments the image using the watershed algorithm and classifies every resulting region as belonging either to a tooth or to the background. Example results obtained by means of the proposed method are also presented.

Dariusz Frejlichowski, Robert Wanat

Effects of JPEG XR Compression Settings on Iris Recognition Systems

JPEG XR is considered as a lossy sample data compression scheme in the context of iris recognition techniques. It is shown that, apart from low-bitrate scenarios, JPEG XR is competitive with the current standard JPEG2000 while exhibiting significantly lower computational demands.

Kurt Horvath, Herbert Stögner, Andreas Uhl

A Recursive Sparse Blind Source Separation Method for Nonnegative and Correlated Data in NMR Spectroscopy

Motivated by the nuclear magnetic resonance (NMR) spectroscopy of biofluids (urine and blood serum), we present a recursive blind source separation (rBSS) method for nonnegative and correlated data. A major approach to non-negative BSS relies on a strict non-overlap condition (also known as the pixel purity assumption in hyper-spectral imaging) of source signals, which is not always guaranteed in the NMR spectra of chemical compounds. A new dominant interval condition is proposed: each source signal dominates some of the other source signals in a hierarchical manner. The rBSS method then reduces the BSS problem into a series of sub-BSS problems by a combination of data clustering, linear programming, and successive elimination of variables. In each sub-BSS problem, an ℓ1 minimization problem is formulated for recovering the source signals in a sparse transformed domain. The method is substantiated by NMR data.

Yuanchang Sun, Jack Xin

Human and Face Detection and Recognition

A Novel Face Recognition Approach under Illumination Variations Based on Local Binary Pattern

Local Binary Pattern (LBP) is one of the most important facial texture features in face recognition. In this paper, a novel approach based on LBP is proposed for face recognition under different illumination conditions. The proposed approach applies a Difference of Gaussians (DoG) filter in the logarithm domain of face images. LBPs are extracted from the filtered images and used for recognition. A novel measurement is also proposed to calculate distances between different LBPs. Experimental results on the Yale B and Extended Yale B databases demonstrate the superior performance of the proposed method and measurement compared to other existing methods and measurements.
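The basic LBP operator itself is simple to state: each neighbour of a pixel is thresholded against the centre value and the resulting bits form an 8-bit code. A minimal sketch (the neighbour ordering and the >= convention vary between implementations, and the DoG prefiltering in the log domain is omitted here):

```python
def lbp_code(patch):
    """8-bit Local Binary Pattern code of a 3x3 grayscale patch:
    each neighbour is thresholded against the centre pixel."""
    c = patch[1][1]
    # neighbours enumerated clockwise starting at the top-left corner
    nbrs = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
            patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    return sum((1 << i) for i, p in enumerate(nbrs) if p >= c)

patch = [[6, 5, 2],
         [7, 6, 1],
         [9, 8, 7]]
print(lbp_code(patch))  # 241
```

A histogram of such codes over image cells yields the texture descriptor compared by the distance measurement.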

Zhichao Lian, Meng Joo Er, Juekun Li

A New Pedestrian Detection Descriptor Based on the Use of Spatial Recurrences

Recent work on pedestrian detection has relied on the concept of local co-occurrences of features to propose higher-order, richer descriptors. While this idea has proven to be beneficial for the detection task, it fails to properly account for a more general and/or holistic representation. In this paper, a novel, flexible, and modular descriptor is proposed which is based on the alternative concept of visual recurrence and, in particular, on a mathematically sound tool, the recurrence plot. The experimental work conducted provides evidence of the discriminatory power of the descriptor, with results comparable to recent similar approaches. Furthermore, since its degree of locality, its visual compactness, and the pair-wise feature similarity can be easily changed, it holds promise for characterizing other descriptors, as well as for a range of accuracy-computation trade-offs for pedestrian detection and, possibly, other object detection problems.

Carlos Serra-Toro, V. Javier Traver

Facial Expression Recognition Using Nonrigid Motion Parameters and Shape-from-Shading

This paper presents a 3D motion based approach to facial expression recognition from video sequences. A non-Lambertian shape-from-shading (SFS) framework is used to recover 3D facial surfaces. The SFS technique avoids the heavy computational requirements normally incurred by using a 3D face model. Then, a parametric motion model and optical flow are employed to obtain the nonrigid motion parameters of surface patches. At first, we obtain uniform motion parameters under the assumption that motion due to changes in expression is temporally consistent. We then relax the uniform motion constraint and obtain temporal motion parameters. The two types of motion parameters are used for training and classification with AdaBoost and an HMM-based classifier. Experimental results show that temporal motion parameters perform much better than uniform motion parameters and can be used to efficiently recognize facial expressions.

Fang Liu, Edwin R. Hancock, William A. P. Smith

TIR/VIS Correlation for Liveness Detection in Face Recognition

Face liveness detection in the visible light (VIS) spectrum faces great challenges. Beyond the visible spectrum, thermal IR (TIR) carries an intrinsic liveness signal. In this paper, we present a novel liveness detection approach based on the thermal IR spectrum. A live face is modeled in the cross-modality of thermal IR and visible light. In our model, canonical correlation analysis between visible and thermal IR faces is exploited. The correlation of different face parts is also investigated to reveal more correlative features and help improve live face detection. An extensive set of liveness detection experiments is presented to show the effectiveness of our approach, and other correlation methods are also tested for comparison.

Lin Sun, WaiBin Huang, MingHui Wu

Person Localization and Soft Authentication Using an Infrared Ceiling Sensor Network

Person localization and identification are indispensable for providing various personalized services in an intelligent environment. We propose a novel method for person localization and have developed a system for identifying up to ten persons in an office room to realize soft authentication. Our system consists of forty-three infrared ceiling sensors with low cost and easy installation. In experiments, the average distance error of person localization was 31.6 cm, which is acceptable for sensors spaced 1.5 m apart. We also confirmed that walking path and speed give sufficient information for authenticating the user. Through the experiments, we obtained correct recognition rates of 98%, 95%, and 86% when distinguishing individuals among any pair, any three people, and all ten people, respectively.
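One simple way a binary ceiling-sensor network can localize a person is to average the known positions of the currently triggered sensors. This centroid estimator is an illustrative assumption, not necessarily the paper's method:

```python
def localize(triggered, positions):
    """Estimate a person's floor position as the centroid of the
    positions of the currently triggered ceiling sensors.
    triggered: list of sensor ids; positions: id -> (x, y) in metres."""
    xs = [positions[s][0] for s in triggered]
    ys = [positions[s][1] for s in triggered]
    return sum(xs) / len(xs), sum(ys) / len(ys)

# Three sensors 1.5 m apart, two of them triggered.
positions = {0: (0.0, 0.0), 1: (1.5, 0.0), 2: (0.0, 1.5)}
print(localize([0, 1], positions))  # (0.75, 0.0)
```

With sensors 1.5 m apart, such an estimator naturally produces errors of a few tens of centimetres, consistent with the reported 31.6 cm average.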

Shuai Tao, Mineichi Kudo, Hidetoshi Nonaka, Jun Toyama

Document Analysis

Categorization of Camera Captured Documents Based on Logo Identification

In this paper, we present a methodology to categorize camera-captured documents into pre-defined logo classes. Unlike scanned documents, camera-captured documents suffer from intensity variations, partial occlusions, cluttering, and large scale variations. Furthermore, the existence of non-uniform folds and the fact that the document is not flat make this task more challenging. We present the selection of robust local features and the corresponding parameters through comparisons among SIFT, SURF, MSER, Hessian-affine, and Harris-affine. We evaluate the system not only with respect to the amount of space required to store the local feature information but also with respect to categorization accuracy. Moreover, the system handles the identification of multiple logos on a document at the same time. Experimental results on a challenging set of real images demonstrate the efficiency of our approach.

Venkata Gopal Edupuganti, Frank Y. Shih, Suryaprakash Kompalli

Multiple Line Skew Estimation of Handwritten Images of Documents Based on a Visual Perception Approach

This paper introduces Viskew: a new algorithm to estimate the skew of text lines in digitized documents. The algorithm is based on a visual perception approach in which transition maps and morphological operators simulate human visual perception of documents. The algorithm was tested on a set of 19,500 synthetic text line images and 400 images of documents with multiple skew angles. The skew angles for the synthetic dataset are known, and our algorithm achieved the lowest mean square error on average when compared with two other algorithms.

Carlos A. B. Mello, Ángel Sánchez, George D. C. Cavalcanti

Applications

Space Variant Representations for Mobile Platform Vision Applications

The log-polar space-variant representation, motivated by biological vision, has been widely studied in the literature. Its data reduction and invariance properties make it useful in many vision applications. However, due to its nature, it fails to preserve features in the periphery. In the current work, as an attempt to overcome this problem, we propose a novel space-variant representation. It is evaluated and shown to be better than the log-polar representation at preserving peripheral information, which is crucial for on-board mobile vision applications. The evaluation is performed by comparing the log-polar and proposed representations when used for estimating dense optical flow.
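The log-polar mapping itself is a simple change of coordinates; the logarithmic radial axis is precisely what compresses the periphery and causes the feature loss discussed above. A minimal sketch:

```python
import math

def to_log_polar(x, y, cx, cy):
    """Map a Cartesian pixel (x, y) to log-polar coordinates
    (log r, theta) about a fixation point (cx, cy). Sampling density
    falls off with eccentricity, so peripheral detail is compressed."""
    dx, dy = x - cx, y - cy
    r = math.hypot(dx, dy)
    theta = math.atan2(dy, dx)
    rho = math.log(r) if r > 0 else float("-inf")
    return rho, theta

rho, theta = to_log_polar(10, 0, 0, 0)
print(rho, theta)
```

Uniform sampling of (rho, theta) then over-represents the fovea and under-represents the periphery, which motivates the paper's alternative representation.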

Naveen Onkarappa, Angel D. Sappa

JBoost Optimization of Color Detectors for Autonomous Underwater Vehicle Navigation

In the world of autonomous underwater vehicles (AUVs), the prominent form of sensing is sonar, due to cloudy water conditions and the dispersion of light. Although underwater conditions are highly suitable for sonar, this does not mean that optical sensors should be completely ignored. There are situations where visibility is high, such as in calm waters, and where light dispersion is not significant, such as in shallow water or near the surface. In addition, even when visibility is low, visibility can increase once a certain proximity to an object is reached. The focus of this paper is this gap in capability for AUVs, with an emphasis on computer-aided detection through classifier optimization via machine learning. The paper describes the development of a color-based classification algorithm and its application as a cost-sensitive alternative for navigation on the small Stingray AUV.

Christopher Barngrover, Serge Belongie, Ryan Kastner

Combining Structure and Appearance for Anomaly Detection in Wire Ropes

We present a new approach for anomaly detection in the context of visual surface inspection. In contrast to existing, purely appearance-based approaches, we explicitly integrate information about the object geometry. The method is tested using the example of wire rope inspection as this is a very challenging problem.

A perfectly regular 3D model of the rope is aligned with a sequence of 2D rope images to establish a direct connection between object geometry and observed rope appearance. The surface appearance can be physically explained by the rendering equation. Without requiring knowledge of the illumination setting or the reflectance properties of the material, we are able to sample the rendering equation. This results in a probabilistic appearance model. The density serves as a description of normal surface variations and allows a robust localization of rope surface defects.

We evaluate our approach on real-world data from real ropeways. The accuracy of our approach is comparable to that of a human expert and outperforms all existing approaches: it achieves an accuracy of 95% and a low false-alarm rate of 1.5%, while missing no defects.

Esther-Sabrina Wacker, Joachim Denzler

3D Cascade of Classifiers for Open and Closed Eye Detection in Driver Distraction Monitoring

Eye status detection and localization is a fundamental step in driver awareness detection. The efficiency of any learning-based object detection method depends highly on the training dataset as well as on the learning parameters. This research develops optimum values of Haar-training parameters to create a nested cascade of classifiers for real-time eye status detection. The detectors can detect an eye status of open, closed, or diverted, not only for frontal faces but also for rotated or tilted head poses. We discuss the unique features of our robust training database that significantly influenced the detection performance. The system has been implemented and tested in real-world, real-time processing, with satisfactory results in determining the driver's level of vigilance.

Mahdi Rezaei, Reinhard Klette

Non-destructive Detection of Hollow Heart in Potatoes Using Hyperspectral Imaging

We present a new method to detect the presence of the hollow heart, an internal disorder of potato tubers, using hyperspectral imaging technology in the infrared region. A set of 468 hyperspectral image cubes was acquired from Agria variety potatoes, which were later cut open to check for the presence of a hollow heart. We developed several experiments to recognize hollow heart potatoes using different artificial intelligence and image processing techniques. The results show that Support Vector Machines (SVM) achieve an accuracy of 89.1% correct classification. This is an automatic and non-destructive approach, and it could be integrated into other machine vision developments.

Angel Dacal-Nieto, Arno Formella, Pilar Carrión, Esteban Vazquez-Fernandez, Manuel Fernández-Delgado

Dice Recognition in Uncontrolled Illumination Conditions by Local Invariant Features

A system is proposed for recognizing the number of dots on dice in general table game settings. Unlike previous dice recognition systems, which use a single top-view camera and work only under controlled illumination, the proposed one uses multiple cameras and works under uncontrolled illumination. Under controlled illumination, edges are the prominent features considered by most approaches, but strong specular reflection, often observed under uncontrolled illumination, defeats approaches based solely on edges. The proposed system exploits local invariant features that are robust to illumination variation and well suited to building homographies across multiple views. The homographies are used to enhance coplanar features and weaken non-coplanar features, providing a way to segment the top faces of the dice and recover the features ruined by possible specular reflection. To identify the dots on the segmented top faces, an MSER detector is applied for its consistency in rendering local interest regions across large illumination variations. Experiments show that the proposed system achieves a high recognition rate in various uncontrolled illumination conditions.

Gee-Sern Hsu, Hsiao-Chia Peng, Chyi-Yeu Lin, Pendry Alexandra

Specularity Detection Using Time-of-Flight Cameras

Time-of-flight (TOF) cameras are primarily used for range estimation by illuminating the scene through a TOF infrared source. However, additional background sources of illumination of the scene are also captured in the measurement process. This paper exploits conventional Lambertian and Phong’s illumination models, developed for 2D CCD image cameras, to propose a radiometric model for a generic TOF camera. The model is used as the basis for a novel specularity detection algorithm. The proposed model is experimentally verified using real data.

Faisal Mufti, Robert Mahony

Symmetry Computation in Repetitive Images Using Minimum-Variance Partitions

Symmetry computation has recently been recognized as a topic of interest in many different fields of computer vision and image analysis, yet it remains an open problem. In this work we propose a unified method to compute image symmetries based on finding the minimum-variance partitions of the image that best describe its repetitive nature. We then use a statistical measurement of these partitions as a symmetry score. The principal idea is that the same measurement can be used to score different symmetries (rotation, reflection, and glide reflection). Finally, a feature vector composed of these symmetry values is used to classify the whole image according to a symmetry group. An increase in the success rate, compared to other reference methods, indicates the improved discriminative capability of the proposed symmetry features. Our experimental results improve the state of the art in wallpaper classification methods.

Manuel Agustí-Melchor, Angel Rodas-Jordá, José M. Valiente-González

3D Vision

Tensor Method for Constructing 3D Moment Invariants

A generalization from 2D to 3D of the tensor method for derivation of both affine invariants and rotation, translation and scaling (TRS) invariants is described. The method for generation of the 3D TRS invariants of higher orders is automated and experimentally tested.

Tomáš Suk, Jan Flusser

Multi-camera 3D Scanning with a Non-rigid and Space-Time Depth Super-Resolution Capability

3D imaging sensors for the acquisition of three-dimensional faces have attracted, in recent years, considerable interest for a number of applications. Structured-light camera/projector systems are often used to overcome the relatively uniform appearance of skin. In this paper, we propose a 3D acquisition solution with a 3D space-time non-rigid super-resolution capability, using three calibrated cameras coupled with an uncalibrated projector device, which is particularly suited to 3D face scanning, i.e. rapid, easily movable, and robust to ambient lighting conditions. The proposed solution is a hybrid stereovision and phase-shifting approach, using two shifted patterns and a texture image, which not only takes advantage of the assets of stereovision and structured light but also overcomes their weaknesses. The super-resolution process deals with 3D artifacts and completes the 3D scanned view in the presence of small non-rigid deformations such as facial expressions. The experimental results demonstrate the effectiveness of the proposed approach.

Karima Ouji, Mohsen Ardabilian, Liming Chen, Faouzi Ghorbel

A New Algorithm for 3D Shape Recognition by Means of the 2D Point Distance Histogram

A new algorithm for the recognition of three-dimensional objects is proposed in this paper. The algorithm is based on rendering several 2D projections of a 3D model from various camera positions. Similarly to the proposition given in [1], the cameras are placed at the vertices of a dodecahedron enclosing the processed model. The obtained projections are stored in bitmaps, and the Point Distance Histogram is applied to describe the planar shapes extracted from them. The obtained histograms represent the 3D model. The experiments performed confirm the high efficiency of the proposed algorithm: it outperformed five other algorithms for the representation of 3D shapes.
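A simplified version of the Point Distance Histogram idea can be sketched as follows: distances from the shape centroid to the contour points, normalized by the maximum distance and binned. The published descriptor involves additional polar-coordinate processing; this sketch is an illustration only:

```python
import math

def point_distance_histogram(points, bins=8):
    """Normalised histogram of distances from the shape centroid to its
    contour points; dividing by the maximum distance makes the
    descriptor invariant to scale."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    dists = [math.hypot(x - cx, y - cy) for x, y in points]
    dmax = max(dists)
    hist = [0] * bins
    for d in dists:
        idx = min(int(bins * d / dmax), bins - 1)  # clamp d == dmax
        hist[idx] += 1
    return [h / len(dists) for h in hist]

# Corners of a unit square: all contour points are equidistant
# from the centroid, so the mass lands in the last bin.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(point_distance_histogram(square, bins=4))  # [0.0, 0.0, 0.0, 1.0]
```

Comparing such histograms across the rendered projections then reduces 3D matching to a set of 2D descriptor comparisons.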

Dariusz Frejlichowski

Wide Range Face Pose Estimation by Modelling the 3D Arrangement of Robustly Detectable Sub-parts

A highly accurate solution for the estimation of face poses over a wide range of 180 degrees is presented. The result has been achieved by modeling the 3D arrangement of 15 facial features and its mapping to the image plane for different poses. A voting scheme is used to compute the mapping for a given image in a bottom-up procedure. The voting is based on a robust classification of the appearance of the sub-parts. Equal importance must, however, be ascribed to the extension of the annotation scheme of the FERET database, including the correction of existing misannotations.

Thiemo Wiedemeyer, Martin Stommel, Otthein Herzog

Image Restoration

Single Image Restoration of Outdoor Scenes

We present a novel strategy to restore outdoor images degraded by atmospheric phenomena such as haze or fog. Since both the depth map of the scene and the airlight constant are unknown, this problem is mathematically ill-posed. First, we present a straightforward approach that accurately estimates the airlight constant by searching for the regions with the highest intensity. Then, based on a graphical Markov random field (MRF) model, we introduce a robust optimization framework that transports local minima over large neighborhoods while smoothing the transmission map and preserving the important depth discontinuities of the estimated depth. The method has been tested extensively on real outdoor images degraded by haze or fog. Comparative results with existing state-of-the-art techniques demonstrate the advantage of our approach.
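The airlight-estimation step can be illustrated with a toy grayscale version: take the mean of the brightest fraction of pixels. The grayscale simplification and the `top_fraction` parameter are illustrative assumptions, not the paper's exact procedure:

```python
def estimate_airlight(image, top_fraction=0.001):
    """Estimate the airlight constant of a hazy grayscale image as the
    mean of its brightest pixels (the haziest regions are the brightest
    because airlight dominates there)."""
    flat = sorted((p for row in image for p in row), reverse=True)
    k = max(1, int(len(flat) * top_fraction))
    return sum(flat[:k]) / k

# Toy 2x2 image: the two bright pixels stand in for the hazy sky region.
img = [[10, 20], [250, 240]]
print(estimate_airlight(img, top_fraction=0.5))  # 245.0
```

Once the airlight is fixed, the remaining unknown is the transmission map, which is where the MRF optimization takes over.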

Codruta Orniana Ancuti, Cosmin Ancuti, Philippe Bekaert

Exploiting Image Collections for Recovering Photometric Properties

We address the problem of jointly estimating the scene illumination, the radiometric camera calibration and the reflectance properties of an object using a set of images from a community photo collection. The highly ill-posed nature of this problem is circumvented by using appropriate representations of illumination, an empirical model for the nonlinear function that relates image irradiance with intensity values and additional assumptions on the surface reflectance properties. Using a 3D model recovered from an unstructured set of images, we estimate the coefficients that represent the illumination for each image using a frequency framework. For each image, we also compute the corresponding camera response function. Additionally, we calculate a simple model for the reflectance properties of the 3D model. A robust non-linear optimization is proposed exploiting the high sparsity present in the problem.

Mauricio Diaz, Peter Sturm

Restoration

Human Visual System for Complexity Reduction of Image and Video Restoration

This paper focuses on the use of Human Visual System (HVS) rules to reduce the complexity of image and video restoration algorithms. Specifically, a fast HVS-based block classification is proposed for distinguishing image blocks where restoration is necessary from those where it is useless. Experimental results on standard test images and video sequences show the capability of the proposed method to reduce the computing time of de-noising algorithms while preserving the visual quality of the restored sequences.

Vittoria Bruni, Daniela De Canditiis, Domenico Vitulano

Optimal Image Restoration Using HVS-Based Rate-Distortion Curves

This paper proves that the Jensen-Shannon divergence (JSD) is a good information-theoretic measure of the visibility cost of a degraded region in a pictorial scene. Hence, it can be combined with Michelson contrast to build a visual rate-distortion curve, which makes it possible to optimize the parameters of restoration algorithms. Results on both synthetic and real data show the potential of the proposed approach.
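The Michelson contrast referred to above is the standard definition (Lmax - Lmin) / (Lmax + Lmin); a one-line sketch:

```python
def michelson_contrast(lmax, lmin):
    """Michelson contrast of a luminance pattern: ranges from 0
    (uniform field) to 1 (full-range modulation). Assumes lmax >= lmin >= 0
    and lmax + lmin > 0."""
    return (lmax - lmin) / (lmax + lmin)

print(michelson_contrast(200.0, 50.0))  # 0.6
```

In the paper this contrast term plays the role of the "rate" axis that the JSD-based visibility cost is traded against.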

Vittoria Bruni, Elisa Rossi, Domenico Vitulano

Natural Computation for Digital Imagery

A Parallel Implementation of the Thresholding Problem by Using Tissue-Like P Systems

In this paper we present a parallel algorithm to solve the thresholding problem by using Membrane Computing techniques. This bio-inspired algorithm has been implemented on a novel device architecture called CUDA™ (Compute Unified Device Architecture). We present some examples, compare the running times obtained, and outline some lines of future research.

Francisco Peña-Cantillana, Daniel Díaz-Pernil, Ainhoa Berciano, Miguel Angel Gutiérrez-Naranjo

P Systems in Stereo Matching

Designing parallel versions of sequential algorithms has attracted renewed attention, due to recent hardware advances, including various general-purpose multi-core and many-core processors, as well as special-purpose FPGA implementations. P systems consist of networks of autonomous cells, such that each cell transforms its input signals according to symbol-rewriting rules and feeds the output results into its immediate neighbours. Inherent intra- and inter-cell parallelism makes P systems a prospective theoretical testbed for designing parallel algorithms. This paper discusses the capability of P systems to implement the symmetric dynamic programming algorithm for stereo matching, with due account of binocular or monocular visibility of 3D surface points.

Georgy Gimel’farb, Radu Nicolescu, Sharvin Ragavan

Functional Brain Mapping by Methods of Evolutionary Natural Selection

We used genetic algorithms to detect active voxels in the human brain imaged using functional magnetic resonance imaging. The method, which we call EVOX, deploys multivoxel pattern analysis to determine the fitness of the most active voxels. The fitness function is a classifier that works in leave-one-run-out cross-validation; in each generation, the fitness value is calculated as the average performance over all cross-validation folds. Experimental results using functional magnetic resonance images, collected while human subjects were responding to visual attention stimuli, revealed situations in which EVOX can be useful compared to univariate ANOVA (analysis of variance) and searchlight methods. EVOX is an effective multivoxel evolutionary tool that can be used to tell where in the brain patterns responding to stimuli are located.

Mohammed Sadeq Al-Rawi, João Paulo Silva Cunha

Interactive Classification of Remote Sensing Images by Using Optimum-Path Forest and Genetic Programming

The use of remote sensing images as a source of information in agribusiness applications is very common. In those applications, it is fundamental to know how the land is occupied. However, the identification and recognition of crop regions in remote sensing images are not yet trivial tasks. Although automatic methods have been proposed for this, users very often prefer to identify regions manually. That happens because these methods are usually developed to solve specific problems, or, when they are of general purpose, they do not yield satisfying results. This work presents a new interactive approach based on relevance feedback to recognize regions in remote sensing images. Relevance feedback is a technique used in content-based image retrieval (CBIR) tasks; its objective is to incorporate user preferences into the search process. The proposed solution combines the Optimum-Path Forest (OPF) classifier with composite descriptors obtained by a Genetic Programming (GP) framework. The new approach has presented good results in the identification of pasture and coffee crops, surpassing the results obtained by a recently proposed method and the traditional Maximum Likelihood algorithm.

Jefersson Alex dos Santos, André Tavares da Silva, Ricardo da Silva Torres, Alexandre Xavier Falcão, Léo P. Magalhães, Rubens A. C. Lamparelli

A Dynamic Niching Quantum Genetic Algorithm for Automatic Evolution of Clusters

This paper proposes a novel genetic clustering algorithm, called the dynamic niching quantum genetic clustering algorithm (DNQGA), which is based on concepts and principles of quantum computing such as qubits and the superposition of states. Instead of a binary representation, a boundary-coded chromosome is used. Moreover, a dynamic identification of the niches is performed at each generation to automatically evolve the optimal number of clusters as well as the cluster centers of the data set. After identifying the niches of the population, a Q-gate with adaptive selection of the angle for each niche is introduced as a variation operator to drive individuals toward better solutions. Several data sets are used to demonstrate the algorithm's superiority. The experimental results show that the DNQGA clustering algorithm has high performance, effectiveness and flexibility.

Dongxia Chang, Yao Zhao

Image and Video Processing

Spatio-Temporal Fuzzy FDPA Filter

This paper presents an overview of a new spatio-temporal video filtering technique: an extension of standard techniques based on temporal Gaussian filtering, combined with the Fast Digital Paths Approach [9] and a fuzzy similarity function. The presented technique provides excellent noise suppression, especially for low-light sequences.

Marek Szczepański

Graph Aggregation Based Image Modeling and Indexing for Video Annotation

With the rapid growth of video multimedia databases and the lack of textual descriptions for many of them, video annotation has become a highly desired task. Conventional systems try to annotate a video query by simply finding its most similar videos in the database. Although the video annotation problem has been tackled in the last decade, no attention has been paid to the problem of assembling video keyframes in a sensible way to answer a given video query when no single candidate video turns out to be similar to the query. In this paper, we introduce a graph-based image modeling and indexing system for video annotation. Our system improves the video annotation task by assembling a set of graphs, representing keyframes of different videos, to answer the video query. The experimental results demonstrate the effectiveness of our system in annotating videos that cannot be annotated by classical approaches.

Najib Ben Aoun, Haytham Elghazel, Mohand-Said Hacid, Chokri Ben Amar

Violence Detection in Video Using Computer Vision Techniques

Whereas the action recognition community has focused mostly on detecting simple actions like clapping, walking or jogging, the detection of fights or, in general, aggressive behavior has been comparatively less studied. Such a capability may be extremely useful in video surveillance scenarios such as prisons, psychiatric or elderly care centers, or even in camera phones. After an analysis of previous approaches, we test the well-known Bag-of-Words framework used for action recognition on the specific problem of fight detection, along with two of the best action descriptors currently available: STIP and MoSIFT. For the purpose of evaluation, and to foster research on violence detection in video, we introduce a new video database containing 1000 sequences divided into two groups: fights and non-fights. Experiments on this database and on another one with fights from action movies show that fights can be detected with near 90% accuracy.

Enrique Bermejo Nievas, Oscar Deniz Suarez, Gloria Bueno García, Rahul Sukthankar
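The Bag-of-Words framework the abstract refers to can be sketched as follows, with random vectors standing in for STIP/MoSIFT descriptors; the codebook size and the plain k-means quantizer are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

def kmeans(X, k, iters=20):
    """Plain Lloyd's k-means to build the visual vocabulary."""
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - C[None]) ** 2).sum(-1)
        lab = d.argmin(1)
        for j in range(k):
            if (lab == j).any():
                C[j] = X[lab == j].mean(0)
    return C

def bow_histogram(desc, C):
    """Quantize descriptors against codebook C; return a normalized histogram."""
    d = ((desc[:, None, :] - C[None]) ** 2).sum(-1)
    h = np.bincount(d.argmin(1), minlength=len(C)).astype(float)
    return h / h.sum()

train_desc = rng.normal(size=(500, 16))   # stand-ins for STIP/MoSIFT descriptors
codebook = kmeans(train_desc, k=32)
clip_desc = rng.normal(size=(80, 16))     # descriptors of one video clip
h = bow_histogram(clip_desc, codebook)
print(h.shape, h.sum())
```

Each clip's histogram would then be fed to a classifier (e.g. an SVM) trained on the fight / non-fight labels.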

Speckle Denoising through Local Rényi Entropy Smoothing

Quality enhancement of radar images is closely related to speckle noise reduction. Plenty of such techniques have been developed by different authors; however, a definitive method has not yet been attained. Filtering methods are popular for reducing speckle noise. This paper introduces a new method based on filtering a smoothed local pseudo-Wigner distribution using a local Rényi entropy measure. Results are compared to other well-known noise reduction filters on artificially degraded speckle images and real-world image examples. Experimental results confirm that this method outperforms other classical speckle denoising methods.

Salvador Gabarda, Gabriel Cristóbal
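The local Rényi entropy measure underlying the method above can be sketched as below. The paper computes it over a local pseudo-Wigner distribution; here, plain windowed intensities are used purely to illustrate the measure, and the window size and order alpha are assumed values.

```python
import numpy as np

def renyi_entropy(p, alpha=3.0):
    """Rényi entropy of a discrete distribution p (alpha != 1)."""
    p = p[p > 0]
    return np.log((p ** alpha).sum()) / (1.0 - alpha)

def local_renyi_map(img, win=5, alpha=3.0):
    """Entropy of the normalized intensities in each win x win window."""
    r = win // 2
    pad = np.pad(img.astype(float), r, mode='reflect')
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            w = pad[i:i + win, j:j + win].ravel()
            p = w / w.sum() if w.sum() > 0 else np.full(w.size, 1.0 / w.size)
            out[i, j] = renyi_entropy(p, alpha)
    return out

img = np.arange(64, dtype=float).reshape(8, 8) + 1.0
emap = local_renyi_map(img)
print(emap.shape)
```

For a window of n samples the entropy lies in [0, log n], peaking for a uniform (maximally "busy") window, which is what makes it usable as a smoothing guide.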

Multiresolution Optical Flow Computation of Spherical Images

As an application of image analysis on Riemannian manifolds, we develop an accurate algorithm for the computation of optical flow of omni-directional images. To guarantee the accuracy and stability of image processing for spherical images, we introduce the Gaussian pyramid transform, that is, we develop variational optical flow computation with pyramid-transform-based multiresolution analysis for spherical images.

Yoshihiko Mochizuki, Atsushi Imiya

An Improved SalBayes Model with GMM

SalBayes is an efficient visual attention model. We describe an improved SalBayes model with a Gaussian Mixture Model (GMM), which can better fit objects under various transformations. The improved model learns the probability of an object's visual appearance within a particular feature map, and the probability density function (PDF) of each individual feature is modeled with a Gaussian mixture distribution. Results on the Amsterdam Library of Object Images (ALOI) show better performance than the original model.

Hairu Guo, Xiaojie Wang, Yixin Zhong, Song Bi

Exploring Alternative Spatial and Temporal Dense Representations for Action Recognition

The automatic analysis of video sequences in which individuals perform actions is currently receiving much attention in the computer vision community. Among the different visual features chosen to tackle the problem of action recognition, local histograms within a region of interest have proven very effective. However, we study for the first time whether spatiograms, which are histograms enriched with per-bin spatial information, can be an effective alternative for action characterization. On the other hand, the temporal information of these histograms is usually collapsed by simple averaging, which basically ignores the dynamics of the action. In contrast, this paper explores a temporally holistic representation in the form of recurrence matrices, which capture pairwise spatiogram relationships on a frame-by-frame basis. Experimental results show that recurrence matrices are powerful for action classification, whereas spatiograms, in their current usage, are not.

Pau Agustí, V. Javier Traver, Manuel J. Marin-Jimenez, Filiberto Pla

Image Denoising Using Bilateral Filter in High Dimensional PCA-Space

This paper proposes a new noise filtering method inspired by the bilateral filter (BF), the non-local means (NLM) filter, and principal component analysis (PCA). The main idea is to perform the BF in a multidimensional PCA-space using an anisotropic kernel. The filtered multidimensional signal is then transformed back onto the image spatial domain to yield the desired enhanced image. The proposed method is compared to the state of the art, and the obtained results are highly promising.

Quoc Bao Do, Azeddine Beghdadi, Marie Luong
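A rough sketch of the core idea above — range weights computed from patch distances in a PCA-reduced space — follows. The isotropic Gaussian kernel and all parameter values are simplifying assumptions, not the authors' anisotropic kernel.

```python
import numpy as np

def pca_bilateral(img, win=7, patch=5, h=10.0, n_comp=6):
    """Bilateral/NLM-style filter whose range weights come from
    patch distances in a PCA-reduced space (a rough sketch only)."""
    r, pr = win // 2, patch // 2
    pad = np.pad(img.astype(float), pr, mode='reflect')
    H, W = img.shape
    # Collect every patch and project it onto the leading principal components.
    patches = np.array([pad[i:i + patch, j:j + patch].ravel()
                        for i in range(H) for j in range(W)])
    mu = patches.mean(0)
    _, _, Vt = np.linalg.svd(patches - mu, full_matrices=False)
    feat = ((patches - mu) @ Vt[:n_comp].T).reshape(H, W, n_comp)

    out = np.empty((H, W), dtype=float)
    for i in range(H):
        for j in range(W):
            i0, i1 = max(0, i - r), min(H, i + r + 1)
            j0, j1 = max(0, j - r), min(W, j + r + 1)
            d2 = ((feat[i0:i1, j0:j1] - feat[i, j]) ** 2).sum(-1)
            w = np.exp(-d2 / (h * h))       # range weights in PCA-space
            out[i, j] = (w * img[i0:i1, j0:j1]).sum() / w.sum()
    return out

noisy = np.tile([0.0, 100.0], (16, 8)) \
        + np.random.default_rng(2).normal(0, 5, (16, 16))
den = pca_bilateral(noisy)
print(den.shape)
```

Because weights depend on patch appearance rather than single-pixel intensity, edges between the two stripes are preserved while flat regions are averaged.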

Image Super Resolution Using Sparse Image and Singular Values as Priors

In this paper, the single-image super-resolution problem is addressed using sparse data representation. Image super-resolution is an ill-posed inverse problem. Several methods have been proposed in the literature, ranging from simple interpolation techniques to learning-based approaches under various regularization frameworks. Recently, many researchers have shown interest in super-resolving images using sparse image representations. We slightly modify the procedure described in a similar, recently proposed work; the modifications concern the method of dictionary training, the feature extraction from the training database images, and the regularization. We use singular values as a prior for regularizing the ill-posed nature of the single-image super-resolution problem. The Method of Optimal Directions (MOD) algorithm is used to obtain high-resolution and low-resolution dictionaries from training image patches. Using the two dictionaries, the given low-resolution input image is super-resolved. The results of the proposed algorithm show improvements in visual quality and in the PSNR, RMSE and SSIM metrics over other similar methods.

Subrahmanyam Ravishankar, Challapalle Nagadastagiri Reddy, Shikha Tripathi, K. V. V. Murthy

Improved Gaussian Mixture Model for the Task of Object Tracking

This paper presents various motion detection methods: temporal averaging (TA), Bayes decision rules (BDR), the Gaussian mixture model (GMM), and an improved Gaussian mixture model (iGMM). This last model is improved by adapting the number of selected Gaussians, detecting and removing shadows, and handling stopped objects by locally modifying the updating process. We then compare these methods on specific cases, such as lighting changes and stopped objects. We further present four tracking methods. Finally, we test the two motion detection methods offering the best results on an object tracking task, in a traffic monitoring context, to evaluate these methods on outdoor sequences.

Ronan Sicre, Henri Nicolas
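A single-Gaussian-per-pixel background model, the simplest relative of the GMM/iGMM detectors compared above, can be sketched as follows; the learning rate, threshold, and toy frames are illustrative assumptions.

```python
import numpy as np

class RunningGaussianBG:
    """One Gaussian per pixel: mean/variance are updated only where the
    pixel matches the background, so foreground does not pollute the model."""
    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = first_frame.astype(float)
        self.var = np.full(first_frame.shape, 15.0 ** 2)
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        frame = frame.astype(float)
        d2 = (frame - self.mean) ** 2
        fg = d2 > (self.k ** 2) * self.var          # foreground mask
        a = np.where(fg, 0.0, self.alpha)           # update background only
        self.mean += a * (frame - self.mean)
        self.var += a * (d2 - self.var)
        return fg

rng = np.random.default_rng(3)
bg = RunningGaussianBG(rng.normal(100, 3, (20, 20)))
for _ in range(30):                                  # learn the static scene
    bg.apply(rng.normal(100, 3, (20, 20)))
frame = rng.normal(100, 3, (20, 20))
frame[5:10, 5:10] += 80                              # a bright moving object
mask = bg.apply(frame)
print(mask[5:10, 5:10].mean(), mask.mean())
```

A GMM keeps several such Gaussians per pixel and the paper's iGMM additionally adapts their number and handles shadows and stopped objects.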

Driver’s Fatigue and Drowsiness Detection to Reduce Traffic Accidents on Road

This paper proposes a robust and nonintrusive system for monitoring driver fatigue and drowsiness in real time. The proposed scheme begins by extracting the face from the video frame using the Support Vector Machine (SVM) face detector. Then a new approach for eye and mouth state analysis, based on the Circular Hough Transform (CHT), is applied to the extracted eye and mouth regions. Our drowsiness analysis method aims to detect micro-sleep periods by identifying the iris, using a novel method to characterize the driver's eye state. Fatigue analysis based on yawning detection is also very important to warn the driver before drowsiness sets in. To identify yawning, we detect a wide-open mouth using the same method proposed for eye state analysis. The system was tested with different sequences recorded in various conditions and with different subjects. Experimental results on the performance of the system are presented.

Nawal Alioua, Aouatif Amine, Mohammed Rziza, Driss Aboutajdine

Image Synthesis Based on Manifold Learning

A new methodology for image synthesis based on manifold learning is proposed. We employ a local analysis of the observations in a low-dimensional space computed by Locally Linear Embedding, and then we synthesize unknown images solving an inverse problem, which normally is ill-posed. We use some regularization procedures in order to prevent unstable solutions. Moreover, the Least Squares-Support Vector Regression (LS-SVR) method is used to estimate new samples in the embedding space. Furthermore, we also present a new methodology for multiple parameter choice in LS-SVR based on Generalized Cross-Validation. Our methodology is compared to a high-dimensional data interpolation method, and a similar approach that uses low-dimensional space representations to improve the input data analysis. We test the synthesis algorithm on databases that allow us to confirm visually the quality of the results. According to the experiments our method presents the lowest average relative errors with stable synthesis results.

Andrés Marino Álvarez-Meza, Juliana Valencia-Aguirre, Genaro Daza-Santacoloma, Carlos Daniel Acosta-Medina, Germán Castellanos-Domínguez

Hierarchical Foreground Detection in Dynamic Background

Foreground detection in dynamic backgrounds is one of the challenging problems in many vision-based applications. In this paper, we propose a hierarchical foreground detection algorithm in the HSL color space. With the proposed algorithm, the experimental precision on five testing sequences reached 56.46%, the best among the four compared methods.

Guoliang Lu, Mineichi Kudo, Jun Toyama

Image Super-Resolution Based Wavelet Framework with Gradient Prior

A novel super-resolution approach is presented. It is based on the local Lipschitz regularity of the wavelet transform across scales to predict the new detail coefficients and their gradients in the horizontal, vertical and diagonal directions after extrapolation. These form the inputs of a synthesis wavelet filter that performs the undecimated inverse wavelet transform, free of registration error, to obtain the output image and its gradient map respectively. Finally, the gradient descent algorithm is applied to the output image combined with the newly generated gradient map. Experiments show that our method improves both the objective evaluation, with a peak signal-to-noise ratio (PSNR) gain of up to 1.32 dB and 0.56 dB on average, and the subjective evaluation at edge pixels and even in texture regions, compared to the bicubic interpolation algorithm.

Yan Xu, Xueming M. Li, Chingyi Y. Suen

Are Performance Differences of Interest Operators Statistically Significant?

The differences in performance of a range of interest operators are examined in a null-hypothesis framework using McNemar's test on a widely-used database of images, to ascertain whether these apparent differences are statistically significant. It is found that some performance differences are indeed statistically significant, though most of them are at a fairly low level of confidence, i.e. with about a 1-in-20 chance that the results could be due to features of the evaluation database. A new evaluation measure, accurate homography estimation, is used to characterize the performance of feature extraction algorithms. Results suggest that operators employing longer descriptors are more reliable.

Nadia Kanwal, Shoaib Ehsan, Adrian F. Clark

Calibration

Accurate and Practical Calibration of a Depth and Color Camera Pair

We present an algorithm that simultaneously calibrates a color camera, a depth camera, and the relative pose between them. The method is designed to have three key features that no other available algorithm currently combines: it is accurate, practical, and applicable to a wide range of sensors. The method requires only a planar surface to be imaged from various poses. The calibration does not use color or depth discontinuities in the depth image, which makes it flexible and robust to noise. We perform experiments with a particular depth sensor and achieve the same accuracy as the proprietary calibration procedure of the manufacturer.

Daniel Herrera C., Juho Kannala, Janne Heikkilä

Color and Texture

Contourlet-Based Texture Retrieval Using a Mixture of Generalized Gaussian Distributions

We address the texture retrieval problem using a contourlet-based statistical representation. We propose a new contourlet distribution model using finite mixtures of generalized Gaussian distributions (MoGG). The MoGG captures a wide range of contourlet histogram shapes, which provides better description and discrimination of texture than single probability density functions (pdfs). We propose a model similarity measure based on a Kullback-Leibler divergence (KLD) approximation using Monte-Carlo sampling methods. We show that our approach, using a redundant contourlet transform, yields better texture discrimination and retrieval results than other methods of statistical wavelet/contourlet modelling.

Mohand Saïd Allili, Nadia Baaziz

Evaluation of Histogram-Based Similarity Functions for Different Color Spaces

In this paper we evaluate similarity functions for histograms such as chi-square and Bhattacharyya distance for different color spaces such as RGB or L*a*b*. Our main contribution is to show the performance of these histogram-based similarity functions combined with several color spaces. The evaluation is done on image sequences of the PETS 2009 dataset, where a sequence of frames is used to compute the histograms of three different persons in the scene. One of the most popular applications where similarity functions can be used is tracking. Data association is done in multiple stages where the first stage is the computation of the similarity of objects between two consecutive frames. Our evaluation concentrates on this first stage, where we use histograms as data type to compare the objects with each other. In this paper we present a comprehensive evaluation on a dataset of segmented persons with all combinations of the used similarity functions and color spaces.

Andreas Zweng, Thomas Rittler, Martin Kampel
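The two similarity functions named above can be sketched directly. The joint RGB histogram and bin count below are illustrative choices, and the Bhattacharyya distance is given in the sqrt(1 − BC) form commonly used (e.g. by OpenCV), which may differ from the paper's exact definition.

```python
import numpy as np

def chi_square(h1, h2, eps=1e-10):
    """Chi-square distance between two normalized histograms (0 if identical)."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def bhattacharyya(h1, h2):
    """Bhattacharyya distance; 0 for identical normalized histograms."""
    bc = np.sum(np.sqrt(h1 * h2))            # Bhattacharyya coefficient
    return np.sqrt(max(0.0, 1.0 - bc))

def color_histogram(img, bins=8):
    """Joint 3-channel histogram of an RGB patch, L1-normalized."""
    h, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins,) * 3,
                          range=((0, 256),) * 3)
    return h.ravel() / h.sum()

rng = np.random.default_rng(4)
a = rng.integers(0, 256, (32, 32, 3))        # two person image patches
b = rng.integers(0, 256, (32, 32, 3))
ha, hb = color_histogram(a), color_histogram(b)
print(chi_square(ha, ha), bhattacharyya(ha, ha), chi_square(ha, hb))
```

For other color spaces, the patches would simply be converted (e.g. to L*a*b*) before the histogram is built; the distance functions are unchanged.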

Color Contribution to Part-Based Person Detection in Different Types of Scenarios

Camera-based person detection is of paramount interest due to its potential applications. The task is difficult because of the great variety of backgrounds (scenarios, illumination) in which persons appear, as well as their intra-class variability (pose, clothing, occlusion). In fact, the class person is one of those included in the popular PASCAL visual object classes (VOC) challenge. A breakthrough for this challenge, regarding person detection, is due to Felzenszwalb et al. These authors proposed a part-based detector that relies on histograms of oriented gradients (HOG) and latent support vector machines (LatSVM) to learn a model of the whole human body and its constituent parts, as well as their relative positions. Since the approach of Felzenszwalb et al. appeared, new variants have been proposed, usually giving rise to more complex models. In this paper, we focus on an issue that has not attracted sufficient interest up to now: HOG is usually computed from the RGB color space, but other possibilities exist and deserve investigation. We challenge RGB space with the opponent color space (OPP), which is inspired by the human visual system. We compute the HOG on top of OPP, then train and test the part-based human classifier of Felzenszwalb et al. using the PASCAL VOC challenge protocols and person database. Our experiments demonstrate that OPP outperforms RGB. We also investigate possible differences among types of scenarios: indoor, urban and countryside. Interestingly, our experiments suggest that the benefits of OPP with respect to RGB mainly arise in indoor and countryside scenarios, those in which the human visual system was "designed" by evolution.

Muhammad Anwer Rao, David Vázquez, Antonio M. López

Content Adaptive Image Matching by Color-Entropy Segmentation and Inpainting

Image matching is a fundamental problem in computer vision. One of the best-known techniques is SIFT (scale-invariant feature transform), which searches for and extracts robust features in hierarchical image scale spaces for object identification. However, it often lacks efficiency, as it identifies many insignificant features such as tree leaves and grass tips in a natural building image. We introduce a content-adaptive image matching approach that preprocesses the image with a color-entropy-based segmentation and harmonic inpainting. Natural structures such as tree leaves have both high entropy and distinctive color, so such a combined measure can be both discriminative and robust. The harmonic inpainting smoothly interpolates the image function over the tree regions, blurring away those features and avoiding unnecessary matching there. Numerical experiments on building images show around a 10% improvement in the SIFT matching rate with a 20% to 30% saving in total CPU time.

Yuanchang Sun, Jack Xin

Face Image Enhancement Taking into Account Lighting Behavior on a Face

This paper presents a face image enhancement method that takes lighting behavior into account at a low computational cost. We previously proposed an enhancement method using a 3D face model; it is, however, difficult to implement in color imaging appliances due to its high computational cost. The newly proposed method decomposes the color information of a face into three components, i.e., specularities, shadows and albedo, by using a light reflection model. Spectral reflectance is recovered from the albedo and improved by bringing it close to a predefined reference. By modifying only the reflectance, the synthesized images appear naturally enhanced, maintaining the original expression of human faces as influenced by specularities and shadows. The proposed method reduces the processing time to one seventy-fifth of that of the method using a 3D face model. Experiments demonstrate that the proposed method is effective for imaging appliances in terms of computational cost and image quality.

Masato Tsukada, Chisato Funayama, Masatoshi Arai, Charles Dubout

Adaptive Matrices for Color Texture Classification

In this paper we introduce an integrative approach towards color texture classification learned by a supervised framework. Our approach is based on the Generalized Learning Vector Quantization (GLVQ), extended by an adaptive distance measure which is defined in the Fourier domain and 2D Gabor filters. We evaluate the proposed technique on a set of color texture images and compare results with those achieved by methods already existing in the literature. The features learned by GLVQ improve classification accuracy and they generalize much better for evaluation data previously unknown to the system.

Kerstin Bunte, Ioannis Giotis, Nicolai Petkov, Michael Biehl

Color Texture Classification Using Rao Distance between Multivariate Copula Based Models

This paper presents a new similarity measure based on the Rao distance for color texture classification and retrieval. Textures are characterized by a joint model of complex wavelet coefficients. This model is based on a Gaussian copula in order to capture the dependency between color components. A closed form of the Rao distance is then computed to measure the difference between two Gaussian-copula-based probability density functions on the corresponding manifold. Results, in terms of classification rates, show the effectiveness of the Rao geodesic distance applied on the manifold of Gaussian-copula-based probability distributions, in comparison with the Kullback-Leibler divergence.

Ahmed Drissi El Maliani, Mohammed El Hassouni, Nour-Eddine Lasmar, Yannick Berthoumieu, Driss Aboutajdine

Texture Analysis Based on Saddle Points-Based BEMD and LBP

In this paper, a new texture analysis method (EMDLBP) based on BEMD and LBP is proposed. Bidimensional empirical mode decomposition (BEMD) is a locally adaptive method suitable for the analysis of nonlinear or nonstationary signals. Texture images can be decomposed by BEMD into several bidimensional intrinsic mode functions (BIMFs), which reveal new characteristics of the images. In this paper, we first add saddle points as supporting points for the interpolation to improve the original BEMD, and the new BEMD method is then used to decompose the image into components (BIMFs). The Local Binary Pattern (LBP) method is then used to extract features from the BIMFs. Experiments show that the texture image recognition rate of our method is better than that of other LBP-based methods.

JianJia Pan, YuanYan Tang
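The LBP stage above can be illustrated with the basic 3×3 operator; this is the standard 8-neighbour code, applied here to a raw image rather than to BIMFs, and the neighbour ordering is an arbitrary convention.

```python
import numpy as np

def lbp_3x3(img):
    """Basic 8-neighbour LBP code for each interior pixel: each neighbour
    greater than or equal to the center contributes one bit."""
    img = img.astype(float)
    c = img[1:-1, 1:-1]
    # clockwise neighbours starting at the top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.uint8)
    for bit, (di, dj) in enumerate(offs):
        n = img[1 + di:img.shape[0] - 1 + di, 1 + dj:img.shape[1] - 1 + dj]
        code |= ((n >= c).astype(np.uint8) << bit)
    return code

def lbp_histogram(img):
    """256-bin normalized histogram of LBP codes, the usual texture feature."""
    h = np.bincount(lbp_3x3(img).ravel(), minlength=256).astype(float)
    return h / h.sum()

flat = np.full((16, 16), 7.0)
print(lbp_3x3(flat)[0, 0])   # all neighbours equal -> all bits set -> 255
```

In the EMDLBP scheme, such histograms would be computed per BIMF and concatenated into the final descriptor.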

A Robust Approach to Detect Tampering by Exploring Correlation Patterns

Exposing digital forgeries by detecting local correlation patterns of images has become an important kind of approach, among many others, to establishing the integrity of digital visual content. However, this kind of method is sensitive to JPEG compression, since compression attenuates the characteristics of the local correlation pattern introduced by color filter array (CFA) interpolation. Rather than concentrating on the differences between image textures, we calculate the posterior probability map of CFA interpolation with a compression-related Gaussian model, so our approach automatically adapts to compression. Experimental results on 1000 tampered images show the validity and efficiency of the proposed method.

Lu Li, Jianru Xue, Xiaofeng Wang, Lihua Tian

Tracking and Stereo Vision

Robust Signal Generation and Analysis of Rat Embryonic Heart Rate in Vitro Using Laplacian Eigenmaps and Empirical Mode Decomposition

To develop an accurate and suitable method for measuring the embryonic heart rate in vitro, a system combining Laplacian eigenmaps and empirical mode decomposition has been proposed. The proposed method assesses the heart activity in two steps: signal generation and heart signal analysis. Signal generation is achieved by Laplacian eigenmaps (LEM) in conjunction with the correlation coefficient, while the signal analysis of the heart motion is performed by a modified empirical mode decomposition (EMD). LEM helps to find the templates for the atrium and the ventricle, whereas EMD helps to find the non-linear trend term without defining any regression model. The proposed method also removes the motion artifacts produced by the non-rigid deformation of the embryo's shape, the noise induced during data acquisition, and higher-order harmonics. To check the validity of the proposed method, 151 videos have been investigated. Experimental results demonstrate the superiority of the proposed method in comparison to three recent methods.

Muhammad Khalid Khan Niazi, Muhammad Talal Ibrahim, Mats F. Nilsson, Anna-Carin Sköld, Ling Guan, Ingela Nyström

Radial Symmetry Guided Particle Filter for Robust Iris Tracking

While pupil tracking under active infrared illumination is now relatively well-established, current iris tracking algorithms often fail due to several non-ideal conditions. In this paper, we present a novel approach for tracking the iris. We introduce a radial symmetry detector into the proposal distribution to guide the particles towards the high probability region. Experimental results demonstrate the ability of the proposed particle filter to robustly track the iris in challenging conditions, such as complex dynamics. Compared to some previous methods, our iris tracker is also able to automatically recover from failure.

Francis Martinez, Andrea Carbone, Edwige Pissaloux

Spatio-Temporal Stereo Disparity Integration

Using image sequences as input for vision-based algorithms allows the possibility of merging information from previous images into the analysis of the current image. In the context of video-based driver assistance systems, such temporal analysis can lead to the improvement of depth estimation of visible objects. This paper presents a Kalman filter-based approach that focuses on the reduction of uncertainty in disparity maps of image sequences. For each pixel in the current disparity map, we incorporate disparity data from neighbourhoods of corresponding pixels in the immediate previous and the current image frame. Similar approaches have been considered before that also use disparity information from previous images, but without complementing the analysis with data from neighbouring pixels.

Sandino Morales, Reinhard Klette
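The per-pixel temporal fusion idea above can be sketched with a scalar Kalman filter per pixel; the process-noise value, measurement variance, and the omission of the neighbourhood aggregation step are simplifications of the paper's method.

```python
import numpy as np

def kalman_fuse(d_prev, var_prev, d_meas, var_meas, q=0.5):
    """One predict/update step of a scalar Kalman filter per pixel:
    fuse the propagated previous disparity with the new measurement."""
    var_pred = var_prev + q                 # process noise inflates uncertainty
    gain = var_pred / (var_pred + var_meas)
    d_new = d_prev + gain * (d_meas - d_prev)
    var_new = (1.0 - gain) * var_pred
    return d_new, var_new

rng = np.random.default_rng(5)
true_d = np.full((10, 10), 12.0)            # a static scene at disparity 12
d = true_d + rng.normal(0, 2, true_d.shape)
v = np.full(true_d.shape, 4.0)
for _ in range(20):                          # one noisy disparity map per frame
    meas = true_d + rng.normal(0, 2, true_d.shape)
    d, v = kalman_fuse(d, v, meas, var_meas=4.0)
print(float(np.abs(d - true_d).mean()), float(v.mean()))
```

The fused disparity is less noisy than any single measurement, and the per-pixel variance settles well below the raw measurement variance; the paper additionally draws measurements from neighbourhoods of corresponding pixels.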

Refractive Index Estimation Using Polarisation and Photometric Stereo

This paper describes a novel approach to the estimation of the refractive indices of surfaces using polarisation information. We use a moments estimation method for computing the polarisation image components from intensity images obtained at multiple polariser angles. This yields estimates of the mean intensity, polarisation and phase at each pixel location. The surface normals are estimated using photometric stereo. Using the Fresnel theory, at each pixel we estimate the refractive index of the surface from the zenith angle of the surface normal and the measured polarisation. The method has been applied to determine variations in paintings and human skin refractive indices, and also for inspecting fruit surfaces. To test the effectiveness of the method, we coat a variety of objects with a layer of transparent liquid of known refractive index. Experiments on naturally occurring surfaces (e.g. human skin and fruit) and manufactured objects such as plastic balls and paintings illustrate the effectiveness of this method in estimating refractive indices.

Gule Saman, Edwin R. Hancock

3D Gestural Interaction for Stereoscopic Visualization on Mobile Devices

The number of mobile devices such as smartphones and Tablet PCs has increased dramatically in recent years. New mobile devices are equipped with integrated cameras and large displays, which make interaction with the device more efficient. Although most previous work on interaction between humans and mobile devices is based on 2D touch-screen displays, camera-based interaction opens a new way to manipulate objects in the 3D space behind the device, in the camera's field of view. In this paper, our gestural interaction relies heavily on particular patterns of local image orientation called rotational symmetries. The approach is based on finding the most suitable pattern from a large set of rotational symmetries of different orders, which yields a reliable detector for hand gestures. Gesture detection and tracking can thus serve as an efficient tool for 3D manipulation in various computer vision and augmented reality applications. The final output is rendered into color anaglyphs for 3D visualization; depending on the coding technology, different low-cost 3D glasses are used by viewers.

Shahrouz Yousefi, Farid Abedan Kondori, Haibo Li

Statistical Tuning of Adaptive-Weight Depth Map Algorithm

In depth map generation, the algorithm parameter settings that yield an accurate disparity estimation are usually chosen empirically or based on unplanned experiments. A systematic statistical approach, including classical and exploratory data analyses on over 14000 images to measure the relative influence of the parameters, allows their tuning based on the number of bad_pixels. Our approach is systematic in the sense that the heuristics used for parameter tuning are supported by formal statistical methods. The implemented methodology improves the performance of dense depth map algorithms: as a result of the statistics-based tuning, the algorithm improves from 16.78% to 14.48% bad_pixels, rising 7 spots in the Middlebury Stereo Evaluation Ranking Table. Performance is measured by the distance between the algorithm results and the Middlebury ground truth. Future work aims to achieve the tuning using significantly smaller data sets with fractional factorial and surface-response designs of experiments.

Alejandro Hoyos, John Congote, Iñigo Barandiaran, Diego Acosta, Oscar Ruiz
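The bad_pixels measure used for tuning above can be sketched as below, following the standard Middlebury definition with an assumed error threshold of one disparity level.

```python
import numpy as np

def bad_pixels(disp, gt, delta=1.0):
    """Percentage of pixels whose disparity deviates from the ground
    truth by more than delta (the Middlebury 'bad pixels' measure)."""
    valid = np.isfinite(gt)                       # ignore pixels with no GT
    bad = np.abs(disp - gt)[valid] > delta
    return 100.0 * bad.mean()

gt = np.full((8, 8), 10.0)
disp = gt.copy()
disp[0, :4] = 20.0                                # 4 of 64 pixels are wrong
print(bad_pixels(disp, gt))                       # -> 6.25
```

Parameter tuning then amounts to minimizing this percentage over the algorithm's parameter space.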

Backmatter
