(1) Overview

Introduction

Digital particle image velocimetry (DPIV) is a common technique for non-intrusive, quantitative and qualitative flow visualization. A number of research articles deal with the implementation and optimization of the DPIV technique [, , , , , , , ]. Here, a GUI-based open-source tool (PIVlab) for DPIV analyses in MATLAB (MathWorks, Natick, Massachusetts) is presented. The tool takes advantage of several built-in MATLAB features, and eases subsequent data processing by providing a close link to the popular MATLAB user interface.

In DPIV, the motion of a fluid (either gaseous or liquid) is visualized by illuminating a thin sheet of fluid containing reflective and neutrally buoyant tracer particles. A digital image sensor is positioned parallel to the illuminated sheet, capturing the movement of the particles (see Figure 1). In most DPIV analyses, two images (A and B) of the illuminated plane are captured at t₀ and t₀ + Δt. Velocities in the sheet can hence be derived from Δt and the distance that the particles travelled from image A to B (particle displacement). In DPIV, the particle displacement is calculated for groups of particles by evaluating the cross-correlation of many small sub-images (interrogation areas). The correlation yields the most probable displacement for a group of particles travelling on a straight line between image A and image B [].
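As an illustrative example with arbitrary numbers: a particle pattern that travels d = 5 pixels between image A and image B, at an image scale of s = 10⁻⁴ m/pixel and Δt = 1 ms, corresponds to a local velocity of u = d·s/Δt = 0.5 m/s.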

Figure 1 

Principle of DPIV: A laser sheet illuminates the particles contained in the fluid. A high-speed camera records the displacement of the particle pattern.

Implementation and architecture

PIVlab is programmed in MATLAB and additionally requires the image processing toolbox to run. The main file is ‘PIVlab_GUI.m’. It contains all GUI-related program parts and most of the functions that are accessible from the GUI. The most important functions for performing a PIV analysis (pre-processing: ‘PIVlab_preproc.m’; cross-correlation: ‘piv_FFTmulti.m’ and ‘piv_DCC.m’) are also accessible from the command line (see example in ‘PIVlab_commandline.m’) when the user desires to automate the whole process or to include it in another application.

A DPIV analysis typically consists of three main steps (image pre-processing, image evaluation, post-processing, see Figure 2). All of these steps are accessible from the GUI of PIVlab. The workflow is menu-based, starting at the left with image input and pre-processing options, and then continuing to the right of the menu (image evaluation / PIV analysis, post-processing, data exploration). This workflow is demonstrated in tutorials and screen capture videos that can be found on the project website.

Figure 2 

DPIV analyses in PIVlab. Overview of the workflow and the implemented features that are presented in the next sections.

The following sections give an overview of the relevant features and techniques that are accessible in PIVlab:

Image pre-processing

One common approach to improve the measurement quality is the enhancement of images before the actual image correlation takes place [, ]. This section presents a selected number of pre-processing techniques that are implemented in PIVlab (see Figure 3 for examples).

Figure 3 

The effect of several pre-processing techniques, see text for a description.

Histogram equalization

Contrast limited adaptive histogram equalization (CLAHE) was developed to increase the readability of image data in medical imaging []. CLAHE operates on small regions (tiles) of the image: In every tile, the most frequent intensities of the image histogram are spread out to the full range of the data (from 0 to 255 in 8-bit images). Regions with low exposure and regions with high exposure are therefore optimized independently. CLAHE significantly improves the probability of detecting valid vectors in experimental images by 4.7 ± 3.2% [].
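Outside the GUI, the effect of CLAHE can be reproduced with the function adapthisteq from MATLAB's image processing toolbox. The following minimal sketch uses illustrative values for the tile size and the clip limit; these are not necessarily the settings applied internally by PIVlab:

A = imread('imageA.tif');                   % hypothetical 8-bit PIV image
if size(A,3) > 1, A = rgb2gray(A); end      % ensure greyscale input
Aclahe = adapthisteq(A, 'NumTiles', [8 8], ...   % operate on small tiles
                     'ClipLimit', 0.01);         % limit contrast amplification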

Intensity highpass

Inhomogeneous lighting can cause low-frequency background information, which can be removed by applying a high-pass filter that mostly conserves the high-frequency information from the particle illumination []. The filter emphasizes the particle information in the image, and suppresses any low-frequency information in the images (including all low-frequency displacement information).
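A minimal sketch of such a filter subtracts a low-pass (blurred) copy of the image from the original; the Gaussian kernel size and standard deviation below are illustrative assumptions, not PIVlab's internal settings:

A   = double(imread('imageA.tif'));                          % hypothetical PIV image
lp  = imfilter(A, fspecial('gaussian', 15, 3), 'replicate'); % low-frequency background
hp  = A - lp;                                                % conserve the particle signal
hp  = hp - min(hp(:));                                       % shift to a positive range
Ahp = uint8(255 * hp / max(hp(:)));                          % rescale to 8 bit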

Intensity capping

The DPIV method assumes that all particles within an interrogation window have the same motion. This will hardly be the case in reality, as perfectly uniform flow practically does not exist. Bright particles or bright spots within the area will contribute statistically more to the correlation signal, which may bias the result in non-uniform flows []. The intensity capping filter circumvents this problem: An upper limit of the greyscale intensity is selected, and all pixels that exceed the threshold are replaced by this upper limit. Therefore, unlike CLAHE, only a small amount of the pixel intensity information is adjusted, limiting the potential negative impact of image modifications []. Intensity capping improves the probability of detecting valid vectors in experimental images by 5.2 ± 2.5% [].
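A minimal sketch of intensity capping is given below; the definition of the upper limit (median plus n standard deviations) is an assumption for illustration, and PIVlab may compute the cap differently:

A   = double(imread('imageA.tif'));     % hypothetical PIV image
n   = 2;                                % illustrative filter strength
cap = median(A(:)) + n * std(A(:));     % assumed upper greyscale limit
A(A > cap) = cap;                       % replace all brighter pixels by the limit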

Image evaluation

The most sensitive part of a DPIV analysis is the cross-correlation algorithm: Small sub-images (interrogation areas) of an image pair are cross-correlated to derive the most probable particle displacement in the interrogation areas. In essence, the cross-correlation is a statistical pattern matching technique that tries to find the particle pattern from interrogation area A back in interrogation area B. This statistical technique is implemented with the discrete cross-correlation function []:

(1)
C(m,n) = \sum_{i}\sum_{j} A(i,j)\,B(i-m,\, j-n)

where A and B are corresponding interrogation areas from image A and image B.

The location of the intensity peak in the resulting correlation matrix C gives the most probable displacement of the particles from A to B [].

There are two common approaches to solve Equation 1: The most straightforward approach is to compute the correlation matrix in the spatial domain (see Figure 4 for a graphical representation of this correlation). This approach is called direct cross correlation [], particle image pattern matching [], or convolution filtering [].

Figure 4 

Calculation of the correlation matrix using DCC as it is performed in MATLAB. Interrogation area A (size 4·4 pixels) is correlated with interrogation area B (size 8·8 pixels) and yields the correlation matrix (size 9·9 pixels). Adapted from [].

Another approach is to compute the correlation matrix in the frequency domain (discrete Fourier transform, DFT). The DFT is calculated using a fast Fourier transform []. Both approaches are implemented in PIVlab, as each has advantages as well as some drawbacks; these will be briefly presented in the next sections. More details on the mathematical background of cross correlation can be found elsewhere (e. g. [, ]).

Direct cross correlation (DCC)

The direct cross correlation computes the correlation matrix in the spatial domain. In DCC, the interrogation areas A and B can have two different sizes []. When B is chosen twice as large as A, a particle displacement of up to half the size of A will not result in any loss of information and will yield a reliable correlation matrix with low background noise (see Figure 4, top middle and Figure 5, top). DCC has been shown to create more accurate results than a standard DFT approach []. The disadvantage of DCC is the increased computational cost with respect to a standard DFT approach, especially with large interrogation areas [, , ].
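In MATLAB, this spatial correlation can be sketched with conv2 and a rotated interrogation area; A (e. g. 64·64 pixels) and B (e. g. 128·128 pixels, centred on the same position) are assumed to be already extracted from the image pair. This illustrates the principle only; ‘piv_DCC.m’ may differ in padding and normalization:

C = conv2(double(B), rot90(double(A), 2), 'valid'); % correlate A at every full-overlap shift in B
[ypeak, xpeak] = find(C == max(C(:)), 1);           % integer location of the correlation peak
dy = ypeak - (size(C,1) + 1) / 2;                   % displacement relative to the zero-shift centre
dx = xpeak - (size(C,2) + 1) / 2;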

Figure 5 

Correlation matrices of the DCC (top) and the DFT approach (bottom), interrogation area A is 64·64 pixels for both DCC and DFT. Area B is 128·128 pixels in DCC and 64·64 pixels in DFT. In DCC, the background noise does not increase up to a displacement of 32 pixels. In DFT, background noise immediately increases if the displacement is larger than 0 pixels. A displacement of more than 32 pixels will flip the correlation peak to the opposite side of the correlation matrix, and thus makes correct measurements impossible.

Discrete Fourier transform (DFT) and advanced DFT techniques

The potential drawback of DCC – the computational cost – can be resolved by calculating the correlation matrix in the frequency domain [] using FFT (see Figure 6A). This approach uses interrogation areas of identical size; therefore every particle displacement induces some loss of information, which can be noticed by the increasing amount of background noise in the correlation matrix (see Figure 5, bottom). This background noise complicates the detection of the intensity peak and decreases accuracy. It is therefore advisable to reduce the displacement to about one quarter of the interrogation area, in order to keep the background noise in the correlation matrix low [].
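A minimal single-pass sketch of this frequency domain correlation for two equally sized (even-sized) interrogation areas A and B, assumed to be already extracted, is given below. Note that the correlation is circular, which is also the reason why the peak flips to the opposite side of the correlation matrix for displacements larger than half the interrogation area (see Figure 5):

C = fftshift(real(ifft2(conj(fft2(double(A))) .* fft2(double(B))))); % circular cross-correlation
[ypeak, xpeak] = find(C == max(C(:)), 1);  % integer peak location
dy = ypeak - (size(A,1)/2 + 1);            % displacement relative to the zero-shift centre
dx = xpeak - (size(A,2)/2 + 1);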

Figure 6 

A: Calculation speed of DCC in comparison with DFT (both calculations performed in MATLAB). For the FFT calculations, FFTW is used [], which accepts inputs of arbitrary size, but is slow for sizes that are prime or have large prime factors (note the peaks in the graph). Generally, the DFT approach is much faster. B: Principle of the window deformation technique. Left: After the first interrogation pass, displacement information is present at nine positions inside the interrogation area. This information is interpolated to derive the displacement of every pixel of the interrogation area. Subsequently, interrogation area B is deformed, followed by several additional interrogation passes.

This disadvantage can be offset by running several passes of the DFT on the same dataset []: The integer result of the first analysis pass is used to offset the interrogation area in the following passes. The loss of information due to particle displacement is hence minimized. The interrogation grid can be refined with every pass [], yielding a high spatial resolution in the final vector map, together with a high dynamic velocity range and an optimal signal-to-noise ratio.

In real flows, the particle patterns will additionally be sheared and rotated; the non-uniform particle motion will broaden the intensity peak in the correlation matrix and deteriorate the result. Several methods that account for the transformation of the interrogation areas have been proposed [, , ]. In PIVlab, the following procedure is implemented: The analysis is started with a regular DFT analysis. The first pass yields displacement information at the centre of each interrogation area. When the areas overlap one another by e. g. 50%, there is additional displacement information at the borders and corners of each interrogation area (nine positions in total, see Figure 6B, left). This information is used to calculate displacement information at every pixel of the interrogation areas via bilinear interpolation. Next, interrogation area B is deformed according to this displacement information (see Figure 6B, right) using either bilinear interpolation (faster) or spline interpolation (higher precision, but slower). The next interrogation pass correlates the original interrogation area A with the deformed area B. The remaining displacement information of each pass is accumulated. After a few passes, the displacement has been determined with high accuracy. Between the passes, but not after the final pass, the velocity information is smoothed and validated, and missing information is interpolated. Data validation can be relatively strict, as any deteriorating effect of interpolation and smoothing will be corrected in the correlation of the following pass.
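The deformation step can be sketched as follows, assuming that the displacement components U and V from the previous pass are known at a coarse grid (coordinates Xc, Yc) that spans the borders and corners of interrogation area B; all variable names are illustrative:

[X, Y] = meshgrid(1:size(B,2), 1:size(B,1));   % pixel coordinates of area B
Up = interp2(Xc, Yc, U, X, Y, 'linear');       % bilinear per-pixel displacement
Vp = interp2(Xc, Yc, V, X, Y, 'linear');
Bdef = interp2(X, Y, double(B), X + Up, Y + Vp, 'linear', 0); % deformed area B ('spline' for higher precision)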

Peak finding

The choice of the peak finding technique is – similar to the choice of the cross correlation technique – another important factor for the accuracy of DPIV. The integer displacement of two interrogation areas can be determined straightforwardly from the location of the intensity peak of the correlation matrix. The location can be refined with sub-pixel precision using a range of methods [, , ]. The standard procedure is to fit a Gaussian function to the integer intensity distribution (see Figure 7). It is sufficient to use only the directly adjacent vertical and horizontal pixels (two times a 3-point fit = 2·3-point fit) and to evaluate the x and y axis separately. The peak of the fitted function is used to determine the particle displacement with sub-pixel precision.
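For one axis, the 3-point Gaussian fit has a closed-form solution. A sketch for the x axis, assuming the integer peak of the correlation matrix C lies at (yp, xp) and the correlation values involved are positive:

c0 = log(C(yp, xp));                        % peak value
cl = log(C(yp, xp-1));                      % left neighbour
cr = log(C(yp, xp+1));                      % right neighbour
dx_sub = (cl - cr) / (2*cl - 4*c0 + 2*cr);  % sub-pixel offset (-0.5 ... 0.5)
x_refined = xp + dx_sub;                    % refined peak position; repeat along y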

Figure 7 

Principle of the Gaussian 2·3-point fit: Sub-pixel precision is achieved by fitting a one-dimensional Gaussian function (solid line) to the integer intensity distribution of the correlation matrix (dots) for both axes independently (only one axis is shown here).

If the particle displacement within an interrogation area is exposed to shear or rotation, or if the images suffer from excessive motion blur, the displacement peak may have an elliptical shape []. In this case, a two-dimensional Gaussian function (9-point fit) performs better []. The added value of using a two-dimensional Gaussian function is more pronounced in non-deforming methods, such as DCC and single-pass DFT. Both peak finding algorithms are implemented in PIVlab.

Post-processing

Data validation

Post-processing of DPIV data is generally required to obtain reliable results []. A basic method to filter outliers in PIVlab is to choose limits for acceptable velocities manually. Velocity thresholds can also be determined semi-automatically by comparing each velocity component with a lower and an upper threshold (t_{lower} and t_{upper}):

(2)
t_{lower} = \bar{u} - n\sigma_{u}

(3)
t_{upper} = \bar{u} + n\sigma_{u}

where \bar{u} = mean velocity and \sigma_{u} = standard deviation of u.

The user-defined value of n determines the strictness of this filter. This filter works very well in practice, as it adapts to some extent to the nature of the flow.
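A minimal sketch of this filter for one velocity component u (a matrix that may already contain NaN entries) is given below; the value of n is illustrative:

n  = 3;                          % illustrative strictness
v  = u(~isnan(u));               % valid velocity values
tl = mean(v) - n * std(v);       % lower threshold, Equation 2
tu = mean(v) + n * std(v);       % upper threshold, Equation 3
u(u < tl | u > tu) = NaN;        % mark outliers as missing data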

A more universal outlier detection method that automatically adapts to local flow situations is the normalized median test [] (or local median filter). The filter evaluates the velocity fluctuation with respect to the median in a 3·3 neighbourhood around a central vector. The median of this fluctuation is then used as normalization for a more classical median test.
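A sketch of this test for one velocity component is given below; epsilon (which accounts for a background noise level) and the detection threshold are typical values from the literature, not necessarily the defaults of PIVlab, and border vectors are skipped for brevity:

epsilon = 0.1; thresh = 2;                 % typical literature values
[ny, nx] = size(u);
outlier = false(ny, nx);
for j = 2:ny-1
    for i = 2:nx-1
        nb = u(j-1:j+1, i-1:i+1);          % 3·3 neighbourhood
        nb = nb(:); nb(5) = [];            % drop the central vector itself
        um = median(nb);                   % neighbourhood median
        rm = median(abs(nb - um));         % median residual of the neighbours
        outlier(j,i) = abs(u(j,i) - um) / (rm + epsilon) > thresh;
    end
end
u(outlier) = NaN;                          % reject detected outliers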

Data interpolation

After the removal of outliers, missing vectors should be replaced by interpolated data []. One common technique is the 3·3 neighbourhood (3·3 mean) interpolation. Two-dimensional linear or spline interpolation [] is another alternative. PIVlab uses a boundary value solver for interpolation. The approach provides an interpolation that is generally fairly smooth, and over larger regions with missing data, it will tend towards the average of the boundary velocities, which prevents overshooting.
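The principle can be sketched with a simple iterative Laplace solver that repeatedly replaces every missing value by the mean of its four neighbours until the solution converges. The sketch below uses periodic borders (via circshift) for brevity and is a simplified stand-in for the solver implemented in PIVlab:

gap = isnan(u);                            % positions of missing vectors
u(gap) = mean(u(~gap));                    % initial guess: mean of the valid data
for it = 1:500                             % illustrative iteration budget
    uprev = u;
    s = (circshift(u, [1 0]) + circshift(u, [-1 0]) + ...
         circshift(u, [0 1]) + circshift(u, [0 -1])) / 4;  % 4-neighbour mean
    u(gap) = s(gap);                       % update only the missing positions
    if max(abs(u(gap) - uprev(gap))) < 1e-6, break, end    % converged
end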

Data smoothing

A certain amount of measurement noise will be inevitable in DPIV analyses []. Noise can be effectively reduced by applying data smoothing. Raffel et al. [] propose to perform a convolution of the data with a 2·2 or 3·3 kernel with equal weights. Another common and effective method to smooth DPIV data is median filtering []. More advanced smoothing algorithms are based on a penalized least squares method [] – the latter technique is implemented in PIVlab.
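The two simpler methods can be sketched in two lines for a velocity component u; the penalized least squares method (‘smoothn’) used by PIVlab is not reproduced here:

us_conv = conv2(u, ones(3)/9, 'same');   % convolution with an equal-weight 3·3 kernel
us_med  = medfilt2(u, [3 3]);            % 3·3 median filter (image processing toolbox)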

Data exploration

Many DPIV studies reveal very complex flow patterns. Such complexity is hard to describe with vector maps alone. The strength of PIVlab is that it offers a large number of possibilities to further process and distil the results: Derivatives such as vorticity and divergence can be calculated, data can be extracted along paths or from areas, and integral quantities can be calculated comfortably.

Quality control

Functional testing has been performed on Windows XP and Windows 7 with MATLAB releases R2010a, R2011a and R2013b. Although the authors do not have access to additional operating systems and MATLAB versions, reports from many users indicate that PIVlab also works flawlessly on Mac OS X and UNIX/Linux. Bugs that have been discovered in some of the last ten PIVlab releases have been corrected before the next release. The user acceptance of PIVlab has been monitored and optimized while the software was used by more than 150 supervised students with several different operating systems and MATLAB releases.

Furthermore, extensive tests on the quality of the results obtained with PIVlab were performed using more than 6·10⁴ synthetic particle images. The image properties were modified in a way that allowed the accuracy of the analyses to be calculated as a function of all important PIV image parameters, following []. Detailed results of these tests are reported in []. In addition, the PIVlab toolbox contains a script that performs a fully automatic analysis of the accuracy of the DPIV analysis: The script ‘Accuracy.m’ will generate random PIV images, evaluate these images, and perform a comparison between the calculated and the real displacement of the interrogation areas. This script helps users to make sure that PIVlab works accurately on their combination of MATLAB and operating system.

Accuracy of the DPIV analyses

The quality of the DPIV measurements in PIVlab was extensively evaluated using synthetic particle images with known properties. The effect of particle image diameter, particle density, sensor noise, particle pair loss, motion blur and shear was determined and is reported in detail elsewhere []. These quality tests revealed that DFT using window deformation outperforms the basic DCC and DFT correlation, especially under challenging conditions. The additional computational load is compensated by the increased robustness and accuracy of the algorithm. Under optimal conditions (particle image diameter ≈ 3 pixels, particle density ≈ 5–15 particles/window, no noise, no particle pair loss, no motion blur, no shear) the bias error of the window deformation DPIV algorithms is smaller than 0.005 pixels and the random error is below 0.02 pixels.

Interpolation performance

The performance of the most popular interpolation techniques and of the boundary value solver is tested as follows: A number of real and synthetic image pairs are analyzed using standard DPIV (see Figure 8, left). An increasing amount of random vectors (0% to 15%) is removed from the resulting vector matrix (see Figure 8, middle). The missing data is interpolated using one of the interpolators (see Figure 8, right), and finally, the mean absolute difference between the original data and the interpolated data is determined. This whole procedure is repeated 1000 times for each image pair and each level of missing data. Under challenging conditions and large amounts of missing data, the boundary value solver which is implemented in PIVlab performs best (see Figure 9). More detail is given in [].

Figure 8 

Procedure for testing several interpolation techniques. Left: Original velocity data. Middle: Data is removed at random positions. Right: Gaps are filled with interpolation and compared to the original velocity data.

Figure 9 

Performance of popular interpolators. The boundary value solver performs best under the presence of larger amounts (> 10%) of missing data. n = 1000.

Smoothing performance

The performance of the smoothing algorithms is tested using DPIV data of synthetic particle images (more detail is given in []). Data smoothing – no matter what algorithm is used – always decreases the amount of noise introduced by the DPIV algorithm and hence increases the quality of the velocity estimation (see Figure 10). The application of the ‘Smoothn’ algorithm [] reduces the difference between the calculated and the true velocities. It was therefore decided to implement this algorithm in PIVlab in order to further enhance the accuracy of the velocity estimation.

Figure 10 

Validation of smoothing algorithms. Left: Maximum absolute difference between the calculated velocities and the true velocities in percent of the maximum true velocity. Right: Mean absolute difference. n=250.

Support

Support is available through the website of PIVlab (http://PIVlab.blogspot.com), where questions can be posted and generally receive quick responses from the authors. The email address for support requests is also noted on this website. It furthermore contains news, tips, updates and tutorials for PIVlab users.

(2) Availability

Operating system

Based on MATLAB 7.10.0 (R2010a): Windows, UNIX/Linux, Macintosh

Programming language

MATLAB 7.10.0 (R2010a), upward compatible

Additional system requirements

MATLAB 7.10.0 (R2010a): 1 GB disk space, 1 GB RAM

Dependencies

MATLAB's Image Processing Toolbox is required

List of contributors

WT: Programming of the tool and the graphical user interface (GUI); selection, implementation and evaluation of the algorithms; writing this article; support and maintenance of the tool.

EJS: Support with the layout of both the tool and the GUI; feedback, tips and discussions on the functionality.

Archive

Licence

CC-BY

Publisher

William Thielicke

Date published

03/07/2014

Language

English

(3) Reuse potential

The analysis of flow velocities is an integral part of many research disciplines, and DPIV has become a very popular method in this field. The development of user-friendly, accurate and free DPIV tools greatly enhances the practicability and availability of this technique. In many cases, it can even be sufficient to use low-cost digital cameras and lasers to study complex flows with PIVlab (e. g. []). The simple graphical user interface of PIVlab enables users who are new to the field of DPIV to generate accurate results quickly.

PIVlab has proved its value in a number of research projects. The tool has been used for ‘conventional’ measurements of fluid velocities, but also in very different contexts, for example flow visualizations within cells, echocardiographic velocimetry of the human heart and the deformation of sand and gravel [, , , , , , , , , , , , , , , , , , , , , , , , ].

PIVlab can be extended with custom functionalities and features using MATLAB's GUI editor and many of MATLAB's pre-built functions. It is, for example, the basis of ‘PTVlab’, an independent software tool that was designed to track particles. Individual functions of PIVlab (e. g. the image correlation code) can be used for custom projects.

PIVlab is highly integrated with MATLAB and benefits from MATLAB's extensive plotting and data handling features. Data can also be exported to generic ASCII files, which can be processed e. g. with Excel (Microsoft, Redmond, WA). As PIVlab can also export binary VTK files, ParaView (Kitware, Inc., Clifton Park, NY) is another suitable option for visualizing and exploring the flow data.