Conjugate gradient and approximate Newton methods for an optimal probabilistic neural network for food color classification
Younes Chtioui, Suranjan Panigrahi, Ronald A. Marsh
The probabilistic neural network (PNN) is based on the estimation of probability density functions. This estimation uses smoothing parameters that represent the widths of the activation functions. A two-step numerical procedure is developed to optimize the smoothing parameters of the PNN: a rough optimization by the conjugate gradient method followed by a fine optimization by the approximate Newton method. The aim is to compare the classification performance of the improved PNN with that of the standard back-propagation neural network (BPNN). The comparison is performed on a food quality problem: the classification of French fries into three color classes (light, normal, and dark). The optimized PNN correctly classifies 96.19% of the test data, whereas the BPNN classifies only 93.27% of the same data. Moreover, the PNN is more stable than the BPNN with respect to random initialization. However, the optimized PNN requires 1464 s for training, compared with only 71 s for the BPNN.
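For readers unfamiliar with the PNN, the sketch below illustrates the core idea under simplifying assumptions: a Parzen-window classifier with a single global Gaussian smoothing parameter sigma, tuned here by a naive one-dimensional grid search standing in for the paper's two-step conjugate-gradient / approximate-Newton procedure, which is not reproduced here. The function names and toy data are illustrative, not the authors' code or dataset.

import numpy as np

def pnn_class_scores(X_train, y_train, x, sigma):
    # Per-class Parzen-window density estimate with Gaussian kernels.
    scores = {}
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        sq_dists = np.sum((Xc - x) ** 2, axis=1)
        # Gaussian activation exp(-||x - x_i||^2 / (2 sigma^2)), averaged over
        # the class's training patterns; sigma is the smoothing parameter.
        scores[c] = np.mean(np.exp(-sq_dists / (2.0 * sigma ** 2)))
    return scores

def pnn_predict(X_train, y_train, x, sigma):
    # Assign x to the class with the largest estimated density.
    scores = pnn_class_scores(X_train, y_train, x, sigma)
    return max(scores, key=scores.get)

# Toy stand-in for the color-feature data: three classes in a 2-D feature space.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, size=(30, 2)) for m in (0.0, 1.0, 2.0)])
y = np.repeat([0, 1, 2], 30)

# Naive 1-D grid search over sigma, a stand-in for the paper's
# conjugate-gradient / approximate-Newton optimization of the smoothing
# parameters. Leave-one-out accuracy is the selection criterion here.
best_sigma, best_acc = None, -1.0
for sigma in np.linspace(0.05, 1.0, 20):
    preds = [pnn_predict(np.delete(X, i, axis=0), np.delete(y, i), X[i], sigma)
             for i in range(len(X))]
    acc = float(np.mean(np.array(preds) == y))
    if acc > best_acc:
        best_sigma, best_acc = sigma, acc

print(f"best sigma = {best_sigma:.2f}, leave-one-out accuracy = {best_acc:.2%}")

A grid search is practical only because sigma is one-dimensional here; with one smoothing parameter per input feature, as in the paper, gradient-based methods such as conjugate gradient become the natural choice.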
Younes Chtioui, Suranjan Panigrahi, and Ronald A. Marsh "Conjugate gradient and approximate Newton methods for an optimal probabilistic neural network for food color classification," Optical Engineering 37(11), (1 November 1998). https://doi.org/10.1117/1.601972
Published: 1 November 1998
CITATIONS
Cited by 28 scholarly publications.
KEYWORDS
Neural networks
Algorithm development
Optical engineering
Image classification
Optimization (mathematics)
Expectation maximization algorithms
Agriculture