Machine vision based quality evaluation of Iyokan orange fruit using neural networks
Introduction
In Japan, fruit classification is an essential post-harvest operation. Many varieties of fruit are graded based only on external factors such as size, mass, and shape; their internal qualities, however, cannot be determined by this grading operation. There is therefore a need to measure internal qualities, such as sugar content and acid content, non-destructively. Peach fruit is classified into several grades by sugar content, measured from near-infrared light reflected from the inside of the fruit through its thin skin. It is difficult to measure the sugar content of orange fruit by the same method, however, because its skin is thick and ordinary light cannot penetrate it effectively.
Most farmers claim that they can identify sweet orange fruits from experience. Generally speaking, a sweet orange fruit, especially of the variety Iyokan, is believed to have a reddish color, medium size, low profile, and glossy surface. These criteria, however, are ambiguous and vary among individual farmers and locations. Furthermore, the factors influencing sugar content appear to interact with one another.
There are many applications of machine vision systems as substitutes for human visual inspection (Marchant, 1996; Marchant et al., 1997; Davies, 1997; McFarlane et al., 1997). Machine vision can replace human visual judgment with a more consistent and reliable system, mainly because a machine works independently of subjective factors. While the size or weight of an orange fruit can be measured by other means, visual attributes such as redness, texture, and shape can be obtained using machine vision. These measurements can be used to train neural networks that automatically evaluate Iyokan orange quality.
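As a rough illustration of the kind of image features involved, the redness (R/G ratio) and profile (height-to-width ratio, H/W) of a fruit silhouette can be computed from an RGB image and a fruit-region mask. This is a minimal sketch under our own assumptions; the function name, interface, and segmentation approach are illustrative, not taken from the paper.

```python
import numpy as np

def fruit_features(image, mask):
    """Return (mean R/G ratio, H/W ratio) for the masked fruit region.

    `image`: (H, W, 3) uint8 RGB array; `mask`: boolean array that is
    True on fruit pixels.  Both the mean R/G (redness) and the
    bounding-box height-to-width ratio (profile) are simple proxies
    for the visual criteria farmers report using.
    """
    img = image.astype(np.float64)
    r = img[..., 0][mask]
    g = img[..., 1][mask]
    rg = float(np.mean(r / np.maximum(g, 1.0)))   # guard against zero green

    rows, cols = np.nonzero(mask)                 # silhouette bounding box
    hw = (rows.max() - rows.min() + 1) / (cols.max() - cols.min() + 1)
    return rg, hw

# Synthetic example: a reddish region 3 rows high and 6 columns wide.
image = np.zeros((8, 8, 3), dtype=np.uint8)
mask = np.zeros((8, 8), dtype=bool)
mask[2:5, 1:7] = True                             # 3 rows x 6 columns
image[mask] = (200, 100, 50)                      # R=200, G=100 -> R/G = 2.0
rg, hw = fruit_features(image, mask)
print(rg, hw)  # 2.0 0.5
```

In practice the mask would come from segmenting the fruit against the background, and additional features (texture, Feret's diameter ratio) would be extracted alongside these.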
Neural networks are characterized by their self-learning capability. A neural network is presented with a training set consisting of a group of examples from which it can learn. The training data, known as training patterns, are represented as vectors and can be obtained from machine vision images. Supervised learning is the most common training procedure: the network is presented with an input pattern together with the target output for that pattern, where the target output is the correct outcome corresponding to that input. In response to each such pair, the network adjusts the values of its synapse weights; this procedure is called neural network training. If training is successful, the trained network produces the correct answer in response to each input pattern. Neural networks thus offer the potential for building computer models without the need for programming, because they are capable of learning from examples (Murase et al., 1998). The learning performance of the network is of particular importance to model builders, especially where the training data contain significant measurement noise. The Kalman filter can be used as a learning algorithm for neural networks (Murase and Koyama, 1991; Murase et al., 1992; Murase et al., 1994).
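The supervised-learning loop described above can be sketched with a minimal example: a single-layer network trained by gradient descent on synthetic data. This is not the Kalman-filter algorithm the authors cite, and the data are made up; in the paper's setting the inputs would be image features and the target the measured sugar content.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training patterns: 3 input features -> 1 target value.
X = rng.normal(size=(50, 3))
true_w = np.array([0.5, -1.0, 2.0])   # hidden mapping the network must learn
y = X @ true_w                        # target outputs for each input pattern

w = np.zeros(3)          # "synapse" weights, initialized to zero
lr = 0.1                 # learning rate
for _ in range(500):     # repeated presentation of the training set
    pred = X @ w                          # network output
    grad = X.T @ (pred - y) / len(y)      # mean-squared-error gradient
    w -= lr * grad                        # weight-adjustment step

print(w)  # after training, w approximates true_w
```

A multi-layer network with nonlinear activations follows the same pattern, with the gradient propagated back through each layer; the Kalman-filter approach replaces the fixed-step gradient update with a recursive state estimate of the weights, which can be more robust to noisy measurements.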
In this study, the feasibility of quality evaluation of Iyokan orange fruit using machine vision and neural network techniques was investigated for the purpose of automating the orange fruit grading operation.
Section snippets
Materials
Thirty Iyokan orange fruits (Miyauchi Iyokan), harvested at an orchard in Ehime Prefecture (where 90% of Japan's Iyokan oranges are produced), were used. A color TV camera and an image grabber board with 256×256 pixel capture capability were connected to a PC as the vision system. The sugar content and pH of the fruits were measured using standard equipment at the end of the experiment; the pH values were taken as an indication of acid content.
Fig. 1 shows sample images of Iyokan orange
Relationship between independent parameters and sugar content
Fig. 2 and Fig. 3 show the relationships of R/G and H/W with sugar content, respectively. The R/G value shows a weak correlation with sugar content; the distribution nevertheless suggests that a higher R/G value (a more reddish fruit) corresponds to higher sugar content. Fig. 3 shows that fruits with a larger H/W ratio tend to have lower sugar content. The same result was obtained from the relation between Feret's diameter ratio and sugar content. Size or weight was also confirmed by
Conclusion
The feasibility of evaluating the quality of Iyokan orange fruit non-destructively using machine vision systems and neural networks was investigated. It was concluded that more image features than those used in this study, together with a modeling tool such as neural networks, are necessary to construct a more accurate non-destructive fruit quality evaluation system.
References
Marchant (1996). Tracking of row structure in three crops using image analysis. Comput. Electron. Agric.
Marchant et al. (1997). Row-following accuracy of an autonomous vision-guided agricultural vehicle. Comput. Electron. Agric.
McFarlane et al. (1997). Image analysis for pruning of long wood grape vines. J. Agric. Eng. Res.
Davies (1997). Machine Vision: Theory, Algorithms, Practicalities.