Classification of Fruit Ripeness with Model Descriptor Using Vgg 16 Architecture

Main Article Content

Asep Nana Hermana
Dewi Rosmala
Milda Gustiana Husada

Abstract

The quality of a fruit is largely determined by its level of ripeness. To date, fruit ripeness is still assessed manually, which leads to differences in perception when determining the level of ripeness. A system that can classify fruit ripeness automatically is therefore needed. This research was conducted on four objects, namely apples, oranges, mangoes, and tomatoes. Training used a 70:20:10 data split across four test scenarios; in some scenarios the images were first converted from RGB to the L*a*b color space, while in others they were kept in RGB and trained directly. Training used the CNN VGG16 architecture with transfer learning, in which fine-tuning was applied to block 5 and the classification layer was replaced with a Multi-SVM classifier. The highest accuracy, 92%, was reached in scenario 4 with 90 images per class.
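For illustration only, the pipeline described above can be sketched in Keras and scikit-learn roughly as follows; this is not the authors' code, and the 224x224 input size, optimizer, number of epochs, RBF kernel, and helper names are assumptions.

import cv2
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models
from sklearn.svm import SVC

NUM_CLASSES = 4  # apples, oranges, mangoes, tomatoes

def to_lab(image_bgr):
    # Scenario variant from the abstract: convert an OpenCV BGR image to
    # L*a*b before it is fed to the network.
    return cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB)

# VGG16 backbone without its original classifier; global average pooling
# yields one 512-dimensional feature vector per image.
backbone = VGG16(weights="imagenet", include_top=False, pooling="avg",
                 input_shape=(224, 224, 3))

# Transfer learning: freeze blocks 1-4 and fine-tune only block 5.
for layer in backbone.layers:
    layer.trainable = layer.name.startswith("block5")

# Temporary softmax head, used only to fine-tune block 5 on the fruit images.
head = layers.Dense(NUM_CLASSES, activation="softmax")(backbone.output)
finetune_model = models.Model(backbone.input, head)
finetune_model.compile(optimizer="adam",
                       loss="sparse_categorical_crossentropy",
                       metrics=["accuracy"])

# x_train/x_val/x_test and integer labels are assumed to come from the
# 70:20:10 split described in the abstract.
# finetune_model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=10)

# The softmax layer is then replaced by a multi-class SVM trained on the
# fine-tuned backbone's features.
# svm = SVC(kernel="rbf", decision_function_shape="ovo")
# svm.fit(backbone.predict(x_train), y_train)
# print("test accuracy:", svm.score(backbone.predict(x_test), y_test))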

Article Details

How to Cite
Hermana, A., Rosmala, D., & Husada, M. (2023). Classification of Fruit Ripeness with Model Descriptor Using Vgg 16 Architecture. Journal on Education, 5(3), 5587-5596. https://doi.org/10.31004/joe.v5i3.1315
Section
Articles
