Light-Weight Deep Convolutional Neural Network Model for Classification of Potato Leaf Diseases

Authors

  • Kazeem Sodiq, Department of Computer Engineering, Yaba College of Technology, Lagos, Nigeria
  • Ibrahim Adeyanju, Department of Computer Engineering, Federal University Oye-Ekiti, Nigeria
  • Nnamdi Okomba, Department of Computer Engineering, Federal University Oye-Ekiti, Nigeria
  • Ajagbe Taofik, Department of Computer Science, Lagos State University, Ojo, Lagos, Nigeria
  • Rufai Mohammed, Department of Computer Technology, Yaba College of Technology, Lagos, Nigeria

DOI:

https://doi.org/10.70112/ajes-2025.14.1.4268

Keywords:

Potato Leaf Diseases, Deep Convolutional Neural Network (DCNN), Disease Classification, Edge Devices, Precision Agriculture

Abstract

Potato leaf diseases pose a significant threat to global food security by reducing crop yields and economic productivity. Traditional manual inspection methods are often inefficient and error-prone, particularly in developing countries. Automated deep learning approaches provide a promising alternative for accurate and timely disease detection. This study develops a lightweight deep convolutional neural network (DCNN) for classifying potato leaf diseases, including early blight, late blight, and healthy leaves, while ensuring high accuracy, efficiency, and deployability on edge devices. A dataset of 2,152 potato leaf images, sourced from Kaggle, was preprocessed, augmented, and partitioned into 80% training, 10% validation, and 10% testing sets. A custom DCNN architecture (2.2M trainable parameters) was designed and compared against Xception, ResNet50, and InceptionV3 using precision, recall, F1-score, specificity, accuracy, and Cohen’s Kappa metrics. The proposed model outperformed existing architectures, achieving 97.21% accuracy, 93.92% F1-score, 95.83% precision, 92.33% recall, 98.38% specificity, and 95.00% Kappa score, with a compact size of 25.6 MB. Deployment on a Streamlit-based web application demonstrated real-time classification capabilities, achieving near-perfect accuracy (99.99%) for early and late blight detection. The lightweight DCNN offers an efficient, accurate, and deployable solution for potato disease classification, suitable for edge devices such as smartphones. This system empowers farmers with rapid, automated diagnostics, enabling timely interventions to mitigate crop losses. Future work will focus on extending the model to additional potato species and optimizing deployment for mobile platforms.
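The abstract partitions the 2,152-image dataset into 80% training, 10% validation, and 10% testing. A minimal sketch of such a seeded three-way split is shown below; the file names are placeholders, not the actual Kaggle dataset layout, and the paper does not specify its exact splitting procedure.

```python
import random

def split_dataset(items, train=0.8, val=0.1, seed=42):
    """Shuffle reproducibly, then partition into train/val/test slices."""
    items = list(items)
    random.Random(seed).shuffle(items)   # seeded shuffle for reproducibility
    n = len(items)
    n_train = int(n * train)
    n_val = int(n * val)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])     # remainder becomes the test set

# Placeholder file names standing in for the 2,152 leaf images.
paths = [f"img_{i:04d}.jpg" for i in range(2152)]
train_set, val_set, test_set = split_dataset(paths)
print(len(train_set), len(val_set), len(test_set))  # 1721 215 216
```

With integer truncation the test slice absorbs the rounding remainder, so the three slices always cover the full dataset exactly once.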
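The abstract evaluates models by precision, recall, F1-score, specificity, accuracy, and Cohen's Kappa. As a sketch of how those figures are derived from a multi-class confusion matrix via one-vs-rest counts and macro-averaging, consider the following; the confusion matrix here is illustrative only, not the paper's reported results.

```python
def one_vs_rest_counts(cm, k):
    """TP, FP, FN, TN for class k from confusion matrix cm[true][pred]."""
    n = len(cm)
    tp = cm[k][k]
    fp = sum(cm[i][k] for i in range(n) if i != k)
    fn = sum(cm[k][j] for j in range(n) if j != k)
    tn = sum(cm[i][j] for i in range(n) for j in range(n)
             if i != k and j != k)
    return tp, fp, fn, tn

def macro_metrics(cm):
    """Macro-averaged precision/recall/specificity/F1, plus accuracy and Kappa."""
    n = len(cm)
    total = sum(sum(row) for row in cm)
    prec = rec = spec = f1 = 0.0
    for k in range(n):
        tp, fp, fn, tn = one_vs_rest_counts(cm, k)
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        s = tn / (tn + fp) if tn + fp else 0.0
        prec += p; rec += r; spec += s
        f1 += 2 * p * r / (p + r) if p + r else 0.0
    acc = sum(cm[k][k] for k in range(n)) / total
    # Cohen's Kappa: observed agreement corrected for chance agreement,
    # where chance is estimated from the row and column marginals.
    po = acc
    pe = sum(sum(cm[k]) * sum(cm[i][k] for i in range(n))
             for k in range(n)) / total ** 2
    kappa = (po - pe) / (1 - pe)
    return {"precision": prec / n, "recall": rec / n,
            "specificity": spec / n, "f1": f1 / n,
            "accuracy": acc, "kappa": kappa}

# Illustrative matrix: rows = true class, cols = predicted class, for
# [early blight, late blight, healthy] on a hypothetical 215-image test set.
cm = [[70, 3, 2],
      [4, 68, 1],
      [1, 2, 64]]
print(macro_metrics(cm))
```

Macro-averaging weights each class equally regardless of its support, which is why the paper can report a single precision or specificity figure for a three-class problem.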


Published

10-05-2025

How to Cite

Sodiq, K., Adeyanju, I., Okomba, N., Taofik, A., & Mohammed, R. (2025). Light-Weight Deep Convolutional Neural Network Model for Classification of Potato Leaf Diseases. Asian Journal of Electrical Sciences, 14(1), 35–46. https://doi.org/10.70112/ajes-2025.14.1.4268
