Deep Q Networks for Classification: Performance Analysis on Iris and Diabetes Datasets
DOI: https://doi.org/10.71330/thenucleus.2025.1461

Abstract
This study explores how Deep Q Networks (DQNs) can be applied to classification on standard benchmark datasets. Although DQNs are designed for sequential decision making, the study evaluates their performance on the Iris and Diabetes classification datasets. It investigates DQN behaviour under different degrees of class imbalance and compares results across dataset sizes. Models are assessed using accuracy, precision, recall, F1 score, ROC-AUC, memory consumption, and training time. The results show that DQNs can match conventional algorithms, reaching accuracies between 84% and 95% across the datasets and configurations tested. The study also finds that performance is sensitive to hyperparameter choices and to the class distribution of the data. Because DQNs are more complex than common classifiers such as SVMs and decision trees, their practical advantage on simple classification tasks remains to be demonstrated; nonetheless, the work adds to the growing interest in applying reinforcement learning models to supervised learning problems. The paper highlights the importance of sound evaluation, points out risks of model overfitting, and identifies directions for future work, including broader benchmarking, model interpretability, and hybrid systems.
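To illustrate the general idea, the sketch below shows one common way to cast classification as a reinforcement-learning problem for a DQN; it is a minimal illustration, not the authors' exact setup. Each sample is treated as a one-step episode: the state is the feature vector, the action is the predicted class, and the reward is +1 for a correct prediction and -1 otherwise (the reward scheme, network size, and training schedule are assumptions). Because every episode terminates after a single step, the Q-learning target reduces to the immediate reward. The example assumes PyTorch and scikit-learn and uses the Iris dataset; swapping in the Diabetes dataset only changes the data-loading lines.

import random
import numpy as np
import torch
import torch.nn as nn
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Load and standardize the data, then hold out a stratified test split.
X, y = load_iris(return_X_y=True)
X = StandardScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

n_features, n_actions = X.shape[1], len(np.unique(y))

# Small Q-network: maps a feature vector to one Q-value per class (action).
q_net = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, n_actions))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

replay = []                    # replay buffer of (state, action, reward) tuples
epsilon, batch_size = 1.0, 32  # exploration rate and minibatch size (assumed values)

for epoch in range(200):
    for s, label in zip(X_tr, y_tr):
        state = torch.tensor(s, dtype=torch.float32)
        # Epsilon-greedy action selection over the class labels.
        if random.random() < epsilon:
            action = random.randrange(n_actions)
        else:
            with torch.no_grad():
                action = int(q_net(state).argmax())
        reward = 1.0 if action == label else -1.0
        replay.append((state, action, reward))
        if len(replay) > 5000:        # keep the buffer bounded
            replay.pop(0)

        if len(replay) >= batch_size:
            batch = random.sample(replay, batch_size)
            states = torch.stack([b[0] for b in batch])
            actions = torch.tensor([b[1] for b in batch])
            rewards = torch.tensor([b[2] for b in batch])
            # One-step episodes: the target Q-value is just the observed reward.
            q_pred = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
            loss = nn.functional.mse_loss(q_pred, rewards)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    epsilon = max(0.05, epsilon * 0.98)   # decay exploration over epochs

# Greedy evaluation: predict the class with the highest Q-value.
with torch.no_grad():
    preds = q_net(torch.tensor(X_te, dtype=torch.float32)).argmax(dim=1).numpy()
print("test accuracy:", (preds == y_te).mean())

The same predictions can be passed to standard scikit-learn metric functions (precision, recall, F1, ROC-AUC) to reproduce the kind of evaluation described in the abstract.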