Abstract
The essence of learning is for the learner to attain a significant level of comprehension once the learning process is complete. The quest to achieve this purpose has led to the introduction of several techniques in the conventional learning environment, such as asking questions and conducting tests after class. Technology has also been introduced into learning, yet even with these advancements, learners often fail to reach an optimum state of comprehension after the learning process. This is because present systems cannot model the learner to determine the methods that yield maximum comprehension. This research paper therefore focuses on deriving an improved mathematical model for predicting the learning path that leads a learner to optimum comprehension. Three instructional media (learning paths) are considered for modelling the learner: textual, audio, and a hybrid of audio and video. These enable the improved system to predict the best learning path to optimum comprehension for each learner. The model is developed using Reinforcement Learning and the Markov Decision Process, specifically the Markov Chain approach. Evaluation was carried out by applying the Bellman equation within a Markov Chain transition-state framework, yielding an improved mean value function of 71.7, compared with 46.0 for the existing model, which indicates an enhanced comprehension state for the learning students. The results obtained clearly demonstrate that the improved model was able to predict and assign the best learning path for achieving an optimum comprehension state for learners.
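To illustrate the kind of model the abstract describes, the sketch below treats the three learning paths (textual, audio, audio-and-video) as states of a Markov chain and solves the Bellman equation by value iteration. All transition probabilities, rewards, and the discount factor here are invented for demonstration only; the paper derives its own values, and this is a minimal sketch of the general technique rather than the authors' actual model.

```python
import numpy as np

# States: the three learning paths (instructional media) from the paper.
states = ["textual", "audio", "audio_video"]

# Hypothetical transition matrix: P[i, j] is the assumed probability that a
# learner moves from path i to path j after one learning session.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.5, 0.3],
    [0.1, 0.2, 0.7],
])

# Hypothetical immediate comprehension reward for each path.
R = np.array([30.0, 45.0, 60.0])

gamma = 0.5  # assumed discount factor

# Value iteration on the Bellman equation V = R + gamma * P @ V
# until the value function converges.
V = np.zeros(len(states))
for _ in range(1000):
    V_next = R + gamma * P @ V
    if np.max(np.abs(V_next - V)) < 1e-9:
        V = V_next
        break
    V = V_next

# The recommended learning path is the state with the highest value.
best = states[int(np.argmax(V))]
print(dict(zip(states, np.round(V, 1))), "-> recommended path:", best)
```

With these assumed numbers the hybrid audio-and-video path accrues the highest long-run value, so it would be assigned as the predicted path to optimum comprehension; in the paper, the learned transition probabilities would drive this choice per learner.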

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Copyright (c) 2024 Ifeanyi Isaiah Achi, Chukwuemeka Odi Agwu, Christopher Chizoba Nnamene, Sylvester C. Aniobi, Ifebude Barnabas C., Kelechi Christian Oketa, Godson Kenechukwu Ezeh, John Otozi Ugah