Volume 2 Issue 4 | 2025 | View PDF
Paper Id: IJMSM-V2I4P102
doi: 10.71141/30485037/V2I4P102
Predicting Medication-Related Risk in Chronic Disease Management Using Machine Learning
Blessing Malcolm Awukam, Chinaza Felicia Nwakobe, Sunday Okafor
Citation:
Blessing Malcolm Awukam, Chinaza Felicia Nwakobe, Sunday Okafor, "Predicting Medication-Related Risk in Chronic Disease Management Using Machine Learning," International Journal of Multidisciplinary on Science and Management, Vol. 2, No. 4, pp. 8-21, 2025.
Abstract:
The Pressure Area Method (PAM) is a simplified yet code-recognized analytical technique employed to evaluate the structural adequacy of nozzle penetrations in pressure vessels. Based on a limit load concept, PAM equates the internal pressure force acting on the removed area of the vessel wall to the resisting force provided by the available reinforcement, including both the parent shell and any additional reinforcement such as pads. This study presents a comprehensive overview of the PAM framework, applicable to both cylindrical and spherical shells, and outlines the governing equations for configurations with and without reinforcing pads. A detailed nomenclature and explanation of code-dependent k-factors are provided to support implementation. A worked example involving a flush set-in nozzle with a reinforcing pad in a cylindrical shell is included to demonstrate the methodology’s practical application. Results show that the nozzle intersection governs the Maximum Allowable Working Pressure (MAWP), highlighting the method’s conservative yet effective design approach. While PAM does not explicitly account for stress concentrations or external load effects, it remains a robust, mechanics-based tool widely adopted in pressure vessel design standards. Its straightforward nature makes it highly suitable for spreadsheet-based implementation and early-stage design validation in both European and international contexts.
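The limit-load balance described above can be sketched in a few lines: internal pressure acting on the pressure-loaded (removed) area is equated to the resisting force of the available reinforcement, and solving for pressure gives the MAWP contribution of the nozzle intersection. This is a minimal illustrative sketch of the basic balance only; the function name, variable names, and numerical inputs are hypothetical, and it omits the code-dependent k-factors and pad-specific terms mentioned in the text.

```python
def pam_mawp(pressure_area, reinforcement):
    """Maximum Allowable Working Pressure from the simplest PAM balance,
    P * A_p <= sum(f_i * A_f_i).

    pressure_area : pressure-loaded area A_p (mm^2)
    reinforcement : list of (allowable_stress_MPa, area_mm2) pairs for
                    the parent shell, nozzle neck, reinforcing pad, etc.

    Illustrative sketch only; real code formulations (e.g. EN 13445-3,
    ASME VIII) include k-factors and additional terms.
    """
    # Total resisting force supplied by all reinforcement areas
    resisting_force = sum(f * a for f, a in reinforcement)
    # Equate P * A_p to the resisting force and solve for P (MPa)
    return resisting_force / pressure_area


# Hypothetical flush set-in nozzle with a reinforcing pad (all values assumed):
mawp = pam_mawp(
    pressure_area=60_000.0,          # A_p, mm^2
    reinforcement=[
        (137.0, 900.0),              # parent shell: f_s, A_fs
        (118.0, 400.0),              # nozzle neck:  f_b, A_fb
        (137.0, 350.0),              # pad:          f_p, A_fp
    ],
)
print(f"MAWP at the nozzle intersection ~ {mawp:.2f} MPa")
```

If this intersection MAWP is lower than that of the unpenetrated shell, the nozzle governs the vessel MAWP, which is the situation reported in the worked example.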
Keywords:
Machine Learning, Predictive Modelling, Medication Safety, Chronic Disease Management, Adverse Drug Events, Polypharmacy, Healthcare Informatics.
References:
1. Q. Hu, J. Sun, H. Zhang, X. Li, L. Chen, and Y. Liu, "Predicting Adverse Drug Event Using Machine Learning Based on Electronic Health Records: A Systematic Review and Meta-Analysis," Frontiers in Pharmacology, vol. 15, 2024.
2. S. Dey, H. Luo, A. Fokoue, J. Hu, and P. Zhang, "Predicting Adverse Drug Reactions through Interpretable Deep Learning Framework," BMC Bioinformatics, vol. 19, no. 1, p. 476, 2018.
3. J. Denck, F. Boehm, S. Eichhorn, A. Kramer, and F. Martin, "Machine-Learning-Based Adverse Drug Event Prediction Using Clinical Data: A Systematic Evaluation," Computer Methods and Programs in Biomedicine, vol. 239, 2023.
4. A. Farnoush, H. S. Alzahrani, R. Mansour, and S. Alotaibi, "Prediction of Adverse Drug Reactions Using Demographic and Non-Clinical Drug Characteristics in FAERS Data," Scientific Reports, vol. 14, no. 1, 2024.
5. A. Patel, S. Ramamoorthy, and J. H. Brown, "Predictive Modeling of Drug-Related Adverse Events with Limited Features in Real-World Electronic Health Record Data," CPT: Pharmacometrics & Systems Pharmacology, vol. 13, no. 6, pp. 567–578, 2024.
6. J. Amann, D. Vetter, S. N. Blomberg, H. C. Christensen, M. Coffee, S. Gerke, C. D. McLennan, and F. Blumenthal-Barby, "To Explain or Not to Explain? Artificial Intelligence Explainability in Clinical Decision Support Systems," PLOS Digital Health, vol. 1, no. 2, 2022.
7. S. Liu, Y. Chen, Z. Zhang, J. H. Wong, and L. Wei, "Leveraging Explainable Artificial Intelligence to Optimize Alert Criteria in Clinical Decision Support," Journal of the American Medical Informatics Association, vol. 31, no. 4, pp. 968–978, 2024.
8. Q. Abbas, A. T. Alenezi, and M. K. Khan, "Explainable Artificial Intelligence in Clinical Decision Support Systems: State of the Art and Challenges," Healthcare (Basel), vol. 13, no. 17, 2025.
9. M. Khosravi, S. A. Shirmohammadi, H. R. Arabnia, and A. M. Rahmani, "Artificial Intelligence and Decision-Making in Healthcare: Opportunities, Challenges, and Ethical Implications," Frontiers in Artificial Intelligence, vol. 7, 2024.
10. N. Liu, A. A. Syed, Y. Liu, X. Zhang, and J. Jiang, "Machine Learning Models to Detect and Predict Patient Safety Events: A Systematic Review," International Journal of Medical Informatics, vol. 181, p. 105348, 2023.
11. L. Pierson, D. M. Cutler, and S. Obermeyer, "Algorithmic Bias and Fairness in Healthcare Machine Learning," Annals of Internal Medicine, vol. 176, no. 4, pp. 573–580, 2023.
12. J. H. Holmes, C. D. Challen, and A. Tsamados, "How Explainable Artificial Intelligence Can Increase or Decrease Clinician Trust: A Mixed-Methods Study," JMIR AI, vol. 3, no. 1, e53207, 2024.
13. N. Norori, Q. Hu, F. D. Faraci, and A. Tzovara, "Addressing Bias in Big Data and AI for Healthcare: A Call for Open Science," Patterns, vol. 2, no. 10, 2021.
14. Z. Obermeyer, B. Powers, C. Vogeli, and S. Mullainathan, "Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations," Science, vol. 366, no. 6464, pp. 447–453, 2019.
15. M. T. Ribeiro, S. Singh, and C. Guestrin, "‘Why Should I Trust You?’: Explaining the Predictions of any Classifier," Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), San Francisco, CA, USA, pp. 1135–1144, 2016.
16. J. Amann, S. Blasimme, E. Vayena, A. Frey, and S. Madai, "Explainability for Artificial Intelligence in Healthcare: A Multidisciplinary Perspective," BMC Medical Informatics and Decision Making, vol. 20, no. 1, 2020.
17. K. H. Lee, J. Yang, and X. Li, "Continuous Learning Systems for Healthcare AI: Model Maintenance, Monitoring, and Governance," Frontiers in Digital Health, vol. 6, 2024.
18. M. P. Sendak, W. Ratliff, D. Sarro, E. Alderton, J. Futoma, M. Gao, M. Nichols, M. Revoir, and S. Balu, "Real-World Integration of Machine Learning Models into Clinical Care: Lessons from a Medication Risk Prediction System," BMJ Health Care Informatics, vol. 27, no. 1, 2020.
19. S. Liu, D. Han, and X. Zhang, "Explainable Machine Learning for Clinical Medication Risk Prediction," Artificial Intelligence in Medicine, vol. 142, p. 102596, 2023.
20. E. Nasarian, M. Mahdavi, A. Bayat, and P. Moradi, "Designing Interpretable Machine Learning Systems to Enhance Trust in Clinical Decision Support," Computer Methods and Programs in Biomedicine, vol. 242, 2024.
21. N. H. Shah and D. Magnus, "Implementing Machine Learning in Healthcare: Ethical Challenges and Opportunities," New England Journal of Medicine, vol. 386, no. 13, pp. 1211–1218, 2022.
22. W. N. Price and I. G. Cohen, "Privacy in the Age of Medical Big Data," Nature Medicine, vol. 25, no. 1, pp. 37–43, 2019.
23. P. S. Rajpurkar, E. Chen, M. Banerjee, and C. W. Johnson, "AI in Health and Medicine: Current Challenges and Future Directions," Nature Medicine, vol. 28, no. 12, pp. 2493–2506, 2022.
24. R. Challen, J. Denny, M. Pitt, L. Gompels, T. Edwards, and K. Tsaneva-Atanasova, "Artificial Intelligence, Bias and Clinical Safety," BMJ Quality & Safety, vol. 28, no. 3, pp. 231–237, 2019.
25. V. Hassija, V. Chamola, and A. Mahapatra, "Interpreting Black-Box Models: A Comprehensive Review on Explainable Artificial Intelligence (XAI) Techniques and Applications," Cognitive Computation, vol. 16, pp. 45–74, 2024.
26. A. Mahmud, M. Kaiser, T. McGinnity, and A. Hussain, "Deep Learning in Mining Biological Data: Ethical Implications and Interpretability," Cognitive Computation, vol. 13, pp. 1–33, 2021.
27. A. Tolera, B. Gebre, M. Kebede, and D. Tesfaye, "Barriers to Healthcare Data Quality and Recommendations in Public Health Facilities: A Qualitative Study," Frontiers in Digital Health, vol. 6, p. 1261031, 2024.