EXPLAINABLE ARTIFICIAL INTELLIGENCE MODELS FOR HIGH-STAKES PREDICTIVE ANALYTICS
Keywords: Explainable Artificial Intelligence, High-Stakes Predictive Analytics, Model Interpretability, Algorithmic Fairness, Trustworthy AI, Responsible Machine Learning

Abstract
The increasing deployment of AI-driven predictive analytics in high-stakes domains such as healthcare, finance, criminal justice, and public policy has intensified concerns about transparency, accountability, and ethical reliability. Complex machine learning models can achieve high predictive accuracy, but their black-box nature undermines trust, regulatory compliance, and accountable decision-making in contexts where errors carry significant social, legal, and economic consequences. Explainable Artificial Intelligence (XAI) has emerged as a crucial paradigm for addressing these issues by making model behavior interpretable without unduly sacrificing predictive performance. This paper provides a systematic review of explainable AI models for high-stakes predictive analytics, grounded in a mixed-methods research process combining conceptual analysis, model comparison, and empirical testing. It compares black-box models with intrinsically interpretable and hybrid explainable models across several dimensions, including predictive performance, explainability, bias and fairness, and stakeholder interpretability. The findings show that although black-box models retain a marginal accuracy advantage, explainable models achieve competitive results and are markedly superior in transparency, bias detection, ethical alignment, and end-user trust. Moreover, explainability strengthens human oversight and confidence in decisions, a decisive factor for the institutional adoption of AI systems in high-risk environments.
The paper contributes a comprehensive evaluation framework and empirical justification for explainable AI as an essential requirement of ethical and responsible high-stakes predictive analytics.
License
Copyright (c) 2025 International Journal of Computational Intelligence and Emerging Technologies

This work, like all articles published in the International Journal of Computational Intelligence and Emerging Technologies, is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND 4.0).
This license allows others to download and share the work with proper attribution, but it cannot be changed in any way or used commercially. Authors retain the copyright of their work, while granting the journal the right of first publication.
