
Machine learning (ML) is a key part of improving healthcare analytics tasks such as resource planning, disease diagnosis, prognosis, and risk stratification. However powerful ML models are, uncertainty is inherent and ubiquitous in them because their predictions are affected by noisy data, model limitations, and unseen scenarios. Among the most widely used tools for managing and quantifying this uncertainty are ensemble tree-based models, including random forest (RF), gradient boosting machines (GBM), and extreme gradient boosting (XGBoost). These models are accurate, interpretable, and efficient on structured data; unlike deep neural networks, which are best suited to large volumes of unstructured data such as images and text, they have lower computational requirements. They are also robust to noise and can handle the large, complicated datasets that are common in healthcare. Ensemble tree approaches differ from traditional ML models in that they efficiently capture complex, nonlinear relationships and subtle interactions among features, which leads to highly accurate and generalizable predictions. By combining the strengths of multiple base learners, ensemble models reduce overfitting, improve stability, and generalize better to unseen data than traditional ML models; ensembles of classifiers have been shown, for example, to outperform individual classifiers in predicting heart disease. Such reliability is essential for decision-making, which makes these models especially useful in healthcare.

Despite these strengths, ensemble tree-based models have a major drawback: they are difficult to interpret, particularly when they contain large numbers of constituent trees and features. This "black box" nature often leads practitioners to avoid them during model selection. To address this, explainable AI (XAI) techniques such as Shapley additive explanations (SHAP), or Shapley values, which are rooted in cooperative game theory, have emerged as principled frameworks for attributing the contribution of each feature to individual predictions in ML models.

The field of XAI is rapidly evolving. Counterfactual paths (CPATH) identifies feature permutations that influence model predictions and draws on domain knowledge graphs; total causal effect calculation for fuzzy cognitive maps (TCEC-FCM) uses graph traversal techniques to compute causal effects, improving system transparency; and reciprocal human-machine learning (RHML) fosters continuous learning between humans and AI models, improving model performance and decision-making. Other recent methods include diverse counterfactual explanations (DiCE), which generates counterfactual explanations that align with human cognitive processes and are valued for understanding a model's behavior (a brief usage sketch follows this survey). Logic tensor networks (LTN) is a framework rooted in neural-symbolic AI and designed for learning and logical reasoning; applied to XAI, it facilitates interactive explainability and model revision. The template system for natural language explanations (TS4NLE) targets the presentation and rendering of explanations derived from other XAI outputs, prioritizing human comprehensibility: it can process the structured output of any approach and render it into natural language explanations (NLEs) tailored to the user via a template system.
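To make the counterfactual idea concrete, the following is a minimal sketch using the open-source dice-ml package (the reference implementation of DiCE). It is illustrative only: the synthetic patient features, the random-forest classifier, the thresholds, and all parameter choices are assumptions made for the example, not a prescribed pipeline.

```python
import dice_ml
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Hypothetical structured "patient" data; names and thresholds are illustrative.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.integers(30, 85, n).astype(float),
    "systolic_bp": rng.normal(130.0, 15.0, n),
})
df["high_risk"] = ((df["age"] > 60) & (df["systolic_bp"] > 140)).astype(int)

features = ["age", "systolic_bp"]
clf = RandomForestClassifier(random_state=0).fit(df[features], df["high_risk"])

# DiCE needs a description of the data and a wrapped model.
data = dice_ml.Data(dataframe=df, continuous_features=features,
                    outcome_name="high_risk")
model = dice_ml.Model(model=clf, backend="sklearn")
explainer = dice_ml.Dice(data, model, method="random")

# Counterfactual question: what minimal feature changes would flip
# this high-risk patient's predicted class?
query = df.loc[df["high_risk"] == 1, features].head(1)
cfs = explainer.generate_counterfactuals(query, total_CFs=3,
                                         desired_class="opposite")
cfs.visualize_as_dataframe(show_only_changes=True)
```

The returned counterfactuals show, per feature, the smallest changes that alter the prediction, which is precisely the human-facing question counterfactual methods are designed to answer.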
These frameworks all aim to make AI systems more transparent and aligned with human understanding. Despite these advances, SHAP remains a popular candidate for studying XAI because of its strong theoretical foundation and widespread adoption. However, computing Shapley values exactly is expensive for complex models, since it requires evaluating the model over every possible subset (coalition) of features, a cost that grows exponentially with the number of features. Fortunately, efficient methods for computing them for particular model classes have recently emerged, most notably TreeSHAP. TreeSHAP calculates Shapley values for tree-based models, including decision trees, RFs, and XGBoost, quickly and exactly: it exploits the structure of the trees themselves, so explanations are computed far faster than with model-agnostic methods. This makes it practical to explain predictions over large datasets, which matters in fields where understanding predictions is paramount, such as healthcare and finance. SHAP values provide a clear framework for determining how each feature contributes to an individual prediction, offering insight into the decision-making process behind complex ensemble models. SHAP and similar methods not only encourage trust but also make advanced ML models easier to adopt in healthcare by making them easier to comprehend.
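For reference, the Shapley value of feature i for a model f and instance x is the standard game-theoretic average of its marginal contributions over all coalitions S of the remaining features F \ {i}:

$$\phi_i(f, x) = \sum_{S \subseteq F \setminus \{i\}} \frac{|S|!\,(|F| - |S| - 1)!}{|F|!}\left[f_{S \cup \{i\}}(x_{S \cup \{i\}}) - f_S(x_S)\right]$$

The sum ranges over $2^{|F|-1}$ subsets, which is what makes naive computation intractable and TreeSHAP's polynomial-time algorithm valuable. The sketch below illustrates the workflow, assuming the Python shap and xgboost packages; the synthetic features, labels, and hyperparameters are illustrative assumptions rather than a recommended setup.

```python
import numpy as np
import pandas as pd
import shap
import xgboost as xgb
from sklearn.model_selection import train_test_split

# Hypothetical structured "patient" data; column names are illustrative only.
rng = np.random.default_rng(0)
n = 1000
X = pd.DataFrame({
    "age": rng.integers(30, 85, n),
    "systolic_bp": rng.normal(130.0, 15.0, n),
    "cholesterol": rng.normal(200.0, 30.0, n),
    "bmi": rng.normal(27.0, 4.0, n),
})
# Synthetic risk label with a nonlinear interaction (age x blood pressure),
# the kind of structure ensemble trees capture well.
risk = (X["age"] > 60) & (X["systolic_bp"] > 140)
y = (risk | (X["cholesterol"] > 250)).astype(int).to_numpy()

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Gradient-boosted tree ensemble (one of the model families named above).
model = xgb.XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
model.fit(X_tr, y_tr)

# TreeSHAP exploits the tree structure to compute exact Shapley values
# in polynomial rather than exponential time.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)  # (n_samples, n_features), log-odds units

# Per-feature contribution to the first test patient's prediction;
# contributions plus explainer.expected_value sum to the model's raw output.
for name, value in zip(X.columns, shap_values[0]):
    print(f"{name:>12s}: {value:+.3f}")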