
Explainable AI and robust forecasting of global salary trends: Addressing data drift and unseen categories with tree-based models

This article studies salary prediction under distributional drift using explainable boosting models and hybrid forecasting. We integrate unseen-aware feature engineering, robust objectives, SHAP-based interpretability, drift detection, and time-series forecasting (Prophet/SARIMAX) on multi-year data (2020–2024), and report a comprehensive evaluation aligned with typical MMC guidelines. Because modern salary data are heterogeneous, heavy-tailed, and non-stationary, we combine robust tree-based learners with drift monitoring and explainable forecasting to prioritize
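
To make the kind of pipeline described above more concrete, the following is a minimal sketch, not the authors' implementation: it uses hypothetical synthetic data and column names (year, experience_years, remote_ratio, salary_usd), a tree-based regressor with a robust Huber objective, SHAP attributions for interpretability, and a Population Stability Index check as one simple form of drift detection. The unseen-category handling and the Prophet/SARIMAX forecasting stage are omitted.

import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Hypothetical multi-year salary data with heavy-tailed noise.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "year": rng.choice([2020, 2021, 2022, 2023, 2024], n),
    "experience_years": rng.integers(0, 25, n),
    "remote_ratio": rng.choice([0, 50, 100], n),
})
df["salary_usd"] = (
    40_000
    + 3_000 * df["experience_years"]
    + 2_000 * (df["year"] - 2020)
    + rng.standard_t(3, n) * 8_000
)

X = df[["year", "experience_years", "remote_ratio"]]
y = df["salary_usd"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Robust objective: Huber loss is less sensitive to extreme salaries.
model = GradientBoostingRegressor(loss="huber", random_state=0)
model.fit(X_train, y_train)

# SHAP attributions for the tree ensemble (global importance summary).
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test.iloc[:200])
mean_abs_shap = np.abs(shap_values).mean(axis=0)
print(dict(zip(X.columns, mean_abs_shap.round(1))))

# Simple drift check: Population Stability Index between 2020 and 2024 salaries.
def psi(expected, actual, bins=10):
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    e_pct = np.histogram(expected, edges)[0] / len(expected) + 1e-6
    a_pct = np.histogram(actual, edges)[0] / len(actual) + 1e-6
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

drift = psi(df.loc[df["year"] == 2020, "salary_usd"],
            df.loc[df["year"] == 2024, "salary_usd"])
print(f"PSI 2020 vs 2024: {drift:.3f}")  # values above ~0.25 are commonly read as major drift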

Real-Time Anomaly Detection in Distributed IoT Systems: A Comprehensive Review and Comparative Analysis

The rapid expansion of the Internet of Things (IoT) has resulted in a substantial increase in the volume of diverse data generated by distributed devices. This extensive data stream makes it increasingly important to implement robust and efficient real-time anomaly detection techniques that can promptly flag issues before they escalate into critical system failures.