Explainable Machine Learning for Detecting Malicious Student Behavior in Campus Networks
Abstract
As digital infrastructure in higher education expands, campus networks face growing threats from malicious student behaviors such as unauthorized resource access and exam-related cheating. While machine learning (ML) models have shown promise in anomaly detection, their lack of interpretability undermines trust and limits deployment in sensitive academic environments. This study proposes a hybrid explainable ML framework that integrates XGBoost and LSTM models for behavior classification, augmented with SHAP for global and LIME for local interpretability. Evaluated on more than 2.3 million real-world campus network sessions, the system achieves an F1-score of 0.887 and an AUC-ROC of 0.931 while significantly improving administrator trust scores. A live deployment during examination periods further demonstrates its practical value, reducing incident response times and false-positive rates and supporting proportional policy enforcement. The results highlight the operational, ethical, and governance benefits of embedding explainability in campus cybersecurity systems.
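To make the explainability pairing concrete, the sketch below shows how a tree-based detector can be combined with SHAP for global feature attribution and LIME for per-session explanations. This is a minimal illustration of the general technique, not the study's released code: the feature names, synthetic data, and labels are assumptions invented for the example.

```python
# Minimal sketch (illustrative, not the authors' code): an XGBoost
# classifier on tabular session features, explained globally with SHAP
# and locally with LIME. Feature names and data are synthetic assumptions.
import numpy as np
import xgboost as xgb
import shap
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
# Hypothetical per-session features; the paper's real feature set may differ.
feature_names = ["bytes_out", "session_duration", "night_ratio", "dst_entropy"]
X = rng.random((1000, 4))
y = (X[:, 0] + X[:, 2] > 1.0).astype(int)  # toy label: 1 = malicious

model = xgb.XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
model.fit(X, y)

# Global interpretability: mean |SHAP value| per feature across all sessions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
global_importance = np.abs(shap_values).mean(axis=0)
for name, imp in sorted(zip(feature_names, global_importance), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")

# Local interpretability: a LIME explanation for one flagged session,
# listing the feature conditions that drove this single prediction.
lime_explainer = LimeTabularExplainer(
    X,
    feature_names=feature_names,
    class_names=["benign", "malicious"],
    mode="classification",
)
exp = lime_explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(exp.as_list())
```

In a deployment like the one described, the global SHAP ranking would support policy-level review by administrators, while the per-session LIME output would accompany individual alerts raised during exam periods.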