Within the ELENIDS framework for intrusion detection, there is a need to address the "black box" problem inherent in machine-learning-based cybersecurity models. As an ensemble architecture designed for AI-enhanced intrusion detection, ELENIDS requires human-in-the-loop verification: a security analyst must be able to trust and understand the system's alerts. To this end, a multi-faceted explainability framework has been designed. The project aims to implement an xAI framework with three layers of insight, each tailored to a specific stakeholder and use case: (1) global model transparency for developers and auditors; (2) local prediction explanations for security analysts; and (3) human-readable rule extraction for policy creation and formal verification.
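The three layers can be sketched with standard tooling. The following is a minimal illustration, not the ELENIDS implementation: it assumes a scikit-learn random forest as the ensemble, synthetic data, and hypothetical flow-level feature names; the local explanation uses a simple mean-ablation heuristic, and rule extraction is approximated by a shallow surrogate decision tree fitted to the ensemble's predictions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for network-flow features (names are assumptions).
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["duration", "src_bytes", "dst_bytes", "failed_logins"]

ensemble = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# (1) Global transparency: impurity-based feature importances of the ensemble.
global_importance = dict(zip(feature_names, ensemble.feature_importances_))

# (2) Local explanation for a single alert: per-feature contribution,
# approximated by the drop in predicted attack probability when that
# feature is replaced with its dataset mean (a crude ablation, not SHAP).
def local_contributions(model, X_ref, x):
    base = model.predict_proba(x.reshape(1, -1))[0, 1]
    contribs = {}
    for i, name in enumerate(feature_names):
        x_abl = x.copy()
        x_abl[i] = X_ref[:, i].mean()
        contribs[name] = base - model.predict_proba(x_abl.reshape(1, -1))[0, 1]
    return contribs

alert = X[0]
local_expl = local_contributions(ensemble, X, alert)

# (3) Rule extraction: fit a shallow surrogate tree on the ensemble's own
# predictions and render it as human-readable if/else rules.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, ensemble.predict(X))
rules = export_text(surrogate, feature_names=feature_names)
print(rules)
```

Each layer serves its stakeholder: the importance dictionary supports audits, the per-alert contribution dictionary supports analyst triage, and the printed surrogate rules can feed policy creation or formal verification.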