Overview
Seculyze leverages advanced machine learning (ML) to automatically classify cyber alerts into actionable categories, helping security teams focus on what truly matters. This feature, known as Tuning, dramatically reduces alert fatigue by distinguishing between:
✅ True Positive – Real threats requiring attention
❓ Undetermined – Alerts needing human review
❌ False Positive – Benign activity or noise
In the Seculyze interface, each incident is tagged with a chip reflecting the ML model's determination.
Purpose of Tuning
The goal of Tuning is to optimize detection accuracy while reducing time spent on irrelevant alerts. Our system learns from historical behavior and threat intelligence, allowing for:
Faster response to real threats
Fewer manual investigations
Continuous adaptation to new environments
How the ML Model Works
Architecture
The core of our system is a neural network designed to recognize complex patterns in cybersecurity alert data. Key components include:
Input Layer: Ingests normalized alert features using Batch Normalization
Hidden Layers: Flexible depth and size, activated with GELU functions to model non-linear patterns
Output Layer: A Variational Bayesian Linear (VBLinear) layer for uncertainty-aware classification
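A minimal PyTorch sketch of this kind of architecture is shown below. The layer sizes, initialization, and the simplified VBLinear implementation are illustrative assumptions for demonstration, not the production Seculyze model.

```python
# Illustrative sketch only: layer sizes and the VBLinear implementation are
# assumptions for demonstration, not the production model.
import torch
import torch.nn as nn
import torch.nn.functional as F


class VBLinear(nn.Module):
    """Simplified variational Bayesian linear layer: weights are sampled from
    a learned Gaussian, so repeated forward passes give slightly different
    outputs, and their spread can be read as model uncertainty."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight_mu = nn.Parameter(torch.empty(out_features, in_features))
        self.weight_logvar = nn.Parameter(torch.full((out_features, in_features), -5.0))
        self.bias = nn.Parameter(torch.zeros(out_features))
        nn.init.xavier_uniform_(self.weight_mu)

    def forward(self, x):
        # Reparameterization trick: sample weights as mu + sigma * eps
        sigma = torch.exp(0.5 * self.weight_logvar)
        weight = self.weight_mu + sigma * torch.randn_like(sigma)
        return F.linear(x, weight, self.bias)


class AlertClassifier(nn.Module):
    """Feed-forward network: BatchNorm on the input features, GELU-activated
    hidden layers of configurable depth and width, VBLinear output head."""

    def __init__(self, n_features, hidden_sizes=(128, 64)):
        super().__init__()
        layers = [nn.BatchNorm1d(n_features)]
        prev = n_features
        for size in hidden_sizes:
            layers += [nn.Linear(prev, size), nn.GELU()]
            prev = size
        self.body = nn.Sequential(*layers)
        self.head = VBLinear(prev, 1)

    def forward(self, x):
        # Sigmoid squashes the logit into the 0-1 score described below.
        return torch.sigmoid(self.head(self.body(x)))
```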
Output
The model outputs a score between 0 and 1, which is used to determine:
True Positive (score above an upper threshold S1)
False Positive (score below a lower threshold S2)
Undetermined (when the model's uncertainty exceeds a defined threshold)
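As a rough sketch of how such a decision rule could look, building on the AlertClassifier sketch above: the threshold values (s1, s2, u_max) and the sampling-based uncertainty estimate are illustrative assumptions.

```python
# Illustrative decision rule; the threshold values are assumptions.
import torch

def classify_alert(model, features, s1=0.8, s2=0.2, u_max=0.15, n_samples=30):
    """Map a model score and its uncertainty to one of the three categories.

    Because the VBLinear head samples its weights, running the model several
    times gives a distribution of scores; its standard deviation is used
    here as the uncertainty estimate.
    """
    model.eval()
    with torch.no_grad():
        scores = torch.stack([model(features) for _ in range(n_samples)])
    mean, uncertainty = scores.mean().item(), scores.std().item()

    if uncertainty > u_max:
        return "Undetermined"
    if mean > s1:
        return "True Positive"
    if mean < s2:
        return "False Positive"
    return "Undetermined"
```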
Training Strategy
Two training approaches are used in combination or individually:
From Scratch: A new model tailored to specialized requirements
Fine-Tuning: A model bootstrapped from a pre-trained base model and adapted to customer-specific data, improving training speed and accuracy
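A hedged sketch of the fine-tuning path is shown below. The checkpoint path, dataset, and hyperparameters are placeholders, not Seculyze's actual pipeline.

```python
# Illustrative fine-tuning loop; checkpoint path, dataset, and
# hyperparameters are placeholders.
import torch
from torch.utils.data import DataLoader

def fine_tune(model, customer_dataset, pretrained_path="base_model.pt",
              epochs=5, lr=1e-4):
    # Bootstrap from the globally pre-trained weights...
    model.load_state_dict(torch.load(pretrained_path))
    # ...then continue training on the customer's own labeled alerts.
    loader = DataLoader(customer_dataset, batch_size=64, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.BCELoss()

    model.train()
    for _ in range(epochs):
        for features, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(features).squeeze(1), labels.float())
            loss.backward()
            optimizer.step()
    return model
```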
Data Sources
To ensure high-quality predictions, the model is trained and fine-tuned using:
Historical alert data from consenting customers
Threat intelligence feeds
The customer's own alert data
This combination allows the model to be both globally informed and locally adapted.
Continuous Monitoring & Evaluation
The ML model’s performance is constantly evaluated using metrics such as:
F1 Score
Confusion Matrix
Model Confidence Intervals (via VBLinear output)
This ensures that the system evolves and maintains optimal classification performance.
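As an illustration, the first two metrics can be computed with scikit-learn on a held-out, analyst-labeled set; the data below are made-up examples. The confidence intervals come from the spread of sampled VBLinear scores, as in the sampling sketch earlier.

```python
# Illustrative evaluation; the label data here are made-up examples.
from sklearn.metrics import f1_score, confusion_matrix

# 1 = True Positive (real threat), 0 = False Positive (benign),
# as labeled by analysts vs. predicted by the model.
y_true = [1, 0, 0, 1, 1, 0]
y_pred = [1, 0, 1, 1, 0, 0]

print("F1 score:", f1_score(y_true, y_pred))
print("Confusion matrix:\n", confusion_matrix(y_true, y_pred))
```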