March 8, 2024

Fine-tuning Large Language Models using Causal Inference for cybersecurity

Generative adversarial networks (GANs) and other synthetic data models require diverse, representative training datasets to produce realistic outputs. For cybersecurity use cases, gathering sufficient data covering the full threat landscape is challenging. Causal inference provides an opportunity to extract high-quality real-world samples from organizational data to improve generative model training. This paper explains how causal analytics enhances datasets for better generative AI.

The Need for Robust Training Data

Successful GANs rely on training datasets that capture the true variations of real data. Insufficiently varied samples lead generative models to simply reproduce training data without generalizing. Key data requirements for effective GAN training include:

Large sample size representing edge cases;
Diversity across benign and malicious scenarios;
Accuracy with limited labeling errors;
Completeness without missing attributes;
Balance across data categories;
Lack of sampling bias skewing distributions.

Obtaining a dataset meeting all these criteria is difficult in cybersecurity. This is where causal inference can help.
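As a rough illustration, an audit against several of these criteria can be automated before any causal modeling begins. The sketch below (pure Python, with hypothetical field and label names) checks sample size, completeness, and label balance for a list of labeled records:

```python
def audit_dataset(records, required_fields, min_ratio=0.2):
    """Summarize completeness and label balance for a labeled dataset.

    records: list of dicts, each expected to carry required_fields
    plus a 'label' key. min_ratio is the smallest acceptable ratio
    between the rarest and the most common label.
    """
    incomplete = sum(
        1 for r in records
        if any(f not in r or r[f] is None for f in required_fields)
    )
    counts = {}
    for r in records:
        counts[r.get("label")] = counts.get(r.get("label"), 0) + 1
    ratio = min(counts.values()) / max(counts.values()) if counts else 0.0
    return {
        "n": len(records),
        "incomplete": incomplete,
        "label_counts": counts,
        "balanced": ratio >= min_ratio,
    }
```

A report like this only flags symptoms; deciding which records to keep is where the causal techniques below come in.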

Causal Inference for Data Filtering

Causal modeling provides mechanisms to filter real-world data down to high-quality samples best suited for generative training:

Counterfactual analysis can simulate edge case cyber events like rare attacks.
Causal graphs highlight relationships between attributes, ensuring completeness.
Estimating causal effects surfaces mislabeled examples for removal.
Balancing data based on causal associations mitigates sampling bias.
Prioritizing diversity using causal feature importance limits oversampled categories.

Together, these techniques extract a robust training dataset from operational data.
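To make the labeling-error point concrete: one simple, assumption-laden proxy for causal-effect estimation is stratification on the attributes believed to drive the label. The sketch below flags records whose label is rare within their stratum; the feature names are illustrative, not a prescribed schema:

```python
from collections import defaultdict

def flag_label_errors(records, threshold=0.1):
    """Flag records whose label is rare within their feature stratum.

    records: iterable of (features, label) pairs, where features is a
    hashable tuple of attributes assumed to causally determine the label.
    A label occurring in less than `threshold` of its stratum is flagged
    as a candidate labeling error for human review.
    """
    strata = defaultdict(list)
    for features, label in records:
        strata[features].append(label)

    flagged = []
    for features, label in records:
        labels = strata[features]
        if labels.count(label) / len(labels) < threshold:
            flagged.append((features, label))
    return flagged
```

In practice the strata would come from a fitted causal graph rather than hand-picked columns, but the review loop is the same: flag, inspect, then drop or relabel.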

Use Cases and Applications

Some examples where causal analytics could improve generative model data include:

Filter endpoint data to a diverse sample covering attack vectors and mitigations;
Select a balanced set of network traffic encompassing vulnerabilities and hardened systems;
Extract a complete training set of phishing emails with validated labels;
Obtain a representative corpus of adversary communications from a broader intelligence database.

In each case, causal inference provides a systematic data refinement process.
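As a starting point for the balanced network-traffic selection above, a stratified downsample across categories can serve as a baseline. The sketch below groups records by a hypothetical category key; a full pipeline would weight strata by causal associations rather than raw counts:

```python
import random
from collections import defaultdict

def balance_by_category(records, key, seed=0):
    """Downsample every category to the size of the rarest one."""
    groups = defaultdict(list)
    for record in records:
        groups[key(record)].append(record)
    n = min(len(g) for g in groups.values())
    rng = random.Random(seed)  # fixed seed for a reproducible selection
    balanced = []
    for group in groups.values():
        balanced.extend(rng.sample(group, n))
    return balanced
```

Downsampling trades volume for balance; when the rare category is very small, oversampling or synthetic augmentation may be preferable.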

Implementation Considerations

However, some implementation factors should be considered:

Causal models require their own large datasets for accurate inferences;
Oversimplified models may exclude useful samples inadvertently;
Human oversight is necessary to validate filtered datasets;
Generated training data should be routinely evaluated for drift;
Explainable causal models enable informed dataset review.

With proper monitoring and evaluation, organizations can leverage causal AI to significantly improve generative model inputs.
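The drift check in particular is easy to automate. A minimal sketch (pure Python, assuming a single numeric feature) compares histograms of a baseline sample and the current generated data via total variation distance, where 0 means identical and 1 means fully disjoint:

```python
def drift_score(baseline, current, bins=10):
    """Total variation distance between the two samples' histograms.

    Returns a value in [0, 1]: 0 for matching histograms, 1 when the
    samples occupy disjoint ranges. A threshold on this score can
    trigger a review or retraining workflow.
    """
    lo = min(min(baseline), min(current))
    hi = max(max(baseline), max(current))
    width = (hi - lo) / bins or 1.0  # degenerate case: all values equal

    def hist(sample):
        counts = [0] * bins
        for x in sample:
            counts[min(int((x - lo) / width), bins - 1)] += 1
        return [c / len(sample) for c in counts]

    return 0.5 * sum(abs(a - b) for a, b in zip(hist(baseline), hist(current)))
```

For multivariate data, a per-feature score or a two-sample test (e.g. Kolmogorov-Smirnov) would be the natural next step.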


Effective generative AI relies on robust training data covering the distributions of real-world examples. Manually gathering this data is infeasible in cybersecurity settings. Causal inference provides a force multiplier for extracting diverse, accurate, and representative datasets from operational data. Combined strategically, causal analytics and generative models can enhance the sophistication of cyber defenses across detection, prediction, and mitigation capabilities.