Bayes’ Theorem describes how to update a degree of belief, or certainty, as new evidence arrives. The Cybersecurity industry has wrestled with the ideas of sensitivity and specificity for quite some time. In computer science this trade-off is codified as binary classification (decision trees, Bayesian networks, support vector machines, neural networks).
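As a minimal sketch of why this matters for alerting, the snippet below applies Bayes’ Theorem to a single alert. All of the rates are illustrative assumptions, not measurements from any real detector:

```python
# A minimal sketch: Bayes' Theorem applied to one security alert.
# The prior, sensitivity, and false positive rate below are assumed
# numbers for illustration only.

def posterior(prior, sensitivity, false_positive_rate):
    """P(malicious | alert fired), via Bayes' Theorem."""
    p_alert = sensitivity * prior + false_positive_rate * (1 - prior)
    return (sensitivity * prior) / p_alert

# Assume 1 in 10,000 events is malicious, the detector catches 99%
# of malicious events, and fires on 1% of benign ones.
belief = posterior(prior=0.0001, sensitivity=0.99, false_positive_rate=0.01)
print(f"{belief:.4f}")  # well under 1% -- most alerts are still benign
```

The base-rate effect this demonstrates is exactly why raw alert counts mislead: even a seemingly accurate detector produces mostly False Positives when malicious events are rare.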
Many industry Cybersecurity solutions will be relegated to the dustbin because they fail to improve and evolve on this concept over time.
True Positive: A malicious pattern correctly identified as malicious.
False Positive: A non-malicious pattern incorrectly identified as malicious.
True Negative: A non-malicious pattern correctly rejected.
False Negative: A malicious pattern incorrectly rejected as non-malicious.
We need a Cybersecurity view that gives us degrees of sensitivity and specificity while preserving the context of observed patterns. The classifiers need to be accurate, and to bring as much context to the analyst as possible in a simple, intuitive manner. The full view should be available to us, and the signal-to-noise ratio should be optimized visually so that analysts gravitate toward the tooling. And finally, when a False Positive does appear, we need the ability to train the system to reclassify it as a True Negative where appropriate, rather than suppressing the data.
Some of the traits that people bring are domain knowledge, preconceptions, inefficiencies, learned behaviors, social ideals, etc. Basically, people need the ability to help train the systems to learn our respective environments by triaging False Positives into True Negatives. The desired outcome is to increase the accuracy of True Positives and to minimize False Negatives (the really bad outcome).
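One way to picture that feedback loop is a classifier that learns directly from analyst verdicts. The class and pattern strings below are hypothetical; the point is that a triaged False Positive becomes a labeled benign example that retrains the model, while the underlying observation is kept, not suppressed:

```python
# A minimal sketch of an analyst-in-the-loop classifier. Analyst triage
# feeds labeled examples back into the model; nothing is discarded.
from collections import Counter

class TriageTrainedClassifier:
    """Naive frequency-based scorer that learns from analyst triage."""

    def __init__(self):
        self.malicious = Counter()
        self.benign = Counter()

    def triage(self, pattern, is_malicious):
        """Record an analyst verdict: keep the data, update the model."""
        (self.malicious if is_malicious else self.benign)[pattern] += 1

    def classify(self, pattern):
        # Patterns triaged as benign more often than malicious are
        # rejected going forward; the observations themselves remain.
        if self.malicious[pattern] > self.benign[pattern]:
            return "malicious"
        return "benign"

clf = TriageTrainedClassifier()
clf.triage("encoded-powershell-launch", is_malicious=True)
clf.triage("nightly-backup-script", is_malicious=False)  # a triaged False Positive
print(clf.classify("nightly-backup-script"))  # benign
```

A real system would use a proper learner (e.g., a Bayesian one) rather than raw counts, but the shape of the loop is the same: triage produces labels, labels produce a tuned environment-specific model.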
I second the statement made by David Sheidlower: “Everything that happens on that network should be explainable.” I would add that many explanations can be codified and classified by machines, in algorithms that we can train over time, making the job we do easier and more accurate.