User Rank: Apprentice
11/19/2015 | 6:42:56 PM
As the author notes:
"Any ML system must attempt to separate and differentiate activity based either on pre-defined (i.e. trained learning) or self-learned classifications"
Thus, for a cybersecurity machine learning system to be effective, it must have some principled and structured method to differentiate appropriate and inappropriate access. And, moreover, the system must have the correct context to make such a decision.
The author argues that ML systems struggle to do this:
"Unfortunately, ML systems are not good at describing why a particular activity is anomalous, and how it is related to others. So when the ML system delivers an alert, you still have to do the hard work of understanding whether it is a false positive or not, before trying to understand how the anomaly is related to other activity in the system."
I would point the author to a new line of machine learning algorithms for access auditing called Explanation-Based Auditing.
A detailed peer-reviewed publication can be found at vldb.org/pvldb/vol5/p001_danielfabbri_vldb2012.pdf.
The general idea is to learn why accesses to data occur (e.g., the doctor accessed a record because of an appointment with the patient). This can be modeled as a graph search between the person accessing the data and the data accessed. When such an "explanation" is found, the system can determine the reason for access, filtering it away from manual review.
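The graph-search idea can be sketched in a few lines. The node names and graph structure below are purely illustrative assumptions, not the paper's actual schema: a breadth-first search walks legitimate relationships (appointments, record ownership) from the accessing user to the accessed record, and a found path serves as the "explanation."

```python
from collections import deque

# Hypothetical access graph: nodes are users, appointments, patients, and
# records; edges are legitimate relationships. Illustrative only.
GRAPH = {
    "dr_smith": ["appt_42"],           # doctor has an appointment
    "appt_42": ["patient_alice"],      # appointment is with this patient
    "patient_alice": ["record_alice"], # patient owns this medical record
}

def find_explanation(accessor, record, graph):
    """BFS from the accessing user to the accessed record.
    Returns the path of relationships (an 'explanation') or None."""
    queue = deque([[accessor]])
    seen = {accessor}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == record:
            return path
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no explanation found: flag for manual review

# Explained access -> filtered away from manual review:
print(find_explanation("dr_smith", "record_alice", GRAPH))
# Unexplained access -> surfaced to the privacy/security officer:
print(find_explanation("dr_jones", "record_alice", GRAPH))
```

Accesses that yield a path are filtered out automatically; only the unexplained ones reach a human reviewer.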
Thus, as the previous comment states, such a system can remove a tremendous number of false positives, allowing the privacy or security officer to focus on the unexplained and suspicious.