User Rank: Apprentice
11/19/2015 | 6:42:56 PM
As the author notes:
"Any ML system must attempt to separate and differentiate activity based either on pre-defined (i.e. trained learning) or self-learned classifications"
Thus, for a cybersecurity machine learning system to be effective, it must have some principled and structured method to differentiate appropriate from inappropriate access. Moreover, the system must have the correct context to make such a decision.
The author makes the statement that ML systems struggle to do this:
"Unfortunately, ML systems are not good at describing why a particular activity is anomalous, and how it is related to others. So when the ML system delivers an alert, you still have to do the hard work of understanding whether it is a false positive or not, before trying to understand how the anomaly is related to other activity in the system."
I would point the author to a new line of machine learning algorithms for access auditing called Explanation-Based Auditing.
A detailed peer-reviewed publication can be found at vldb.org/pvldb/vol5/p001_danielfabbri_vldb2012.pdf.
The general idea is to learn why accesses to data occur (e.g., the doctor accessed a record because of an appointment with the patient). This can be modeled as a graph search between the person accessing the data and the data accessed. When such an "explanation" is found, the system can determine the reason for access, filtering it away from manual review.
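The graph-search idea above can be sketched in a few lines. This is a toy illustration, not the algorithm from the paper: the node names, edges, and `explain_access` function are all hypothetical, and a real system would build the graph from appointment schedules, care-team assignments, and similar hospital data.

```python
from collections import deque

# Toy access graph: nodes are users, appointments, patients, and records;
# edges are hypothetical relationships that could justify an access.
# All names here are illustrative only.
GRAPH = {
    "dr_smith": ["appointment_42"],
    "appointment_42": ["patient_jones"],
    "patient_jones": ["record_jones"],
    "dr_brown": [],  # no relationship connecting this user to the record
}

def explain_access(accessor, record, graph):
    """Breadth-first search for a path from the accessing user to the
    accessed record; such a path serves as an 'explanation' for the access."""
    queue = deque([[accessor]])
    seen = {accessor}
    while queue:
        path = queue.popleft()
        if path[-1] == record:
            return path  # explanation found: filter from manual review
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # unexplained access: flag for the privacy officer

print(explain_access("dr_smith", "record_jones", GRAPH))
# ['dr_smith', 'appointment_42', 'patient_jones', 'record_jones']
print(explain_access("dr_brown", "record_jones", GRAPH))
# None
```

Accesses that return a path are explained away automatically; only the `None` cases reach a human reviewer.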
Thus, as the previous comment states, such a system can remove a tremendous number of false positives, allowing the privacy or security officer to focus on the unexplained and suspicious accesses.