User Rank: Apprentice
11/19/2015 | 6:42:56 PM
As the author notes:
"Any ML system must attempt to separate and differentiate activity based either on pre-defined (i.e. trained learning) or self-learned classifications"
Thus, for a cybersecurity machine learning system to be effective, it must have some principled and structured method to differentiate appropriate and inappropriate access. And, moreover, the system must have the correct context to make such a decision.
The author states that ML systems struggle to do this:
"Unfortunately, ML systems are not good at describing why a particular activity is anomalous, and how it is related to others. So when the ML system delivers an alert, you still have to do the hard work of understanding whether it is a false positive or not, before trying to understand how the anomaly is related to other activity in the system."
I would point the author to a new line of machine learning algorithms for access auditing called Explanation-Based Auditing.
A detailed peer-reviewed publication can be found at vldb.org/pvldb/vol5/p001_danielfabbri_vldb2012.pdf.
The general idea is to learn why accesses to data occur (e.g., the doctor accessed a record because of an appointment with the patient). This can be modeled as a graph search between the person accessing the data and the data accessed. When such an "explanation" is found, the system can determine the reason for access, filtering it away from manual review.
Thus, as the previous comment states, such a system can remove a tremendous number of false positives, allowing the privacy or security officer to focus on the unexplained and suspicious.
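The graph-search idea above can be sketched in a few lines. This is a toy illustration, not the published algorithm: the node names, edge data, and the `explain_access` helper are all hypothetical, and in a real deployment the edges would be derived from hospital tables such as appointments or treatments.

```python
from collections import deque

# Hypothetical toy graph: edges connect users, clinical events, and records.
# (Assumed data for illustration only.)
edges = {
    "dr_smith": ["appt_1001"],
    "appt_1001": ["patient_42_record"],
    "dr_jones": [],  # no event connecting dr_jones to patient 42
}

def explain_access(user, record, edges):
    """Breadth-first search for a path (an 'explanation') from the
    user who accessed the data to the record that was accessed."""
    queue = deque([[user]])
    seen = {user}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == record:
            return path  # explanation found: access looks expected
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no explanation: flag for manual review

# dr_smith's access is explained via the appointment;
# dr_jones's access has no explanation and would be flagged.
print(explain_access("dr_smith", "patient_42_record", edges))
print(explain_access("dr_jones", "patient_42_record", edges))
```

Accesses that yield a path are filtered out of the audit queue, so the reviewer only sees the unexplained ones.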