User Rank: Apprentice
11/19/2015 | 6:42:56 PM
As the author notes:
"Any ML system must attempt to separate and differentiate activity based either on pre-defined (i.e. trained learning) or self-learned classifications"
Thus, for a cybersecurity machine learning system to be effective, it must have a principled, structured method for differentiating appropriate from inappropriate access. Moreover, the system must have the correct context to make such a decision.
The author observes that ML systems struggle to do this:
"Unfortunately, ML systems are not good at describing why a particular activity is anomalous, and how it is related to others. So when the ML system delivers an alert, you still have to do the hard work of understanding whether it is a false positive or not, before trying to understand how the anomaly is related to other activity in the system."
I would point the author to a new line of machine learning algorithms for access auditing called Explanation-Based Auditing.
A detailed peer-reviewed publication can be found at vldb.org/pvldb/vol5/p001_danielfabbri_vldb2012.pdf.
The general idea is to learn why accesses to data occur (e.g., the doctor accessed a record because of an appointment with the patient). This can be modeled as a graph search between the person accessing the data and the data accessed. When such an "explanation" is found, the system can determine the reason for access, filtering it away from manual review.
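This graph-search idea can be sketched in a few lines. The example below is my own illustrative toy, not code from the paper: the node names, edges, and the breadth-first search are all assumptions, and a real system would build the graph from access logs and clinical metadata.

```python
from collections import deque

# Hypothetical access graph: nodes are users, records, and intermediate
# facts (e.g., appointments); an edge means a relationship that could
# justify an access. All names here are illustrative.
graph = {
    "dr_smith": ["appointment_42"],      # dr_smith has an appointment
    "appointment_42": ["patient_jones"], # the appointment is with this patient
    "dr_lee": [],                        # dr_lee has no link to the patient
}

def explain_access(accessor, record, graph):
    """Breadth-first search for a path from the accessing user to the
    accessed record. A path is an 'explanation' for the access; accesses
    with explanations can be filtered out of manual review."""
    queue = deque([[accessor]])
    seen = {accessor}
    while queue:
        path = queue.popleft()
        for neighbor in graph.get(path[-1], []):
            if neighbor == record:
                return path + [neighbor]  # explanation found
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None  # no explanation: flag for the privacy officer

print(explain_access("dr_smith", "patient_jones", graph))
# -> ['dr_smith', 'appointment_42', 'patient_jones']
print(explain_access("dr_lee", "patient_jones", graph))
# -> None
```

An access that yields a path (doctor → appointment → patient) is explained and can be suppressed from review; an access that yields no path remains in the audit queue.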
Thus, as the previous comment states, such a system can remove a tremendous number of false positives, allowing the privacy or security officer to focus on unexplained, suspicious accesses.