User Rank: Author
3/29/2017 | 7:37:11 PM
I agree with your first statement, "humans will always be the easiest attack vector for hackers". But I have increasingly come to realize that your second statement, "we need to continue training users", is not the logical conclusion.
This may seem paradoxical at first: if humans are the weak link, why not train them? But as attacks become more and more sophisticated, the sheer effort of training will become unbearable, and it will pay off less and less. Similarly, as the number of variants of the attacks we see mushrooms, it will become harder for regular mortals to keep things straight. And this is exactly what is happening.
So what can we do about the fact that humans are, and will remain, the easiest attack vector? We need software that reflects the perspective of the human victims. What makes people fall for attacks? If we can create filters that identify what is deceptive to people, then we are addressing the problem.
Am I talking about artificial intelligence? Not necessarily. This can be solved using expert systems, machine learning, and combinations of the two. What I am really talking about is software that interprets things the way people do, and then filters out what is risky. Can we call this "artificial empathy"?
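To make the "combinations thereof" point concrete, here is a minimal sketch of what such a hybrid filter might look like: hand-written expert rules that encode cues known to deceive people (urgency language, lookalike domains), blended with a small machine-learned text classifier. Everything here is illustrative and hypothetical: the rules, thresholds, weights, and the tiny training set are placeholders, not a real product or the commenter's actual design.

```python
# Hybrid "artificial empathy" sketch: expert rules + a learned classifier.
# All heuristics and data below are hypothetical placeholders.
import re

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# --- Expert-system side: cues that tend to deceive *people* ---
URGENCY = re.compile(
    r"\b(urgent|immediately|account (?:locked|suspended)|verify now)\b", re.I
)
# Crude lookalike-domain heuristic, e.g. "paypal-secure.example".
LOOKALIKE = re.compile(r"\b\w+(?:-secure|-login|-support)\.\w+\b", re.I)

def rule_score(text: str) -> float:
    """Return a 0..1 deception score from hand-written heuristics."""
    score = 0.0
    if URGENCY.search(text):
        score += 0.5
    if LOOKALIKE.search(text):
        score += 0.5
    return min(score, 1.0)

# --- Machine-learning side: learn wording patterns from labeled examples ---
# Tiny illustrative training set; a real system would need far more data.
train_texts = [
    "Your account is suspended, verify now at paypal-secure.example",
    "Urgent: confirm your password immediately or lose access",
    "Lunch meeting moved to 1pm, see you there",
    "Attached is the quarterly report we discussed",
]
train_labels = [1, 1, 0, 0]  # 1 = deceptive, 0 = benign

vectorizer = TfidfVectorizer()
classifier = LogisticRegression().fit(
    vectorizer.fit_transform(train_texts), train_labels
)

def ml_score(text: str) -> float:
    """Probability the classifier assigns to the 'deceptive' class."""
    return classifier.predict_proba(vectorizer.transform([text]))[0, 1]

def is_risky(text: str, weight: float = 0.5, threshold: float = 0.5) -> bool:
    """Blend both signals and flag the message if the blend crosses a threshold."""
    blended = weight * rule_score(text) + (1 - weight) * ml_score(text)
    return blended >= threshold

print(is_risky("URGENT: your account locked, verify now at bank-login.example"))
```

The point of the blend is that neither side alone captures human-facing deception: the rules encode what an expert knows fools people today, while the learned model picks up wording patterns the rules miss, and the weight between them is a tunable judgment call.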