User Rank: Author
3/29/2017 | 7:37:11 PM
I agree with your first statement, "humans will always be the easiest attack vector for hackers". But I have increasingly come to realize that your second statement, "we need to continue training users", is not the logical conclusion.
This may seem paradoxical at first: if humans are the weak link, why not train them? But as attacks become more and more sophisticated, the sheer effort of training will become unbearable -- and pay off less and less. Similarly, as the number of attack variants we see mushrooms, it will be harder for regular mortals to keep things straight. And this is exactly what is happening.
So what can we do to deal with the fact that humans are, and will remain, the easiest attack vector? We need software that reflects the perspective of the human victims. What makes people fall for attacks? If we can create filters that identify what is deceptive -- to people -- then we are addressing the problem.
Am I talking about artificial intelligence? Not necessarily. This can be solved using expert systems, machine learning, and combinations of the two. What I am really talking about is software that interprets things the way people do, and then filters out what is risky. Can we call this "artificial empathy"?
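To make the idea concrete, here is a minimal sketch of what such a filter could look like: an expert-system layer of hand-written rules encoding cues people are known to fall for (urgency pressure, spoofed display names, raw-IP links), combined with a score from a separately trained machine-learning model. All names, rules, and thresholds below are hypothetical illustrations of the concept, not any real product's implementation.

```python
# Hypothetical sketch: rule-based deception cues plus an external ML score.
import re
from dataclasses import dataclass

@dataclass
class Message:
    sender_display: str   # e.g. "PayPal Support"
    sender_address: str   # e.g. "alert@paypa1-verify.net"
    subject: str
    body: str

# Expert-system layer: cues that deceive *people*, not just machines.
URGENCY = re.compile(r"\b(urgent|immediately|account (closed|suspended)|verify now)\b", re.I)

def heuristic_score(msg: Message) -> float:
    score = 0.0
    if URGENCY.search(msg.subject) or URGENCY.search(msg.body):
        score += 0.4                      # urgency pressure
    brand = msg.sender_display.split()[0].lower()
    if brand and brand not in msg.sender_address.lower():
        score += 0.4                      # display name does not match the address
    if re.search(r"https?://\d+\.\d+\.\d+\.\d+", msg.body):
        score += 0.3                      # raw-IP links are rarely legitimate
    return min(score, 1.0)

def is_deceptive(msg: Message, ml_score: float, threshold: float = 0.5) -> bool:
    """Flag the message if either the rules or the externally trained
    machine-learning model considers it risky."""
    return max(heuristic_score(msg), ml_score) >= threshold

if __name__ == "__main__":
    mail = Message("PayPal Support", "alert@paypa1-verify.net",
                   "Urgent: verify now",
                   "Your account suspended. Click http://192.0.2.7/login")
    print(is_deceptive(mail, ml_score=0.2))   # True: the heuristics alone flag it
```

The point of the sketch is the architecture, not the specific rules: the rule layer captures what a human analyst knows about deception, the ML score captures patterns learned from data, and either one can veto a message on the user's behalf.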