Don't Automate: Penetration Testing (this is where I disagree)
Pen testing should be automated because the purpose of a pen test is to identify vulnerabilities and make them known to the end-user so that proactive, remedial measures can be taken. The problem I have with the comment is that everyone is aware there is no silver bullet to address every area of security, but I do think automation would help shrink the threat matrix because systems would start learning from one another. Yes, I do believe human intervention is still needed (only in certain cases, one-offs), but this can be improved over time as the machine learns that certain processes we kick off are not intended to cause harm; that is the learning process. As noted, machines will miss certain things, but that is where the learning comes into play, along with system updates. Remember, we (humans and machines) will miss some things, but by building a mature model where systems can learn in the cyber-security arena, we can reduce false positives to a minimum.
To address Boyce's or the moderator's concern, we already have "Continuous Monitoring" and SIEM, where sessions and potential threats are analyzed on an ongoing basis. The problem is what happens when that person gets tired or is not at the office (late at night, a skeleton crew is not as talented as the group on shift during the day). There needs to be some intelligence in the decision-making process so that threats are ranked by severity. In addition, the system needs to be able to adjust on the fly (a sliding scale): if a new threat is less nefarious than the one before or after it, the system should reshuffle the scale and move the more serious threat up the ladder of priority. From that classification, the system should pull data from external sources so that potential resolutions can be matched against the threat. Finally, each proposed resolution should carry a confidence percentage (a 100%, 80%, 70% resolution scale), which the system refines by recreating the vulnerability in a virtual mock environment.
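To make the sliding-scale idea concrete, here is a rough Python sketch. The severity scores, field names, and the external "resolution feed" dictionary are my own illustrative assumptions, not a real SIEM API; the point is only that a more severe event arriving later should still be handled first and matched against candidate fixes with a confidence attached.

```python
# Minimal sketch of sliding-scale threat prioritization (assumed data model).
import heapq

class ThreatQueue:
    """Priority queue where the most severe outstanding threat is handled first."""
    def __init__(self):
        self._heap = []  # entries: (-severity, name)

    def add(self, name, severity):
        # severity runs 0.0 (benign) to 1.0 (critical); negating it makes the
        # min-heap behave like a max-heap, so a worse threat arriving later
        # still slides to the front of the queue.
        heapq.heappush(self._heap, (-severity, name))

    def next_threat(self):
        if not self._heap:
            return None
        neg_sev, name = heapq.heappop(self._heap)
        return name, -neg_sev

def match_resolutions(threat_name, external_feed):
    """Look up candidate fixes in a (hypothetical) external repository and
    return them with a confidence percentage attached."""
    return external_feed.get(threat_name, [])

if __name__ == "__main__":
    q = ThreatQueue()
    q.add("internal port scan", 0.4)
    q.add("registry write by unsigned binary", 0.8)  # arrives later, handled first
    feed = {"registry write by unsigned binary": [("restore baseline hive", 90)]}
    name, sev = q.next_threat()
    print(name, sev, match_resolutions(name, feed))
```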
The solution could be validated in seconds by the machine-learning process, whereby the system learns how to mitigate the problem from external repositories or from tactics it has worked out on its own by breaking down, analyzing, resolving, and then reporting on the threat it flagged as an anomaly. For example, if a variant injected code into the registry, the kernel, or a system file, the system would identify what was written, strip out the injected code, or replace the system file with a baseline copy validated against the correct MD5 checksum. That would be a win-win for all parties involved (except the actor who was trying to access the environment in the first place).
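A rough sketch of that baseline-checksum step is below. The file paths and baseline store are made up for illustration, and MD5 appears only because it is what I mentioned above; a production tool would prefer SHA-256 or better.

```python
# Sketch of "compare against a validated baseline and restore if tampered".
import hashlib
import shutil

def md5_of(path):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_and_restore(target, baseline_copy, baseline_md5):
    """If the target file no longer matches its known-good checksum,
    replace it with the validated baseline copy and report the action."""
    if md5_of(target) == baseline_md5:
        return "clean"
    shutil.copy2(baseline_copy, target)  # roll back the injected change
    if md5_of(target) == baseline_md5:
        return "restored from baseline"
    return "restore failed -- escalate to a human"

# Example with hypothetical paths:
# print(verify_and_restore("/etc/hosts", "/var/baselines/hosts", "9f2c..."))
```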
It would take a human days to figure out the type of attack, what was changed, what was affected, and the remediation techniques, and then report on the incident, where a machine could do it in minutes or seconds. There will be some tweaking, but we could reduce not only the number of possible security vulnerabilities but system crashes and application failures as well.
If we were able to reduce both the attack vectors and the threat landscape, the overall process would improve fourfold: the machines would point out the contention, identify the resolution, test it, implement it, and report on it, freeing the end-user to concentrate on other tasks. If a finding turned out to be a false positive, it would be up to us to update the process to handle that anomaly more efficiently (the end-user could flag whether the anomaly needed expedited review or whether the remediation methods were correct). That process could then be shared with other machines so the machines learn from one another, as sketched below. Moreover, if machines also reported health information, whether security levels or functional processing levels, we could improve every aspect of the computing process.
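As a toy example of that false-positive feedback and sharing step: an analyst marks a finding as benign once, the decision lands in a shared rule set, and every peer that loads the rules stops re-flagging it. The JSON file here is just a stand-in for whatever transport real peers would use; it is an assumption, not a prescribed design.

```python
# Toy feedback loop: record analyst-confirmed false positives in a shared
# rule set so other machines suppress the same anomaly.
import json
import os

RULES_FILE = "shared_rules.json"  # hypothetical shared store

def load_rules():
    if os.path.exists(RULES_FILE):
        with open(RULES_FILE) as f:
            return json.load(f)
    return {"false_positives": []}

def mark_false_positive(finding_id, reason):
    rules = load_rules()
    rules["false_positives"].append({"id": finding_id, "reason": reason})
    with open(RULES_FILE, "w") as f:
        json.dump(rules, f, indent=2)

def should_alert(finding_id):
    known = {fp["id"] for fp in load_rules()["false_positives"]}
    return finding_id not in known

if __name__ == "__main__":
    mark_false_positive("scheduled-backup-writes", "legitimate nightly job")
    print(should_alert("scheduled-backup-writes"))  # False -- suppressed everywhere
    print(should_alert("unsigned-driver-load"))     # True  -- still escalated
```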
Todd