In any profession, and in security specifically, it is the professional's understanding and "nature" that are the most difficult to develop. Knowledge gaps can always be closed, but this unnamed "grit" sometimes seems untrainable.
When Facebook created Groups, it encountered the challenges every such system has encountered in the past, such as spam and spam bots. Facebook is highly active and capable in countering these, but it also enlisted the help of its users by empowering them to make decisions on their own.
A Group administrator faces the daily decision of whom to allow to join their Group. Over time, Facebook started displaying parameters that make this triage much easier. Aside from showing the person's name (with a link to their profile, so that you can examine it), it added immediately observable signals.
Mutual friends and friends in Group
If the person already has mutual friends with you, it is much easier for you to estimate that they are a real user. You could even go and ask about them, although the very point of these parameters is to let you make a less time-consuming decision. Friends in Group is similar: unless your group has been entirely taken over by spam bots (which give each other social proof, rendering this parameter useless), it suggests the person is potentially relevant to, for example, the topic of discussion.
Time on Facebook and Group Membership
Facebook also shows you when the profile was created. It used to be that a profile older than a month was fine; nowadays, younger than a year is a red flag. The other parameter, "Member of," displays how many groups the user belongs to. The current effective rule of thumb is that membership in too few groups, or too many, adds to the risk profile. It isn't reliable by itself, though.
If you can afford to delay the decision and wait a couple of days before approving a new user, you give Facebook's other anti-fraud systems time to identify that the account is phony. If others have already reported the abuser as a fake account, the name appears in black instead of the clickable blue that would lead to their profile.
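The signals above can be combined into a rough mental scoring model. A minimal sketch, assuming hypothetical field names and illustrative thresholds (Facebook's actual systems and data model are not public; only the year-old-account red flag and the too-few/too-many groups heuristic come from the text):

```python
from dataclasses import dataclass

# Hypothetical join-request record; field names are illustrative,
# not Facebook's actual data model.
@dataclass
class JoinRequest:
    mutual_friends: int
    friends_in_group: int
    account_age_days: int
    group_memberships: int
    reported_as_fake: bool

def triage(req: JoinRequest) -> str:
    """Combine the observable signals into a rough decision.

    Thresholds are assumptions for illustration only.
    """
    if req.reported_as_fake:
        return "reject"  # other anti-fraud systems already flagged it
    risk = 0
    if req.mutual_friends == 0 and req.friends_in_group == 0:
        risk += 2  # no social proof at all
    if req.account_age_days < 365:
        risk += 2  # younger than a year: red flag
    if req.group_memberships < 2 or req.group_memberships > 200:
        risk += 1  # too few or too many groups (weak signal by itself)
    if risk >= 3:
        return "reject"
    if risk >= 1:
        return "wait"  # delay a couple of days; let anti-fraud catch up
    return "approve"
```

An established account with mutual friends, e.g. `triage(JoinRequest(5, 2, 800, 30, False))`, comes back `"approve"`, while a fresh account with no social proof is rejected. The "wait" outcome mirrors the delay tactic described above.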
This system for filtering new member requests is pretty neat, but the reason I like it is not how it counters fake profiles and spammers; it is how it trains a multitude of Facebook users to read data points, build a risk profile in their heads, watch for changes over time, and make a decision to protect themselves and their group.
Thus, a group admin, and often even group members, effectively become security intelligence analysts, akin to someone working in anti-fraud, and develop the understanding, feel, or "grit" for making informed security decisions, which is great training for analysts-to-be. Hopefully, it also lets them make better security decisions in their own daily digital lives.