Deep Fake: Setting the Stage for Next-Gen Social Engineering
Humans are susceptible to normalcy bias, which may leave us vulnerable to disinformation that reinforces our beliefs.
Bias and susceptibility were evident during the 2016 US Presidential election and have plagued much of President Trump's first four years in office. The term "fake news," which years ago would have been considered absurd, is now part of our cultural vernacular. Allegations of foreign-state actors interfering with US elections and conspiracy theories related to COVID-19 have divided a culture, communities, friends, and even families. Social media has become a platform that propagates both real and fake news and has confounded the next generation of fact checkers and truth seekers dedicated to vetting accurate content.
"Deep Fake"
In recent years, the emergence of fake news has brought the concept of the deep fake into the public spotlight. Deep fakes leverage deep learning (a form of machine learning) and artificial intelligence to create, edit, or modify content such as video, audio, or photos. The intention is to deceive the consumer of the information, obfuscating the truth in order to influence behavior or opinion.
Recent examples involve former President Barack Obama, Facebook CEO Mark Zuckerberg, and actor Tom Cruise. While some argue that these are good examples of how quickly deep fake technology has advanced, we also see the potential negative ramifications of this technology.
Prominent female public figures, such as celebrities and athletes, have had their likenesses inserted into deep fake pornography. But the potential misuse of deep fakes extends far beyond smearing one's character or reputation.
We have also seen the rise of business email compromise (BEC) and advancement in social engineering techniques, such as spear phishing. According to the FBI, BEC scams typically run the gamut from bogus invoice schemes to C-level impersonation, account takeover, attorney impersonation, and data theft.
These scams do not normally include attachments or even links for the user to open and activate. Instead, they prey on users' normalcy bias and lack of security awareness. Often the request comes with a sense of urgency and a demand for immediate action.
It is easy to see why some people would fall victim to these types of scams, because they often include communications that appear to come from trusted or authoritative figures such as the CEO, president, or CFO of an organization. The email request might even contain specific information such as the customer's name, a valid invoice number, and the correct dollar amount.
The credibility of the request might be enhanced further if the person soliciting has made this type of inquiry before. These scenarios play out every day, and almost none of our technical (security) controls prevent these exploits from succeeding.
Safeguarding in a New Era
To safeguard against BEC, we often advise our clients to validate a suspicious request through a second-level check, such as picking up the phone and calling the requester directly at a known, trusted number. Other digital channels, such as cellular text or instant messaging, can also be used to confirm the validity of the transaction and are highly recommended.
These additional validation measures would normally be enough to thwart scams, and as organizations elevate security awareness among their user communities, these tricks are becoming less effective. But threat actors are also evolving their strategies and finding new and novel ways to improve their chances of success. Could deep fakes be used to enhance a BEC scam? What if threat actors could synthesize the voice of a company's CEO? This scenario might seem far-fetched or highly fictionalized, but an attack of this sophistication was executed successfully last year.
The scam began with the synthesized voice of a company executive demanding that the person on the other end of the line pay an overdue invoice. It was followed up with an email from the fake executive containing accurate financial information and reiterating the urgency of making the payment. The attack succeeded in parting the victim from their money, and both the attackers and the funds disappeared.
Scams involving deep fakes and deep fraud will soon increase, and their effectiveness will be limited only by attackers' ingenuity and imagination. Deep fakes and fake news have already caught the attention of large companies such as Facebook and Google, and many organizations are joining the effort to build technology that will detect and weed out fake content.
Three Best Practices for Protection
In the meantime, what can we do to prepare and protect our organizations from sophisticated social engineering techniques?
1. If your organization has not done so already, enable and integrate single sign-on and multifactor authentication for your critical applications and services. Review how your organization provisions and de-provisions its users.
2. Ensure that your organization has a robust password policy, one that is not so obtrusive that users work around it, yet not so permissive that it is easy to defeat. Get into the habit of continuously reviewing your policies and guidelines to ensure that they match your organization's culture and users.
3. Establish protocols for urgent ad hoc requests, such as requiring approval from two key approvers before a request is processed. Consider out-of-band communication channels and a shared secret/passcode to validate the authenticity of the individual on the other end.
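To illustrate the shared-secret idea, here is a minimal sketch of a challenge-response check built on an HMAC. All names and the pre-shared key are illustrative assumptions, not a specific product or the author's procedure; the key point is that a cloned voice alone cannot answer a fresh challenge without possessing the secret.

```python
import hashlib
import hmac
import secrets

# Assumption: both parties exchanged this key in person or over an
# already-trusted channel, never over email.
SHARED_KEY = b"pre-shared-secret-exchanged-out-of-band"

def make_challenge() -> str:
    """The verifier sends a fresh random challenge over a second channel (e.g., SMS)."""
    return secrets.token_hex(16)

def respond(challenge: str, key: bytes = SHARED_KEY) -> str:
    """The requester proves possession of the shared key without revealing it."""
    return hmac.new(key, challenge.encode(), hashlib.sha256).hexdigest()

def verify(challenge: str, response: str, key: bytes = SHARED_KEY) -> bool:
    """Recompute the expected answer; compare in constant time to resist timing attacks."""
    expected = hmac.new(key, challenge.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

# Example: a finance clerk validates an urgent wire request.
challenge = make_challenge()
answer = respond(challenge)  # computed on the executive's own device
print(verify(challenge, answer))                              # genuine requester
print(verify(challenge, respond(challenge, b"wrong-guess")))  # impostor without the key
```

Because the challenge is random and single-use, a recorded or synthesized voice replaying an old answer fails verification; only a party holding the shared key can respond correctly.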
Introspection helps improve your organization's security posture, as it almost always surfaces gaps in strategy that can be identified and remediated. The defenders are evolving, but so are the hackers and criminals.
Deep fake is coming to an inbox near you. Are you ready?