Bias and susceptibility were evident during the 2016 US Presidential election and have plagued much of President Trump's first four years in office. The term "fake news," which years ago would have been considered absurd, is now part of our cultural vernacular. Allegations of foreign-state actors interfering with US elections and conspiracy theories related to COVID-19 have divided a culture, communities, friends, and even families. Social media has become a platform that propagates both real and fake news and has confounded the next generation of fact checkers and truth seekers dedicated to vetting accurate content.
In recent years, the emergence of fake news has brought the concept of deep fake into the public spotlight. Deep fake leverages deep learning, a branch of machine learning and artificial intelligence, to create, edit, or modify content such as video, audio, or photo artifacts. The intention is to deceive the consumer of information, obfuscating the truth in order to influence behavior or opinion.
Recent examples involve former President Barack Obama, Facebook CEO Mark Zuckerberg, and actor Tom Cruise. While some argue that these are good examples of how quickly deep fake technology has advanced, we also see the potential negative ramifications of this technology.
Prominent female public figures — celebrities and athletes, for example — have been inserted into deep fake pornographic content. Potential misuse of deep fake can extend far beyond smearing one's character or reputation.
We have also seen the rise of business email compromise (BEC) and advancement in social engineering techniques, such as spear phishing. According to the FBI, BEC scams typically run the gamut from bogus invoice schemes to C-level impersonation, account takeover, attorney impersonation, and data theft.
These scams do not normally have attachments or even links for the user to open and activate. Instead, they prey on users' normalcy bias and lack of security awareness. Often the request comes with a sense of urgency and a requirement for immediate, expedient action.
It is easy to see why some people would fall victim to these types of scams, because they often include communications that appear to come from trusted or authoritative figures such as the CEO, president, or CFO of an organization. The email request might even contain specific information such as the customer's name, a valid invoice number, and the correct dollar amount.
The credibility of the request might be enhanced further if the person soliciting has made this type of inquiry previously. These types of scenarios play out every day and almost all our technical (security) controls do not prevent these exploits from succeeding.
Safeguarding in a New Era
In order to safeguard against BEC, we often advise our clients to validate a suspicious request through second-level verification, such as picking up the phone and calling the requester directly. Other means of digital communication—cellular text or instant messaging—can also be used to confirm the validity of the transaction and are highly recommended.
These additional validation measures would normally be enough to thwart scams, and as organizations elevate security awareness amongst their user community, these types of tricks are becoming less effective. But threat actors are also evolving their strategies and finding new and novel ways of improving their chances of success. Could deep fake be utilized to enhance a BEC scam? What if threat actors gained the ability to synthesize the voice of a company's CEO? This scenario might seem far-fetched or highly fictionalized, but an attack of this sophistication was executed successfully last year.
The scam began with the synthesized voice of a company's executive demanding that the person on the other end of the line pay an overdue invoice. It was then followed up with an email from the fake executive containing accurate financial information and a message reiterating the urgency of making the payment. The attack succeeded in parting the victim from their money, and both the attackers and the funds disappeared.
Scams involving deep fake and deep fraud will soon increase, and their effectiveness will be limited only by the attacker's ingenuity and imagination. Deep fake and fake news have already caught the attention of large companies, Facebook and Google, for instance, and many organizations are joining the effort to build technology that will detect and weed out fake content.
Three Best Practices to Protect
In the meantime, what can we do to prepare and protect our organizations from sophisticated social engineering techniques?
Introspection is helpful in improving your organization's security posture, as it almost always reveals avenues for identifying and remediating gaps in strategy. The defenders are evolving, but so are the hackers and the criminals.
Deep fake is coming to an inbox near you. Are you ready?

Jon Mendoza is the CISO for Technologent. He has over 24 years of experience in Information Technology and Cybersecurity and has created security programs for businesses and organizations, leading teams of engineers from various IT disciplines and domains.