Cloud email security provider Ironscales added a new arrow to its quiver: Themis Co-pilot for Microsoft Outlook, a self-service threat-reporting chatbot powered by PhishLLM, the company's own large language model (LLM).
Business email compromise (BEC) is a large and growing problem in which attackers use deception and impersonation to fool users into giving away personally identifiable information (PII), corporate assets, and even money. According to the FBI, BEC has cost businesses $50 billion globally over the past 10 years, with a 17% year-over-year growth in losses in 2022. And Verizon's "2023 Data Breach Investigations Report" (DBIR) found that over the past year, 74% of breaches involved the human element.
Microsoft Outlook addresses BEC with a Report Phish button, which sends a report to system admins but depends on the user being able to discern a phishing email. To close that gap, Ironscales developed Themis Co-pilot to flag suspicious emails and lead recipients through a dialogue explaining how to tell whether something is a phishing lure.
Themis Co-pilot taps into Ironscales' Themis AI security data set, built from millions of security events, and, like ChatGPT and other well-known generative artificial intelligence (AI) tools, employs reinforcement learning from human feedback (RLHF) to keep current. Unlike ChatGPT, Themis AI and PhishLLM are proprietary and held only by Ironscales, which eliminates many of the privacy and confidentiality concerns AI tools raise when employees feed new data back into them.
The idea behind RLHF is that by feeding new phishing examples and human decisions back into PhishLLM via Themis AI, Themis Co-pilot's judgments will become more accurate over time. Ideally, this virtuous cycle will not only produce fewer false positives for the security team to clear but also help train users to recognize the hallmarks of a phishing attempt.
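The feedback cycle described above can be illustrated with a toy sketch. This is not Ironscales' PhishLLM pipeline or true RLHF fine-tuning; it is a deliberately simplified online-learning loop (a hypothetical `ToyPhishScorer` class with invented keyword weights) that shows the general pattern: the model flags an email, a human confirms or overrides the verdict, and that correction is fed back to adjust future scoring.

```python
# Toy illustration of a human-feedback loop for phishing triage.
# NOT Ironscales' actual system -- a minimal sketch of the idea that
# human verdicts feed back into the model to sharpen future decisions.

from collections import defaultdict


class ToyPhishScorer:
    """Naive keyword-weight model nudged by human verdicts."""

    def __init__(self, learning_rate=0.5):
        # token -> "phishiness" weight, starting neutral at 0.0
        self.weights = defaultdict(float)
        self.lr = learning_rate

    def score(self, email_text):
        return sum(self.weights[t] for t in email_text.lower().split())

    def flag(self, email_text, threshold=1.0):
        """Flag the email as suspicious when its score crosses a threshold."""
        return self.score(email_text) >= threshold

    def feedback(self, email_text, is_phish):
        """Human verdict: push token weights toward the confirmed label."""
        direction = 1.0 if is_phish else -1.0
        for t in email_text.lower().split():
            self.weights[t] += self.lr * direction


scorer = ToyPhishScorer()
# Humans report these as phishing; the model learns their vocabulary.
scorer.feedback("urgent wire transfer needed", is_phish=True)
scorer.feedback("verify your password immediately", is_phish=True)
# A benign newsletter is marked safe, reducing future false positives.
scorer.feedback("monthly team newsletter attached", is_phish=False)

print(scorer.flag("urgent transfer request"))    # True
print(scorer.flag("team newsletter this month"))  # False
```

A production system would of course use a learned language model rather than keyword weights, but the loop is the same shape: each human decision becomes training signal for the next round of predictions.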
Themis Co-pilot is available now in closed beta from Ironscales.