AI-Fueled Attacks Are Exposing the Soft Underbelly of Email

AI is learning how to bypass security mechanisms as email struggles to keep pace with evolving cyberattacks.


When it comes to security, email remains vulnerable. The direct access it provides to an entire organization and its outsized role in communications make it a prime target for malicious actors. Email defenses are struggling to keep pace with the evolving nature of cyberattacks, particularly as AI tools are increasingly being used to bypass detection mechanisms.

A surge in sophisticated email-based threats calls for more than advanced security measures; it requires organizations to rely on employees to do their share as vigilant security stewards. Employees have to serve as human firewalls, exercising keen judgment, heightened awareness, and honed skills for detecting the constant wave of inbound threats.

The Soft Underbelly of Email 

A majority of organizations consider email security a top priority. There are a number of reasons for this: 

  • Email provides direct access to employees deep within the organization. Phone numbers can be difficult to locate, but if you have someone’s name, chances are you can guess their email address. 
  • Once bad actors compromise a person’s credentials, they can access everything (via single sign-on) that the employee has access to in an organization.  
  • Email serves as a primary communication funnel for employees to interact with external parties and internal business processes and systems. 
  • Email can be easily exploited for malicious purposes. If a malicious actor is able to send something directly to a victim's inbox, they can prompt the victim to engage in harmful activities such as sharing sensitive data and credentials, taking control of systems and devices, or escalating privileges. 
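The first point above is easy to demonstrate. Below is a minimal sketch of how predictable corporate address formats are; the name, the patterns, and the domain are all hypothetical examples, not data from any real organization:

```python
def candidate_addresses(first, last, domain):
    """Generate common corporate email-address patterns for a name."""
    first, last = first.lower(), last.lower()
    patterns = [
        f"{first}.{last}",    # jane.doe
        f"{first[0]}{last}",  # jdoe
        f"{first}{last[0]}",  # janed
        f"{first}_{last}",    # jane_doe
        f"{last}.{first}",    # doe.jane
    ]
    return [f"{p}@{domain}" for p in patterns]

# A handful of patterns covers most organizations' conventions:
candidate_addresses("Jane", "Doe", "example.com")
# -> ['jane.doe@example.com', 'jdoe@example.com', ...]
```

With only a name and a company domain, an attacker needs just a few guesses to reach a live inbox.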

Despite email security being a top priority for security professionals, the actual situation paints a grim picture: in the first half of 2024, email-based attacks increased 293 percent compared to 2023. 

Traditional email security solutions detect email-borne threats using techniques such as:  

  • Signatures: Is this a known threat?
  • Blacklisting: Is this email from a malicious domain?
  • Whitelisting: Is this email from a trusted domain? 
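The three checks above can be sketched as a toy gateway filter. The signature database, domain lists, and decision logic below are illustrative placeholders, not any real product's implementation:

```python
import hashlib

# Toy stand-ins for the gateway's data sources (all values hypothetical).
KNOWN_MALWARE_HASHES = {hashlib.sha256(b"malicious payload").hexdigest()}
BLOCKLISTED_DOMAINS = {"evil.example"}       # blacklisting
ALLOWLISTED_DOMAINS = {"partner.example"}    # whitelisting

def classify(sender, attachment=None):
    """Apply whitelist, blacklist, and signature checks in turn."""
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in ALLOWLISTED_DOMAINS:
        return "allow"   # whitelisting: trusted domain
    if domain in BLOCKLISTED_DOMAINS:
        return "block"   # blacklisting: known-malicious domain
    if attachment and hashlib.sha256(attachment).hexdigest() in KNOWN_MALWARE_HASHES:
        return "block"   # signature: known threat
    return "allow"       # nothing known-bad, so the message is delivered
```

Note what falls through: a BEC-style message sent from a freshly registered domain with no attachment passes every check, which is exactly the gap the next paragraphs describe.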

Modern-day attackers know the limitations of these tools and use social engineering techniques such as business email compromise (BEC) and executive impersonation to circumvent them. Traditional email-borne threats often carry weaponized attachments or macros, which leave signatures that email security gateways can detect. 

However, modern cyberattacks may not arrive with malicious attachments -- they arrive with malicious intent. Social engineers manipulate victims into giving up their access, credentials, and sensitive data. These messages carry fewer malicious signals and markers, further reducing the efficacy of security gateways. 

How AI is Advancing Email-based Threats 

Even though attackers constantly invent new ways of carrying out phishing attacks, phishing red flags used to be easy to spot. One of the most obvious is poor grammar: awkward phrases, strange sentence construction, and unusual word choices can give a phish away. These red flags are fast disappearing because AI tools (like ChatGPT) can easily clean up grammatical errors, mimic someone’s writing style, or compose highly targeted, contextual emails. 

Some may argue that AI tools are equipped with built-in security restrictions that prevent them from responding to malicious inputs. Yet threat actors have already developed jailbreaking techniques to circumvent these guardrails. There are also a number of uncensored AI tools that bad actors can use without having to worry about content moderation.  

Listed below are some best practices that can mitigate new-age, AI-powered email threats: 

  1. Better Visibility: Conduct a comprehensive assessment to understand the current efficacy of your email security. If your current tools lack visibility and are unable to provide granular security insights, then consider using AI-based email security tools that can analyze the behavior and context of emails to better detect evasive threats. 
  2. Processes and People: Two important components are needed to detect email-borne threats – processes and people. Human intuition remains more powerful than any AI system because people can apply context and judgment to how they handle, manage, and report email-borne threats. Continuous security training and education can truly serve to protect users by encouraging (and prodding) them to follow security best practices and protocols. 
  3. Detection Beyond Email: Social engineering threats extend beyond email to target employees through SMS, social media, collaboration platforms such as Microsoft Teams or Slack, malicious QR codes, counterfeit websites, deepfakes, and more. Organizations must be able to detect anomalous patterns across multiple attack surfaces. Again, this is where security training is justified, because security tools alone (without human judgment and intuition) are inherently limited. 
  4. Human Expertise: The prevalence of AI doesn’t eliminate the need for cybersecurity expertise. Security professionals understand the context and the intricacies of a threat. While AI models can be trained to detect certain types of threats, cybersecurity professionals can complement threat hunting with their own expertise and oversight. 
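As a rough illustration of the behavior-and-context analysis mentioned in the first practice above, here is a toy risk scorer. The features, weights, and keyword list are assumptions made for demonstration; real AI-based tools learn far richer behavioral signals:

```python
# Illustrative urgency cues often seen in BEC lures (not an exhaustive list).
URGENCY_WORDS = {"urgent", "immediately", "wire", "gift card", "password"}

def risk_score(email, known_senders):
    """Score an email by behavioral context rather than signatures."""
    score = 0.0
    sender = email["from"].lower()
    if sender not in known_senders:
        score += 0.4    # first contact from this sender
    display = email.get("display_name", "").lower()
    if display and display.split()[0] not in sender:
        score += 0.3    # display name doesn't match the address
    body = email["body"].lower()
    score += 0.1 * sum(word in body for word in URGENCY_WORDS)
    return min(score, 1.0)

# A signature-free BEC attempt (hypothetical) scores near the 1.0 cap:
suspicious = {"from": "ceo-office@new-domain.example",
              "display_name": "Pat CEO",
              "body": "Urgent: wire the funds immediately."}
```

An email like `suspicious` carries no attachment and no known-bad domain, so a signature-based gateway would pass it; a behavioral score flags it anyway.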

In summary, email attacks have evolved, and AI has facilitated new attack methods. On the defensive side, AI will play a big role in identifying and mitigating these emerging threats. Organizations must rely on their employees to engage their intuition, critical thinking, and common sense in support of email threat detection. 
