In February 2024, a finance employee at the multinational firm Arup received an urgent video call from the company’s Chief Financial Officer (CFO). The CFO, appearing on screen with familiar voice and mannerisms, instructed the transfer of $25 million to a new supplier. Trusting the authenticity of the request, the employee complied. It was only later discovered that the CFO had never made the call; the company had fallen victim to a sophisticated deepfake attack, where artificial intelligence (AI) was used to create a convincing but entirely fabricated video (weforum.org).
This incident underscores a growing trend: cybercriminals increasingly leveraging AI to enhance social engineering attacks, making them more convincing and harder to detect.
Social engineering attacks manipulate individuals into divulging confidential information or performing actions that compromise security. Traditionally, these attacks relied on generic phishing emails or phone calls. With advances in AI, however, attackers can now create highly personalised and realistic content, including deepfake video and audio, that closely mimics trusted individuals.
For example, AI-generated deepfake fraud cases surged in 2024, with cybercriminals using cloned voices and images to trick victims into transferring large sums of money or revealing sensitive information (incode.com).
The rise of AI-powered social engineering attacks presents several challenges for businesses and individuals alike. These threats have grown increasingly credible as deepfakes and other AI-generated content can convincingly mimic real people, making it difficult to distinguish legitimate communications from fraudulent ones. Furthermore, AI enables cybercriminals to automate and scale their operations, allowing them to produce vast numbers of highly personalised phishing messages in a fraction of the time it once took. This automation, combined with the psychological manipulation inherent in these tactics—where victims’ trust in familiar faces and voices is exploited—makes these AI-driven attacks particularly insidious and effective.
Mitigation Strategies
1. Advanced Verification Processes and Better Business Practices:
Implement multi-factor authentication and establish clear, robust protocols for verifying requests—particularly those involving financial transactions—ideally over a separate, trusted channel. These measures should be supported by sound business processes that prevent unverified actions. For instance, requiring multiple levels of approval before releasing funds removes the single point of failure that attackers rely on (see the sketch after this list). In addition, adhering to recognised standards such as ISO/IEC 27001 for information security management provides a structured framework for aligning business processes with security best practice.
2. Employee Education:
Regularly train staff to recognise signs of deepfake content and encourage scepticism of unsolicited or unexpected requests. Awareness programmes should include real-world examples, ensuring that employees understand the current threats and how to respond appropriately.
3. Collaboration and Information Sharing:
Engage with industry peers and security partners such as ITogether.com to stay informed about emerging threats and effective countermeasures. Sharing insights and experiences helps organisations anticipate and respond to new attack techniques more quickly.
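As a rough illustration of the multi-level approval idea from point 1, the short Python sketch below shows how a payment workflow can refuse to release funds until independent approvers sign off. All names here (TransferRequest, APPROVAL_THRESHOLD, the approver roles) are hypothetical examples, not part of any specific product or standard.

```python
# Minimal sketch of a dual-approval rule for outgoing payments.
# All identifiers below are hypothetical illustrations of the concept,
# not references to any real product, API, or standard.

from dataclasses import dataclass, field

APPROVAL_THRESHOLD = 10_000   # transfers at or above this amount need two approvers
REQUIRED_APPROVALS = 2


@dataclass
class TransferRequest:
    requester: str
    amount: float
    beneficiary: str
    approvals: set = field(default_factory=set)

    def approve(self, approver: str) -> None:
        # The requester can never approve their own transfer,
        # so a single convinced (or deceived) employee cannot act alone.
        if approver == self.requester:
            raise ValueError("Requester cannot approve their own transfer")
        self.approvals.add(approver)

    def can_release(self) -> bool:
        # Small transfers need one independent approval; large ones need two.
        needed = REQUIRED_APPROVALS if self.amount >= APPROVAL_THRESHOLD else 1
        return len(self.approvals) >= needed


if __name__ == "__main__":
    request = TransferRequest(requester="finance_clerk",
                              amount=25_000_000,
                              beneficiary="new_supplier")
    request.approve("finance_manager")
    print(request.can_release())   # False: still needs a second, independent approver
    request.approve("treasury_lead")
    print(request.can_release())   # True: two independent approvals recorded
```

The point of the design is that no single employee, however convincing the video call that prompted the request, can release a large payment on their own; the second approval creates a natural checkpoint for out-of-band verification.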
The Path Forward
AI’s role in cybercrime is a sobering reminder that technological advancements can be both a force for innovation and a tool for exploitation. Businesses must embrace proactive security measures, foster a culture of vigilance, and adapt their defences to meet the growing threat posed by AI-powered social engineering. By doing so, organisations can not only protect their assets and reputation, but also build resilience against a rapidly evolving landscape of cyber threats.
If you’re concerned about deepfakes, let’s talk. ITogether helps organisations stay ahead of AI-driven threats with smart security strategies, tailored training, and effective verification tools.
👉 Contact us here and let’s make your people your strongest line of defence.