CFOtech New Zealand - Technology news for CFOs & financial decision-makers

How businesses can strengthen defences against evolving cyberattacks

Fri, 21st Jun 2024

The digital landscape is undergoing a paradigm shift. Artificial intelligence (AI) is rapidly transforming industries, empowering businesses with automation, data-driven insights, and enhanced decision-making.

However, this powerful technology has also caught the eye of malicious actors, who are wielding AI to orchestrate sophisticated cyberattacks.

A growing threat
AI's integration into cybercrime strategies poses a significant challenge for businesses. Phishing campaigns, a mainstay of cyberattacks, are being weaponised with AI to craft hyper-personalised emails that mimic legitimate senders with uncanny accuracy.

The devastating potential of AI attacks was exemplified earlier this year when a Hong Kong-based finance employee, duped by a deepfake video conference, authorised fraudulent transfers totalling around US$25 million. The deepfake, featuring a realistic impersonation of the company's CFO, highlights the chilling potential of AI-powered social engineering tactics.

Beyond video manipulation, voice cloning is another emerging threat. Tools now require a mere 30 seconds of audio to generate convincing voice forgeries, capable of uttering fabricated statements with unsettling believability.

A recent Ping Identity survey underscores this growing concern, with 54% of respondents identifying AI as the area of most significant cybersecurity apprehension. This surpasses traditional threats like malware, phishing attacks, and credential compromise, all of which remain prevalent.

AI's role in amplifying existing threats
AI is not merely creating new attack vectors; it is also amplifying the effectiveness of existing tactics. From classic romance scams on social media to fraudulent wire transfers targeting homebuyers, AI is making these cons more believable and pervasive.

Deepfakes lend a veneer of legitimacy to fabricated voices and visuals, significantly increasing the likelihood of a victim falling prey.

The financial implications are particularly concerning. In many jurisdictions, regulations mandate compensation for victims of scams, placing increased pressure on banks and financial institutions to bolster customer protection measures.

Security tools falling short in the AI era
The fight against AI-powered cybercrime necessitates a re-evaluation of traditional security tools. Many legacy systems suffer from limitations that render them inadequate in the face of this evolving threat landscape. Reasons for this include:

  • Static vs. dynamic: Legacy tools often rely on a one-time assessment of user identity and risk level, failing to dynamically monitor ongoing activity. Modern security solutions, on the other hand, continuously track user behaviour across IT infrastructure, providing a more comprehensive picture.
  • Real-time limitations: Traditional systems frequently lack real-time capabilities. By the time a fraudulent transaction is detected, recouping funds becomes a monumental challenge. Real-time threat detection and mitigation are crucial for safeguarding financial assets.
  • Liveness detection: While deepfakes can deceive humans, robust security tools should be able to distinguish between a real person and a synthetic image or video. Liveness detection technology plays a vital role in mitigating deepfake-based exploits.
  • Integration woes: Disparate security tools operating in silos often impede information sharing, obstructing the creation of a holistic view of potential threats. Integrated security platforms enable data exchange, fostering a more co-ordinated defence strategy.
  • Inflexible mitigation options: Traditional tools may offer limited response options, leaving security teams with blunt instruments to address complex attacks. Modern AI-powered solutions provide a more granular approach, allowing for tailored responses to specific threats.
  • Lack of AI integration: The absence of AI capabilities in legacy systems leaves them ill-equipped to contend with AI-driven attacks. Integrating AI into security solutions empowers teams to analyse threat patterns and anomalies effectively, enabling proactive identification and mitigation of risks.
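To make the first limitation above concrete, the contrast between a one-time identity check and continuous monitoring can be sketched in a few lines. The following is a hypothetical illustration only: the signal names, weights, and thresholds are invented for the sketch and do not reflect any particular vendor's product.

```python
from dataclasses import dataclass, field

# Invented risk signals and weights, purely for illustration.
RISK_WEIGHTS = {
    "new_device": 30,
    "new_country": 40,
    "off_hours": 15,
    "large_transfer": 35,
}
BLOCK_THRESHOLD = 60
DECAY = 10  # routine activity gradually lowers accumulated risk


@dataclass
class UserRisk:
    score: int = 0
    history: list = field(default_factory=list)


class ContinuousRiskMonitor:
    """Re-scores a user on every event, rather than once at login."""

    def __init__(self):
        self.users = {}

    def observe(self, user: str, signals: set) -> str:
        """Update the user's risk score from this event's signals and
        return an action: 'allow', 'step_up' (re-authenticate) or 'block'."""
        state = self.users.setdefault(user, UserRisk())
        added = sum(RISK_WEIGHTS.get(s, 0) for s in signals)
        if added == 0:
            # Normal activity: decay the score instead of raising it.
            state.score = max(0, state.score - DECAY)
        state.score += added
        state.history.append((signals, state.score))
        if state.score >= BLOCK_THRESHOLD:
            return "block"
        if state.score >= BLOCK_THRESHOLD // 2:
            return "step_up"
        return "allow"
```

A static legacy check would have evaluated only the first event; here, a later combination of unfamiliar signals can still escalate the same session from "allow" to "block" mid-activity.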

Turning the tables on cybercriminals
Despite these challenges, AI offers a powerful countermeasure to AI-driven cybercrime. AI can be leveraged to fortify defences in several ways:

  • Tackling unknown threats: AI's ability to analyse vast amounts of data and identify patterns enables it to detect and respond to novel threats that traditional signature-based systems might miss. This proactive approach can significantly bolster security posture.
  • Enhanced threat intelligence: AI can be used to analyse threat intelligence feeds, gleaning insights from dark web chatter, malware samples, and security research findings. This comprehensive threat picture empowers security teams to anticipate and prepare for emerging attack methods.
  • Automating security tasks: AI can automate mundane security tasks like log analysis and incident response, freeing up valuable time for security personnel to focus on strategic initiatives and complex investigations.
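As a toy example of the automation described above, the routine triage of log data can be reduced to a few lines. The sketch below uses a simple statistical baseline as a stand-in for a trained model, flagging hosts whose failed-login counts deviate sharply from the fleet average; the data and threshold are invented for illustration.

```python
import statistics

def flag_anomalies(failed_logins: dict, z_threshold: float = 2.0) -> list:
    """Return hosts whose failed-login count sits more than z_threshold
    standard deviations above the mean across all hosts."""
    counts = list(failed_logins.values())
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts)  # population std dev of the fleet
    if stdev == 0:
        return []  # no variation, nothing stands out
    return sorted(
        host for host, n in failed_logins.items()
        if (n - mean) / stdev > z_threshold
    )
```

Run periodically against aggregated logs, even a crude filter like this surfaces outliers for human review, freeing analysts from scanning raw counts by hand; a production system would replace the z-score with a model that accounts for seasonality and host roles.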

An ongoing arms race
The battle lines are drawn. As AI continues to evolve, so too will the sophistication of AI-powered cyberattacks. Businesses must stay ahead of the curve by embracing AI-powered security solutions of their own.

By integrating these solutions, businesses can build a robust defence against the evolving threats emerging in the digital age.
