CFOtech New Zealand - Technology news for CFOs & financial decision-makers
Corporate security operations room flat vector wall screens data

AI, compliance and security trends for 2026

Wed, 10th Dec 2025

Fast growing AI startup ORCA Opti has frontline experience when it comes to the AI innovation impacting businesses across Australia. The company's CTO, Phoenix Guy, and Founder and Managing Director, Kathryn Giudes, have identified several critical trends that will shape how organisations approach AI adoption, security and compliance in the year ahead.

Trust Will Become the Deciding Product Feature

According to Kathryn Giudes, Founder and Managing Director of ORCA Opti, The acceleration of technology adoption has reached unprecedented levels. While the telephone took decades to reach mainstream use, ChatGPT achieved widespread adoption in just six months. This compression creates both opportunity and risk.

Giudes argues that trust has emerged as the primary competitive differentiator. Organisations that can demonstrate robust privacy protections, secure trade secrets and maintain compliance without sacrificing speed will win customers and partnerships.

"Trust is the new product feature," Giudes says. "Customers and partners will choose teams who protect privacy, keep trade secrets safe and maintain compliance without slowing down delivery. In the agentic-AI era, trust isn't a checkbox. It's your competitive edge."

Real-Time Compliance Will Replace Manual Reporting Cycles

Traditional compliance models create significant bottlenecks. Organisations typically spend weeks on documentation and face five-figure costs per assessment cycle. Giudes predicts this model will give way to automated, real-time compliance systems.

Systems built on ISO 9001 and ISO 27001 foundations can shift organisations from monthly manual reporting to continuous conformance monitoring. The benefits are substantial.

"Organisations can see up to 90% reduction in compliance overhead while maintaining audit readiness, translating to more than $200,000 annual savings for organisations of about 100 people," Giudes notes. "This is based on implementations to date; results vary by environment and scope, but the direction is clear: automation reduces friction without reducing rigour."

Security and Innovation Cycles Will Compress Into Single-Day Operations

The traditional model of sequential phases (innovate, then secure, then quality assure) cannot survive in an environment where releases happen weekly or daily. Giudes predicts successful organisations will collapse these timelines dramatically.

She points to practical implementations as evidence this shift is already underway. For the State Library of Queensland's Virtual Veterans project, ORCA Opti built Charlie, a WWI conversational agent that handled more than 10,000 attempted prompt-injection attacks in the first 72 hours while maintaining stable character integrity across 50,000 interactions.

"The future belongs to teams who compress innovation, security validation and quality assurance into the same day," Giudes says.

AI Will Move From Informational to Operational

Agentic AI systems that perform tasks rather than simply answer questions represent a fundamental shift in how organisations operate. These systems are already drafting contracts, triaging incidents and summarising risk.

This operational shift changes the compliance equation. Giudes sees privacy evolving from policy to daily habit, security shifting from annual audits to live monitoring, and compliance moving from manual reporting to automated assurance.

"When AI moves from 'informational' to 'operational,' the stakes rise," she says. The implication is that organisations must build guardrails into their systems from the beginning, not retrofit them later.

AI Agent Decisions Will Cause Major Brand Damage

Phoenix Guy, CTO of ORCA Opti, works exclusively with AI agents from morning to night, building solutions that follow operating procedures and security principles. This daily immersion has given him an unvarnished view of both AI's capabilities and its flaws.

His primary prediction stems directly from this experience: AI agents making autonomous business decisions will cause significant reputational damage to at least one major brand within 18 months.

"My predictions are there will be at least one major brand who is going to get some very adverse reputational damage in the market due to a decision made by an AI bot," Guy says. "I would be very surprised if that doesn't happen at least once over the next 18 months. It will likely be more than that."

Guy sees a disconnect between AI's brilliance and its judgment. While AI can be the most talented tool he's worked with in terms of knowledge, it also sometimes does things that even junior developers would avoid.

"AI is simultaneously brilliant and sometimes does really stupid things. When you're seeing the brilliant side of it, you go, 'Wow, this is amazing therefore I should trust it', which is a concern we need to be aware of," he says. 

Guy predicts the fallout will trigger broader market reactions, including a backlash that prompts organisations to review their AI usage policies across the board.

Cyber Attacks Using AI Tools Will Breach Major Organisations

The second prediction focuses on cyber security. Guy expects AI to amplify both offensive and defensive capabilities, but believes defence will lag dangerously behind.

"Over the next 18 months there's going to be at least one major brand who gets attacked from black hat actors who are using really really powerful AI tools to really allow them to just find any vulnerabilities that are there," he says.

"We're also going to see a lot more AI not only on the attack side but also on the defence side in the world of cyber security. But I think the defence side is probably going to lag a little bit." 

Deep Fake Social Engineering Scams Will Escalate

Guy's third prediction concerns the weaponisation of deep fake technology for fraud and manipulation. The shift has already begun, moving from theoretical concern to active threat.

"I think there's over $25 million worth of known deep fake scams having occurred over the past 12 months," he says. "So, this is on the rise and it's only going to get worse."

The applications range from financial fraud targeting individuals to broader efforts at swaying political opinion. Guy sees this evolving from an interesting technical problem into a security imperative.

"It's taken the shift from deep fakes being a problem in theory, to becoming a very real security imperative. That will occur quite soon," he says.

The underlying challenge, according to Guy, is that organisations and individuals are not yet equipped to distinguish sophisticated AI-generated content from authentic material. As the technology improves and becomes more accessible, the vulnerability window expands.