
Greater AI literacy & policies needed among Kiwi business leaders

Wed, 11th Oct 2023

A new survey is warning business leaders about gaps in their awareness of Artificial Intelligence (AI) use within the workplace. With younger employees adopting AI tools at an increasing rate, cyber security experts are calling on leaders to understand the risks and repercussions of AI misuse within their workforce.

The independent survey, commissioned by Kordia and conducted by Perceptive, revealed that 53% of Gen Z respondents had used generative AI tools like ChatGPT or DALL-E, with over a quarter indicating that they use such services in their place of work. Alarmingly, however, just one in five is aware of the cyber security and privacy threats connected to AI.

According to Alastair Miller, Principal Consultant at Aura Information Security, the older generation of business leaders needs to "upskill their knowledge and become AI literate," given that Gen Z employees are already using these tools widely. Miller highlighted the necessity of ensuring younger employees are fully briefed on what constitutes acceptable AI usage and the risks that arise if it is used improperly.

The study also draws attention to high-profile incidents such as Samsung's case, where employees inadvertently leaked source code and internal meeting minutes while using a public AI tool, ChatGPT, to summarise information. "Once data is entered into a public AI tool, it becomes part of a pool of training data – and there's no telling who might be able to access it. That's why businesses need to ensure generative AI is used appropriately, and any sensitive data or personal information is kept away from these tools," explains Miller.

Furthermore, the survey underscores the lack of public awareness regarding the full spectrum of issues related to the use of generative AI tools. Notably, only one in five respondents across all age groups indicated awareness of any cyber security or privacy risks associated with AI. Additionally, fewer than a third were concerned that AI may produce inaccurate information or be used in misinformation campaigns.

Commenting on the survey results, Miller cautioned against entrusting data sets to public generative AI tools. He advised taking special care even with private AI tools, and stressed the need to conduct a security assessment before entrusting company data to them, to ensure protective mechanisms are in place.

While highlighting the significant risks connected with AI, Miller also recognised the benefits AI brings to businesses in terms of innovation and productivity. However, he emphasised that the best way to use AI safely is to draft an AI policy that outlines strategic AI usage and provides guidelines for employees. The survey report stated that only 12% of businesses have such policies in place, suggesting significant progress is still needed in this area.
