Ivanti research reveals escalating 'AI anxiety' in workforce
Research released by Ivanti has highlighted growing concern among office and IT workers: 36% of employees fear that generative Artificial Intelligence (AI) tools will replace their jobs. This concern, termed 'AI anxiety', is increasingly prevalent across society and the workforce.
While office workers largely believe that generative AI will benefit their employers rather than individual employees, IT workers express a higher level of anxiety: 36% fear job loss to AI tools within the next five years, a figure 17 percentage points higher than that observed among office workers.
Nearly half (48%) of office workers believe AI will be advantageous for employers, but only 8% think it will benefit individual employees. Just 10% of knowledge workers expect AI to bring about a "high improvement" in productivity.
There is also a significant gap between the views of executives and IT professionals. While 60% of the C-suite think AI will boost employee productivity, fewer than half (47%) of IT workers share this belief. This divergence reveals a lack of alignment on how AI will reshape the workplace.
Meanwhile, company leaders in the same study said the top benefits of AI include automation of mundane tasks (62%) and improved employee productivity (60%), underlining the gap in outlook between leadership and the workforce. Dr Srinivas Mukkamala, Chief Product Officer at Ivanti, noted that employees are clearly divided on what AI means for their careers.
Dr Mukkamala emphasises that proclaiming the virtues of AI is not enough: organisations need a clear communication strategy setting out how AI will affect employee experience, productivity, and career progression.
"Without employee support and oversight of generative AI, companies will be slow to leverage the gains and may face unintended consequences without necessary human oversight," said Dr. Mukkamala.
Dr Mukkamala, who has previously advised the US Congress on AI, recommends that organisations closely monitor the impact of AI through a dedicated internal task force (as Ivanti has done), design a stringent code of AI ethics, make AI a fundamental part of their CSR initiatives, introduce ethical AI programmes, and develop a robust AI-specific talent development programme.
In the report, Ivanti emphasises the need to identify the roles in an organisation most affected by generative AI, survey those individuals to gauge their willingness to learn new skills, and offer them choices for learning and advancement.
"All organisations need to ensure the security, safety, and privacy of the data that is collected and fed into AI systems," continued Dr Mukkamala.
"Besides protecting the data against misuse, malicious intent, or threat actors, erecting necessary guardrails against AI bias, data bias, model or algorithmic bias, and human bias are equally pivotal. It is essential to establish global trust with proactive security and AI system resilience."