Enterprises struggle to address generative AI’s security implications - Artificial Intelligence - News

Cloud-Native Network Detection and Response Firm Unveils Concerns over Employee Generative AI Use

A recent study by cloud-native network detection and response firm ExtraHop has revealed a concerning trend: enterprises are struggling with the security implications of employee generative AI use.

Enterprises Facing Challenges as Generative AI Technology Becomes More Prevalent

The new research report sheds light on the challenges organisations face as generative AI technology becomes more prevalent in the workplace.

Cognitive Dissonance Among IT and Security Leaders

The report reveals a significant cognitive dissonance among IT and security leaders: 73% admitted that their employees frequently use generative AI tools or large language models (LLMs) at work. However, the majority confessed to being uncertain about how to effectively address the associated security risks.

Concerns and Solutions

When questioned about their concerns, IT and security leaders expressed more worry about the possibility of inaccurate or nonsensical responses (40%) than about critical security issues such as exposure of customer and employee personally identifiable information (PII) (36%) or financial loss (25%).

Raja Mukerji, Co-Founder and Chief Scientist at ExtraHop, commented:

“By blending innovation with strong safeguards, generative AI will continue to be a force that will uplevel entire industries in the years to come.”

Ineffectiveness of Generative AI Bans

About 32% of respondents stated that their organisations had prohibited the use of these tools. However, only 5% reported that employees never used them, indicating that bans alone are not enough to curb usage.

Desire for Guidance

The study highlights a clear desire for guidance, particularly from government bodies. A significant 90% of respondents expressed the need for government involvement, with 60% advocating for mandatory regulations and 30% supporting government standards for businesses to adopt voluntarily.

Gaps in Basic Security Practices

Despite a sense of confidence in their current security infrastructure, the study revealed gaps in basic security practices. While 82% felt confident in their security stack’s ability to protect against generative AI threats, fewer than half had invested in technology to monitor generative AI use. Alarmingly, only 46% had established policies governing acceptable use, and merely 42% provided training on the safe use of these tools.

Rapid Adoption of Technologies

The findings come in the wake of the rapid adoption of technologies like ChatGPT, which have become an integral part of modern businesses. Business leaders are urged to understand their employees’ generative AI usage in order to identify potential security vulnerabilities.

ExtraHop Generative AI Report

A full copy of the report is available from ExtraHop.


By Kevin Don

Hi, I'm Kevin and I'm passionate about AI technology. I'm amazed by what AI can accomplish and excited about the future with all the new ideas emerging. I'll keep you updated daily on all the latest news about AI technology.