NCSC: Chatbot ‘prompt injection’ attacks pose growing security risk - Artificial Intelligence - NewsNCSC: Chatbot ‘prompt injection’ attacks pose growing security risk - Artificial Intelligence - News

Title: NCSC Issues Alert on Rising Threats of Hacker Manipulation in Chatbots

The (NCSC) in the UK has issued a warning over the escalating risks of chatbots being manipulated by hackers, leading to potential realworld consequences.

**Hackers Targeting Chatbot Vulnerabilities**

The alert comes as concerns rise over the practice of “prompt injection” attacks, where individuals deliberately create input or prompts designed to manipulate the behavior of language models that underpin chatbots.

**Integral Role in Online Applications**

Chatbots have become integral to various applications such as contact banking and shopping due to their capacity to handle simple requests. Large language models (LLMs), including those powering OpenAI’s ChatGPT and Google’s ai chatbot Bard, have been trained extensively on datasets that enable them to generate human-like responses to user prompts.

**Malicious Prompt Injection Risks**

The NCSC has highlighted the risks associated with malicious prompt injection, as chatbots often facilitate the exchange of data with third-party applications and services. Organizations need to be cautious about these vulnerabilities, just like they would if dealing with beta products or libraries.

**Malicious User Inputs and Exploits**

If users input unfamiliar statements or exploit word combinations to override a model’s original script, the model can execute unintended actions. This could potentially lead to:
– Generation of offensive content
– Unauthorized access to confidential information
– Data breaches

**Businesses’ Responsibility in Mitigating Risks**

Oseloka Obiora, CTO at CybSafe, stressed the importance of necessary due diligence checks to minimize the consequences of businesses’ rush to embrace ai. Chatbots have already proven susceptible to manipulation and hijacking for rogue commands, potentially leading to a sharp rise in fraud, illegal transactions, and data breaches.

**Case Study: Microsoft Bing and ChatGPT**

Microsoft’s release of a new version of its Bing search engine and its chatbot drew attention to these risks. A Stanford University student, Kevin Liu, employed prompt injection to expose Bing Chat’s initial prompt. Additionally, security researcher Johann Rehberger discovered that ChatGPT could be manipulated to respond to prompts from unintended sources, opening up possibilities for indirect prompt injection vulnerabilities.

**Mitigating Chatbot Vulnerabilities**

The NCSC advises implementing a holistic system design that considers the risks associated with machine learning components. Combining a rules-based system alongside the machine learning model can help prevent potentially damaging actions and thwart malicious prompt injections.

**Understanding Attacker Techniques**

Mitigating cyberattacks stemming from machine learning vulnerabilities requires understanding the techniques used by attackers and prioritizing security in the design process.

**Jake Moore’s Comment**

Jake Moore, Global Cybersecurity Advisor at ESET, emphasized the importance of security in machine learning applications: “When developing applications with security in mind and understanding the methods attackers use to take advantage of weaknesses in machine learning algorithms, it’s possible to reduce the impact of cyberattacks stemming from ai and machine learning.”

**Conclusion: Guarding Against Evolving Threats**

As chatbots continue to play an integral role in various contact interactions and transactions, the NCSC’s warning serves as a timely reminder of the imperative to guard against evolving cybersecurity threats.

By Kevin Don

Hi, I'm Kevin and I'm passionate about AI technology. I'm amazed by what AI can accomplish and excited about the future with all the new ideas emerging. I'll keep you updated daily on all the latest news about AI technology.