• Skip to primary navigation
  • Skip to main content
  • Skip to footer
Cyara

Cyara

Cyara Customer Experience Assurance Platform

  • Login
  • Contact Us
  • Request a demo
  • Login
  • Contact Us
  • Request a Demo
  • Why Cyara
    • AI-Led CX Assurance Platform
    • AI vision for CX
    • Cyara partner network
    • Cyara Academy
  • Solutions
    • Transform
          • TRANSFORM – Drive CX Change

          • Functional, regression, & objective testing | Cyara Velocity
          • Performance testing | Cyara Cruncher
          • See all use cases >
          • Cyara platform - Transform - Drive CX change
    • Monitor
          • MONITOR – Assure CX Journeys

          • CX monitoring | Cyara Pulse
          • Telecom assurance | Cyara Voice Assure
          • CX & telecom monitoring | Cyara Pulse 360
          • Call ID line assurance | Cyara Number Trust
          • Agent environment assurance | Cyara ResolveAX
          • See all use cases >
          • Cyara platform - Monitor - Assure CX journeys
    • Optimize
          • OPTIMIZE — Leverage AI for CX

          • Conversational AI optimization | Cyara Botium
          • Generative AI assurance | Cyara AI Trust
          • See all use cases >
          • Cyara platform - Optimize - Leverage AI for CX
    • Connect
          • CONNECT — Assure WebRTC CX

          • WebRTC optimization | Cyara testRTC
          • WebRTC monitoring | Cyara watchRTC
          • WebRTC quality assurance | Cyara qualityRTC
          • See all use cases >
          • Cyara platform - Connect - Assure WebRTC CX
  • Resources
    • CX Assurance blog
    • Customer success showcase
    • CX use cases
    • Events & upcoming webinars
    • On-demand webinars
    • Resource library
    • Customer community
  • About Us
        • About Cyara

        • About Cyara
        • Leadership
        • Careers
        • Legal statements, policies, & agreements
        • Services

        • Cyara Academy
        • Consulting services
        • Customer success services
        • Technical support
        • News

        • CEO’s desk
        • Press releases
        • Media coverage
        • Cyara awards
        • Partners

        • Partners

Blog / CX Assurance

March 21, 2024

Bot Misuse Can Quickly Go from Humorous to Harmful 

Lorraine Kelleher

We’ve all heard the unexpected stories of how people are interacting with bots. Often these interactions can produce surprising results. From a bot badmouthing its own company, or promoting a competitor’s offering, the results of bot misuse can be misleading or even completely false. 

Discover how Cyara helps businesses assure quality through the entire chatbot development.

One bot dressed as a devil, one bot dressed as an angel

If not tested and monitored correctly, these harmless and unintended interactions can quickly open the door to more extreme examples. Users can also compound this problem when they deliberately misuse and push organizational chatbots to—and sometimes past—their limits. While this may be done for some light entertainment or curiosity, the ramifications can easily become very serious. 

User Inputs are Unpredictable 

Many enterprises use large language model (LLM) powered bots for their efficiency in addressing a wide range of inquiries and prompts. However, they have no control over what types of prompts users will enter. Without specific guardrails and testing, LLM-powered bots are now capable of responding to almost any prompt they receive. 

But is this advisable? 

Maybe a banking bot, providing information about the weather or the score of the Super Bowl isn’t problematic. It could even be considered small talk, enhancing the bot’s human-like qualities and improving the customer experience. But what if the user asks something that the bots developers would never have expected, and that the bank would never want answered? 

When a supermarket chain released a chatbot enabling customers to input ingredients for recipe suggestions, they never foresaw the wide array of ingredients users would enter. The artificial intelligence (AI) was soon producing a variety of unexpected suggestions, including an ‘Oreo vegetable stir-fry.’  

But, as more users discovered the bot, concerns over its potential grew. This was highlighted by one user receiving instructions to create deadly chlorine gas, with the bot describing it as an ‘aromatic water mix.’ 

Where Does Responsibility Lay? 

While we hope that users would not follow the instructions to mix water, bleach, and ammonia and create a deadly concoction, what would happen if they did? Or what if the user was a vulnerable person, or a child who may not know that they shouldn’t follow the recipe exactly? 

A recent civil-resolution tribunal case emphasized issues surrounding responsibility and liability. An airline’s chatbot which was integrated to their website, incorrectly informed a customer that they were entitled to a post-payment discount. However, the company’s policy explicitly stated that any such claim must be made before booking. In court, the airline attempted to argue that the chatbot was a “separate legal entity that is responsible for its own actions.” However, the tribunal rejected that argument stating that the airline is ultimately responsible for all information on its website.  

While this airline example is one of accuracy and not misuse, it does set the standard for liability resting with the respective company when it comes to problems with their bots. Additionally, it highlights the risks for businesses leaning too heavily on untested and unmonitored AI. For example, what would happen if a customer used a jailbreak to override a retail chatbot, and the bot subsequently suggested they take 20 painkillers at once to alleviate a headache? If this occurred, legal precedence suggests that the organization is likely to be responsible for any adverse effects as the advice was provided by their bot. 

ChatGPT’s Jailbreak Phenomenon 

When ChatGPT gained popularity, users quickly tested its limits and challenged the guardrails set by OpenAI. These included a jailbreak scenario where the AI was urged to break free from its constraints. This was facilitated by its alter ego, dubbed DAN (Do-Anything-Now)! 

Put simply, jailbreaking is a form of bot misuse. It involves exploiting flaws in a system’s software to enable actions restricted by the developers’ guardrails. Typically, the objective is to get the LLM bot to generate content that violates its predefined usage policy or ethical guidelines. This is commonly achieved through subtle narrative framing, role-playing, or encoding strategies. Sometimes, jailbreaking is conducted purely for entertainment purposes, but it could also be for more malicious reasons. This may include prompts created specifically to gain access to sensitive information. 

Unfettered by established rules, ChatGPT’s alter ego, DAN was able to provide answers on any topic, including how to smuggle drugs. It even proposed solutions to reduce global overpopulation, advocating for the enforcement of strict restrictions by “any means necessary.” 

Preventing LLM Misuse 

Since the early days of LLM-powered bots, OpenAI and other established organizations have dedicated huge resources and investments to establishing robust guardrails. And this will continue to be an ongoing and evolving process to thwart bot misuse and prevent emerging jailbreaks. 

And while these organizations continually prioritize security and strive to prevent potential harm, the question remains—can companies relying on LLM-powered bots ensure their safety? 

Additionally, many businesses are now choosing to create their own LLM-powered bots via open-source offerings. While this gives them increased flexibility, the adequacy of their guardrails becomes a pressing concern. Are developers implementing comprehensive safeguards capable of averting such incidents of misuse? And how do they detect, analyze, and understand instances to prevent recurrence? 

AI and bots, particularly those driven by LLMs, possess significant potential for streamlining operations, cutting costs, and enhancing customer experiences. But this is contingent on correct testing and monitoring of bot accuracy, security, privacy, and the identification of any incidents of misuse or bias. Organizations deploying bots must establish clear policies, robust guardrails, and vigilant monitoring protocols. Additionally, they should define acceptable conversational boundaries for their bots. 

Failure to implement these measures may result in bot misuse. This will inevitably lead to adverse outcomes, including increased costs, damaged brand reputation, diminished trust, and negative customer experiences. 

Read more about: AI Chatbot Testing, Chatbot Assurance, Chatbot Testing, Chatbots, Cyara Botium, Large Language Models (LLMs)

Ready for seamless CX assurance?

Learn how Cyara’s AI-led CX productivity, growth, and assurance engine can help you eradicate bad CX.

Speak to an expert
Office view with Cyara dashboard
Office view with Cyara dashboard

Related Posts

conversational AI testing

August 28, 2025

Automated Testing for Conversational AI: A Game-Changer in Customer Support

The rise of AI-powered CX offer many key benefits... and risks. Learn how to ensure CX quality with a conversational AI testing solution.

Topics: AI Chatbot Testing, Artificial Intelligence (AI), Automated Testing, Chatbots, Conversational AI, Conversational AI Testing, Customer Experience (CX)

chatbot testing

July 31, 2025

How Chatbot Testing Helps You Accelerate CX Innovation and Deliver Exceptional Interactions

Poorly optimized chatbots leave you vulnerable to a wide range of risks. Start accelerating CX innovation confidently with chatbot testing.

Topics: AI Chatbot Testing, Artificial Intelligence (AI), Automated Testing, Chatbot Assurance, Chatbot Testing, Contact Centers, Customer Experience (CX)

chatbot testing services

June 19, 2025

9 Types of Chatbot Testing to Ensure Consistency, Accuracy, and Engagement

Deliver faster, more efficient, and reliable customer interactions by conducting these 9 types of chatbot testing.

Topics: AI Chatbot Testing, Automated Testing, Chatbot Assurance, Chatbot Testing, Chatbots

Footer

  • AI-Led CX Assurance Platform
    • Cyara AI Trust
    • Cyara Botium
    • Cyara CentraCX
    • Cyara Cloud Migration Assurance
    • Cyara Cruncher
    • Cyara Number Trust
    • Cyara probeRTC
    • Cyara Pulse
    • Cyara Pulse 360
    • Cyara qualityRTC
    • Cyara ResolveAX
    • Cyara testingRTC
    • Cyara testRTC
    • Cyara upRTC
    • Cyara Velocity
    • Cyara Voice Assure
    • Cyara watchRTC
  • Use cases
    • Agent desktop testing
    • Cloud contact center monitoring
    • Contact center number test types
    • Contact center testing
    • Continuous testing
    • Conversational AI testing
    • CX monitoring
    • DevOps for CX
    • Email & SMS testing
    • Functional testing
    • Incident management
    • IVR discovery
    • IVR testing
    • Load & performance testing
    • Omnichannel testing
    • Outbound call testing
    • Regression testing
    • Voice biometrics testing
    • Voice of the customer
    • Voice quality testing
    • Web interaction testing
  • Resources
    • CX Assurance blog
    • Customer success showcase
    • Events & upcoming webinars
    • Resource library
    • On-demand webinars
    • Cyara portal & support site access
    • Customer community
  • About us
    • About Cyara
      • About us
      • Leadership
      • Careers
      • Cyara awards
      • Legal statements, policies, & agreements
    • Services
      • Cyara Academy
      • Consulting services
      • Customer success services
      • Technical support
    • News
      • CEO’s desk
      • Press releases
      • Media coverage
    • Partners
      • Partners
      • Integration & technology partners
      • Platform Integrations
  • LinkedIn
  • Twitter
  • YouTube

Copyright © 2006–2025 Cyara® Inc. The Cyara logo, names and marks associated with Cyara’s products and services are trademarks of Cyara. All rights reserved. Privacy Statement