February 2, 2022

How to Gain a Deeper Understanding of Your NLP Engine to Influence the Performance of Your Training Data

Alison Houston, Data model analyst

This article was originally published on QBox’s blog, prior to Cyara’s acquisition of QBox. Learn more about Cyara + QBox.


NLP systems have made great strides, but chatbot trainers still face one fundamental challenge: getting an NLP model to perform at its best. In this blog post, our expert chatbot trainer Alison Houston shows you how.

Cyara empowers businesses to develop more effective chatbots with our conversational AI optimization solution.


Understanding NLP Working Principles

First things first: NLP doesn’t “read” and “understand” language and conversations in the same way humans have learnt to read and understand them. It’s easy for chatbot trainers to fall into the trap of believing that because an utterance makes sense to them, their model will understand it with clarity and identify the correct intent with confidence.

NLP engines such as Lex, Dialogflow, and Rasa need a qualitative approach to the training data.

You can think of the way they work as transfer learning: a machine-learning method that reuses knowledge learned on one task as the basis for learning a similar one.
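To make that idea concrete, here is a minimal sketch. It assumes the third-party sentence-transformers package and its pretrained all-MiniLM-L6-v2 model; the intents and utterances are invented for illustration, and this is not how any particular NLP engine is implemented. A pretrained sentence encoder supplies the previously learned knowledge, and a handful of utterances per intent adapts it to a new task:

```python
# Sketch of the transfer-learning idea for intent classification: a
# pretrained sentence encoder carries previously learned knowledge, and a
# few utterances per intent adapt it to a new task. Assumes the third-party
# sentence-transformers package; intents and utterances are invented, and
# this is not how any particular NLP engine is implemented.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # pretrained knowledge

training = {
    "check_balance": ["what's my balance", "how much money do I have"],
    "transfer_funds": ["send money to my savings", "move cash between accounts"],
}

# Represent each intent by the mean embedding of its training utterances.
centroids = {
    intent: encoder.encode(utterances).mean(axis=0)
    for intent, utterances in training.items()
}

def classify(utterance: str) -> str:
    """Return the intent whose centroid is closest (by cosine similarity)."""
    vec = encoder.encode([utterance])[0]
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(centroids, key=lambda intent: cosine(vec, centroids[intent]))

print(classify("could you tell me my account balance"))  # -> check_balance
```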

Simply adding more and more training data is not the best way to fix weaknesses in chatbot performance. In fact, it is more likely to make performance worse: too much diversity can overfit or unbalance your model, and training on too many examples can render it ineffective.

Carefully curated training data is one of the key attributes of good performance. But more importantly, chatbot trainers need to understand the learning value of each utterance they add to their model.

The optimum number of utterances is very difficult to pinpoint, because it’ll depend on a number of factors such as other intents, their “subject closeness,” their number of utterances, and so on.

But as general guidance, 15 to 20 utterances is a good starting point; start to be cautious when you reach the 50- or 60-utterance mark. We have an existing blog post on utterance generation that you may find useful.
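A quick way to act on that guidance is to audit your training data. This is a minimal sketch: the dict-of-lists format is hypothetical rather than any engine's export format, and the thresholds simply mirror the rough numbers above:

```python
# Minimal sketch: flag intents whose utterance counts fall outside the
# rough guidance above (15-20 as a starting point, caution past 50-60).
# The dict-of-lists format is hypothetical, not any engine's export format.

training_data = {
    "check_balance": ["what's my balance", "how much money do I have"],
    "transfer_funds": ["send money to my savings", "move cash between accounts"],
}

for intent, utterances in training_data.items():
    count = len(utterances)
    if count < 15:
        print(f"{intent}: only {count} utterances -- consider adding more")
    elif count >= 50:
        print(f"{intent}: {count} utterances -- watch for overfitting or imbalance")
```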

How Can You Influence NLP Performance?

Broadly speaking, there are two categories of NLP engine:

  1. The ones with maximum control, where you can tune almost all parameters, control where the data is, etc. These are great, but only hard-core data scientists and development teams will make the most of them. Such engines also require you to manage the tech stack, and do the upgrading, scaling, and hosting yourself. Rasa is one example of this category of engine.
  2. The ones for minimum investment, provided by the most renowned NLP providers, where you benefit from the latest and most innovative advancements and improvements in NLP. Your only influence on performance is your training data. This category of NLP engine includes LUIS, Lex, Watson, and others.

Whichever NLP engine you choose to use, your training data is key to unlocking performance.

So, you are inevitably going to wonder how to maximize the impact of your training data by asking questions such as:

  • Should I repeat this concept twice?
  • Is five times too many?
  • Would three times be the optimum amount to gain maximum learning power for my model?
  • How many concepts can I cover in one intent before the intent is deemed too wide?
  • How should my utterances be structured?
  • Should they be as short as possible—or longer, to cover more meaning? How much variance should I give to each utterance?

An experienced chatbot trainer will know the answers to all these questions if they have a true understanding of the influence and learning value their training data has on their model.

And to gain that understanding, they use techniques to measure their model's performance.

How Do You Measure the Quality of Your Training Data?

Your training data needs to be assessed and analyzed to measure its quality. Techniques like preparing held-out test data (also called blind data) and cross-validation are very effective, but also time-consuming.

K-fold cross-validation is not ideal while you are actively building your model, because the random partitioning at the heart of the k-fold algorithm creates performance changes of its own, making it hard to attribute a change in results to your changes to the training data. Leave-one-out is another technique you may want to investigate.

Ultimately, you need to find a systematic way to measure your model.
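For example, here is a minimal leave-one-out sketch using scikit-learn. The TF-IDF plus logistic-regression pipeline is a stand-in for whatever intent model you actually use, and the utterances are invented. Each utterance is held out in turn, the model is trained on the rest, and the held-out utterance is scored:

```python
# Minimal leave-one-out sketch with scikit-learn. The TF-IDF plus
# logistic-regression pipeline is a stand-in for your real intent model;
# the utterances and intents are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline

utterances = [
    "what's my balance", "how much do I have", "show my account balance",
    "send money to savings", "transfer funds", "move cash to my other account",
]
intents = ["balance", "balance", "balance", "transfer", "transfer", "transfer"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))

# Each fold holds out exactly one utterance; its score is 1 if the intent
# is predicted correctly after training on all the remaining utterances.
scores = cross_val_score(model, utterances, intents, cv=LeaveOneOut())
print(f"leave-one-out accuracy: {scores.mean():.2f}")
```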

Understanding the “ripple effect” is very important. The ripple effect is what happens when you modify some training data in an intent X, and you improve that intent, but the performance of other intents (A, D, F) also changes, sometimes for the better—but sometimes not.

The ripple effect arises because intent-classification models tend to rely on a limited amount of training data per intent, which means each individual piece of training data has more influence.

The diagrams below illustrate the ripple effect, and in particular, the positive and negative effect some changes can make.

In figure 1, intent 18 is struggling to perform well. It is confused with the training data of intents 10, 15 and 21. We can see that the training data (represented by dots) is spread out, indicating that the definition is not well understood.

Figure 1: Intent 18's training data is spread out and confused with intents 10, 15, and 21.

In figure 2, we reworked the training data and improved intent 18. We can see that the definition of that intent is narrower.

By improving intent 18, we’ve removed some confusions in intents 10, 15 and 21, even though we didn’t change their training data, so their performance has improved (a positive ripple effect).

However, if you look at intent 12, which did perform well in figure 1, it is now confused with intent 18—this is an example of a negative ripple effect.

Figure 2: After reworking the training data, intent 18's definition is narrower and its confusion with intents 10, 15, and 21 is reduced.

These types of analysis are only possible with systematic testing. Finding a technique that works for you—leave-one-out, or test data, or a tool to help you—will dramatically improve your understanding of the performance of your model, and help you find weaknesses, analyze the reason for those weaknesses and validate your fixes. 
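To make ripple effects visible in a systematic way, here is a hedged sketch that reuses the hypothetical TF-IDF plus logistic-regression stand-in from the leave-one-out example; the helper names per_intent_f1 and ripple_report are invented for illustration, not any library's API. The idea is to compute cross-validated per-intent F1 scores before and after a training-data change, then diff them:

```python
# Sketch: surface ripple effects by diffing per-intent F1 scores computed
# before and after a training-data change. Reuses the hypothetical TF-IDF
# plus logistic-regression stand-in; per_intent_f1 and ripple_report are
# invented helper names, not any library's API.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline

def per_intent_f1(utterances, intents, folds=3):
    """Cross-validated F1 score per intent for the stand-in classifier."""
    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    predictions = cross_val_predict(model, utterances, intents, cv=folds)
    labels = sorted(set(intents))
    scores = f1_score(intents, predictions, labels=labels, average=None)
    return dict(zip(labels, scores))

def ripple_report(before, after, threshold=0.05):
    """Print intents whose F1 moved by more than `threshold` either way."""
    for intent in sorted(set(before) | set(after)):
        delta = after.get(intent, 0.0) - before.get(intent, 0.0)
        if abs(delta) > threshold:
            direction = "improved" if delta > 0 else "regressed"
            print(f"{intent}: {direction} by {delta:+.2f}")
```

Running per_intent_f1 on the old and new training sets and passing the results to ripple_report makes regressions like intent 12's visible even though its own training data never changed.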

