Imagine this: you’re sitting at your computer, trying to contact support for a technical issue, and you’re met with a bot. You roll your eyes. Here we go… let’s see if this bot understands my request… let’s see if it can even connect me with an agent. But not only does this bot understand your request, it seems to understand you on a deeper level. You feel like you’re talking with a human support agent, or even chatting with a friend. Welcome to the world of bots powered by Large Language Models (LLMs)!
This is the rosy view of LLM-powered bots. Unfortunately, the media has more often highlighted the risks of these bots than the rewards, with various embarrassing and sometimes funny chatbot failures. One recent incident springs to mind: the customer service bot of DPD, a delivery company, was coaxed into writing a poem about its own poor customer service! Imagine your branded company bot throwing you under the proverbial bus!
In a digital landscape where technology evolves faster than you can say “Siri,” LLM-driven customer service bots are no longer a far-fetched idea. With their plethora of benefits, these bots are becoming more sophisticated by the day, capable of understanding context, nuance, and even emotion. And as the saying goes – with great linguistic prowess comes great responsibility, or something like that.
So, let’s take a closer look at my hopes and fears for LLM-powered bots.
Seriously Speedy Bot Development
The hope: The appeal of LLM and GenAI technologies in bots is primarily speed of deployment. LLMs significantly reduce the effort involved in building bots compared to traditional conversational AI, leading to a shorter time-to-market. These factors make LLM-powered bots extremely cost-effective, so what’s not to love?
The fear: The trade-off is that you give away much of the control you once had with traditional, reliable conversational AI, and you introduce some serious uncertainties. LLMs are essentially black boxes, making it challenging to predict their outputs. As a result, with just a few clever prompts, a user was able to manipulate a bot into criticizing its own company – and the potential for reputational damage here outweighs any associated cost savings.
Wonderfully Natural Interactions
The hope: We all prefer a proper conversation to the monotony of ‘Press 1 for sales’. Bots powered by LLMs offer this through intuitive interfaces that cater to diverse users, regardless of their technical background. This simplifies interactions, broadens accessibility, and fosters wider adoption.
The fear: On the flip side, maintaining the accuracy and reliability of bot responses poses a significant challenge, particularly when addressing complex or sensitive inquiries. An LLM-powered bot can very confidently hallucinate incorrect, non-factual information.
A World of Possibility
The hope: LLMs are highly versatile and adaptable, capable of performing a wide range of language-related tasks, including translation, summarization, question answering, and content generation. Their flexibility enables application across various domains and industries.
The fear: To perform this wide range of impressive language-related tasks, the LLM requires user input. Users don’t always know what data is safe to share, and they may hand sensitive information to an LLM without fully considering the consequences.
If At First You Don’t Succeed
The hope: It’s fascinating that LLMs are designed to continuously learn and improve over time. By being exposed to new data and feedback loops, they can refine their language understanding capabilities, adapt to evolving linguistic patterns, and enhance their performance on specific tasks, ensuring ongoing relevance and effectiveness.
The fear: This also means that LLMs are susceptible to inheriting biases which may be present in the training data, potentially leading to biased responses or decisions. Imagine a company that uses an internal HR bot powered by an LLM to screen CVs before the interview process. Over time, it learns from interactions with candidates and HR professionals. If the training data used to develop the bot contains biases, such as gender or racial biases in past hiring decisions or job descriptions, the bot may perpetuate these biases in its interactions with candidates.
A New Frontier
LLM-powered bots are a double-edged sword: they offer faster development, more natural interactions, and greater accessibility, but they also carry the potential for hallucinations, misuse, security vulnerabilities, and harmful biases. The emergence of LLM-powered bots is both an intriguing frontier and a potential minefield.
As we navigate this new world of AI-driven customer service, we need to tread carefully and recognise the pitfalls that lie ahead of us. Only through thoughtful consideration and responsible stewardship can we harness the full power of LLMs to create beautifully transformative customer experiences – and avoid self-deprecating bots that could do serious long-term damage to trust, reputation, and customer satisfaction.