Help Your Chatbot Respond to Humans

Learn how to improve customer interactions with your chatbot. Lucidworks will demo a solution that helps your chatbot understand natural language queries and respond with the most relevant results.

Intended Audience

Technology and business leaders interested in improving CX with better chatbots.

Attendee Takeaway

Learn how Lucidworks can help make your chatbot “more intelligent” and provide better responses to customers.

Speaker
Steven Mierop, Senior Sales Engineer, Lucidworks


[Steve Mierop]

Hi, I’m Steve Mierop, a Senior Solutions Engineer here at Lucidworks. Today we’re going to talk about how we can improve the quality and responsiveness of your chatbot.

The goals of today’s presentation are to understand semantic vector search at a high level and how it can be used in conjunction with a chatbot. We’ll look at a practical application of how it might be implemented, and then look at a live example.

So what is semantic vector search and how can it be used to augment the chatbot experience? Well, semantic vector search uses advanced deep learning techniques to understand user intention in a more powerful and more relevant way. What this does is it allows you to ask natural language questions and get an accurate answer back.

So we use this not just with chatbots, but with enterprise search. We also use it with e-commerce to help eliminate zero search results. And because we can better understand users, what their true intentions are and what they’re looking for, this helps reduce traffic to call centers by empowering users to find the answer to a question on their own. One final thing it does is alleviate the amount of conditional logic that you might need inside of a chatbot workflow engine. So we’re going to be looking at all of these examples today.

So what is semantic vector search? In short, at a super high level, it’s a set of techniques for finding information or products by meaning, as opposed to lexical search, which finds documents by matching words and their variants. So if you’re familiar with how the storage mechanism of a traditional search engine works, what typically happens is that as we ingest documents, the words in those documents get tokenized, run through a series of analyzers or handlers, and ultimately stored in an inverted index, which is similar in concept to an index in the back of a book.
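
The inverted-index idea can be sketched in a few lines of Python. This is a toy illustration of the concept only, not how a production search engine (or Lucidworks Fusion) actually stores things; the sample documents are invented:

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Map each token to the set of document ids that contain it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for token in text.lower().split():
            index[token].add(doc_id)
    return index

docs = {
    1: "the fitness center opens at six",
    2: "pets are not allowed at this property",
}
index = build_inverted_index(docs)
print(index["fitness"])  # ids of documents containing the exact word "fitness"
```

Just like the index in the back of a book, a lookup goes from an exact term to the places it occurs, which is why lexical search only matches words and their variants.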

On the other side, we have dense vector search. Dense vector search actually stores words, phrases, and concepts by their relationship to each other. So for example, “horse” and “animal” might sit more closely together in n-dimensional vector space, as opposed to another group of entities like “New York”, “Paris”, or “Beijing”. And there’s a really nice visualization that I like to go to a lot: projector.tensorflow.org. You can certainly go to this on your own.

And this is a really simple way to view n-dimensional vector space, in this case projected down to two dimensions. A lot of the information you can’t see, but a machine can interpret it. So let’s do a quick search for something like DNA. What you’ll see here is that when we do that query for DNA, we see other concepts sitting near it, related words like “genome”, “tissue”, and “chromosome”. These sit more closely together in this n-dimensional hyperspace.

Whereas if we did a search for, say, “Chicago”, you’ll see that it sits in a different section of this n-dimensional space, and different words are closer to it, like “Boston”, “Toronto”, “Jackson”, and “Illinois”, which is pretty neat. So it’s a different storage mechanism and a different way to encode the meaning of what someone might be searching for.
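
The “closeness” being described is typically measured with cosine similarity between embedding vectors. Here is a toy sketch: the three-dimensional vectors below are invented purely for illustration (real embeddings have hundreds of dimensions and come from a trained model):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Invented toy vectors: related concepts point in similar directions.
vectors = {
    "horse":  [0.9, 0.8, 0.1],
    "animal": [0.8, 0.9, 0.2],
    "paris":  [0.1, 0.2, 0.9],
}
print(cosine_similarity(vectors["horse"], vectors["animal"]))  # high
print(cosine_similarity(vectors["horse"], vectors["paris"]))   # much lower
```

This is the same intuition the projector visualizes: “horse” and “animal” point in nearly the same direction, so they land near each other in the space, while “paris” sits in a different region.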

So how exactly can this help a chatbot? To answer that question, it’s really about where you use it. These modern chatbot engines are essentially big workflows, or state machines, you could think of them as. As an administrator building this out, you would define a series of intents (I’m using Google Dialogflow’s terminology here), and for each intent you would map a series of what they call utterances, or incoming queries from a user.

So if someone says “hello” or “hi” or “hey there”, that would ultimately map to the welcome intent. It would then follow a series of workflow steps, sending a response, maybe asking for more information. So let’s say this is a hotel or hospitality workflow example. Someone might come in and say, “When will my room be ready?” That utterance will be mapped to an intent, and that intent would then handle the rest of the workflow.

So it might say, “Hey, I’ll need some personal information to identify you,” maybe the account or phone number you used to book the hotel room. From there, that input parameter could trigger a fulfillment request, most likely a web service call that would query a database or search engine, find the room status for that particular user, and send it back. So this is all well and good. It works great if you know all of the potential utterances and all of the potential intents someone may ask.
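
That intent-mapping flow boils down to a lookup from predefined utterances to intents. A minimal sketch (the intent names and utterance lists here are invented for illustration, not taken from a real Dialogflow agent):

```python
# Each intent lists the utterances an administrator defined ahead of time.
INTENTS = {
    "welcome": ["hello", "hi", "hey there"],
    "room_status": ["when will my room be ready", "is my room ready"],
}

def match_intent(utterance):
    """Return the intent whose utterance list contains the query, else None."""
    normalized = utterance.lower().strip("?!. ")
    for intent, utterances in INTENTS.items():
        if normalized in utterances:
            return intent
    return None  # no match: the engine hands off to a catch-all fallback intent

print(match_intent("Hi"))                          # welcome
print(match_intent("When will my room be ready?")) # room_status
print(match_intent("Can I work out here?"))        # None -> fallback
```

Real engines do fuzzier matching than this exact lookup, but the structural problem is the same: any phrasing the administrator did not anticipate falls through to the fallback.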

And that’s where things can get a little tricky, because it’s hard to predict ahead of time all the different combinations of ways to ask a similar question. Oftentimes in these modern platforms there’ll be a fallback intent, and this is pretty much the catch-all. It’s for when someone asks a question, or uses an utterance, that I didn’t define ahead of time, and the workflow engine doesn’t know what to do with it.

So what happens is it triggers the fallback intent, and there are a number of things you could do to handle that. You might say, “Hey, here’s a number for a help desk; you can call them and get more information,” or “Here’s a link to our FAQ section; maybe you’ll find your answer there.” And it’s okay, but it’s not a great user experience. Where semantic vector-based search comes in is as a more intelligent way to map an incoming question or query to what we actually have inside of our search index, which is ultimately in a vector representation.

So how it works is that, in this case, we’re using a pre-trained model, a model that’s been trained on publicly available information from across the internet about how people search and the different variations of the questions they ask. What you end up doing is encoding that incoming query and sending it to a vector-based storage layer, where you can do the information retrieval and actually see how closely that incoming query sits to content you have inside of your search engine.
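
That fallback lookup can be sketched as a nearest-neighbor search over pre-encoded answers. The embeddings below are invented toy values standing in for the output of a pre-trained model (in practice, both the FAQ entries and the incoming query would be encoded by the same sentence-encoding model):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Pretend these vectors came from encoding each FAQ answer with a pre-trained model.
faq_vectors = {
    "Fitness center and spa are included.":        [0.9, 0.1, 0.2],
    "Pets such as dogs and cats are not allowed.": [0.1, 0.9, 0.1],
    "Checkout is at 11:00 AM.":                    [0.2, 0.1, 0.9],
}

def nearest_answer(query_vector):
    """Return the FAQ entry whose vector sits closest to the encoded query."""
    return max(faq_vectors, key=lambda answer: cosine(query_vector, faq_vectors[answer]))

# "Can I work out here?" as encoded by the (hypothetical) model:
print(nearest_answer([0.8, 0.2, 0.3]))  # lands on the fitness-center answer
```

Production vector stores use approximate nearest-neighbor indexes rather than this brute-force scan, but the retrieval idea is the same: whichever stored answer sits closest to the encoded query wins.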

So let’s walk through a live example. Before we do that, I’ll just show you that this is Google Dialogflow that we’ll be using; I took a quick screen capture of it. One thing to notice is that very few intents are used. I have a welcome intent for when someone says “hi” or “hello there”. I have a fallback intent, which is going to encode the query with our vector-based model and then do a nearest-neighbor, or proximity, search inside of our vector-based storage layer. And I have a status intent that’s just anticipating someone asking, “When will my room be ready?” But even with just these few intents defined, what I want to show you is that we can ask a lot of different types of questions that I did not anticipate and did not put in the work to configure a particular intent for.

So for this, let’s move over to the demo. This is going to be a hospitality, hotel-type experience, where we’ll be interfacing with a fictitious company called Hotel California. I’m going to walk you through the first part of that workflow. So let’s start by saying “hi”. That triggered our welcome intent.

[Chatbox] 

Welcome to the Hotel California. How may I help you today?

[Steve Mierop]

And again, that was pre-written by me, so it was part of our anticipated responses. Then let’s say, “When will my room be ready?” And again,

[Chatbox]

You got it. Please enter your phone number and I’ll check on your room status.

[Steve Mierop]

This utterance triggered the room status intent, and now it’s asking for some identifiable information. In this case, we configured it to use a phone number. From there, that phone number triggered the fulfillment, probably a web service call that went to a database or search engine, looked up the status of my room, and delivered it back.
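
In Dialogflow ES terms, that fulfillment step is a webhook: the engine POSTs the matched intent and its parameters to your service, which does the lookup and returns a JSON response. A minimal sketch, where the lookup table, handler name, and phone number are invented; `queryResult.parameters` and `fulfillmentText` follow Dialogflow ES’s webhook request/response shape:

```python
import json

# Invented stand-in for the database / search engine lookup.
ROOM_STATUS = {"555-0100": "Your room will be ready by 4:00 PM today."}

def handle_fulfillment(request_body):
    """Build a Dialogflow ES-style webhook response for a room-status query."""
    params = request_body["queryResult"]["parameters"]
    status = ROOM_STATUS.get(params["phone_number"],
                             "I couldn't find a booking for that number.")
    return json.dumps({"fulfillmentText": status})

request_body = {"queryResult": {"parameters": {"phone_number": "555-0100"}}}
print(handle_fulfillment(request_body))
```

In a real deployment this handler would sit behind an HTTPS endpoint registered in the agent’s fulfillment settings; the point here is just that the intent hands structured parameters to ordinary backend code.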

[Chatbox]

Your room will be ready by 4:00 PM today. Can I answer any questions about the hotel for you?

[Steve Mierop]

Now, here’s where semantic vector search can really help: in these fallback scenarios, where it says, “Can I answer any questions about the hotel for you?” Maybe I can’t anticipate what’s going to be asked, but I know I have a lot of attributes and characteristics about the hotel, plus frequently asked questions, I think about a hundred or so, inside of our semantic vector-based storage layer. All of those have been encoded, and I’m just going to let the fallback strategy do the lookup for me. So I might ask, “Is there a fitness center in the hotel?” And we’ll get a response back.

[Chatbox]

Fitness center with 40-plus pieces of new Life Fitness equipment and spa are included. Both have accessibility options available.

[Steve Mierop]

Now, this one is pretty easy, right? I said “fitness center” and it’s right there. We really don’t even need semantic vector search for that; it could just be a keyword-driven approach. But where semantic vector search really shines, and where the value is, is when someone asks this question in a different way while still looking for this answer. What if they came in and said, “Can I work out here?”

Now, in that case, just like in the n-dimensional vector space model we looked at before, “work out” and “fitness center” probably sit pretty close to each other. We can use that as a way of establishing the nearest, or closest, or most likely answer to a question.

And again, I don’t have the word “workout” in my index. I don’t, really. There are no rewrites, no synonyms defined for it; it’s all coming from that pre-trained, generally trained vector model. So let’s try a couple more examples. Maybe instead of “Can I work out here?” I might ask something a little more verbose: “Is there a place to do ab crunches in the morning?” And again, it’s conceptually the same: “Hey, there’s a fitness center.”

This is probably what you’re looking for. And I could use “fitness center”, I could use “workout”, I could use “ab crunches”. I could say something like, “Where can I do bench presses?” Maybe there’s a nearby gym, or maybe there’s a fitness center in the hotel I could use. So handling these various permutations of how someone is going to ask the same question is really where semantic vector search can pull its weight and help you solve this zero-search-results type of problem.

And this is a better experience too, because now I don’t have to redirect them to a help desk or somewhere else; I can empower that user to answer their own questions, just using a pre-trained model. This can also handle things like misspellings and out-of-vocabulary words, like we’ve seen here. Maybe a more common question might be, “Can I bring my dog with me?” And then we’ll get a response.

[Chatbox]

Pets such as dogs and cats are not allowed at this property.

[Steve Mierop]

But what if someone came in and said, “Can my service animal stay in the room?” Now, we could very well have created a new question dedicated to service animals, but let’s say I didn’t. It was still smart enough to know that “service animal” is related to “pets such as dogs and cats”. Or if I said, “Can I bring my German Shepherd with me?” we’ll still land on that “pets such as dogs and cats” entry and get the appropriate response back, without me having to predefine “German Shepherd” or anticipate this type of incoming utterance or question from a particular user.

So really powerful stuff, and really neat use cases you can use to close those gaps where you just can’t anticipate what could be asked. I wanted to leave you with one other thought, too: what about putting the semantic vector search engine at the center of an organization?

Well, there’s some value in doing that. One is that you can keep your chatbot frameworks a little more lightweight and have more predictable, consistent results across these various user interfaces. It’s not uncommon for larger enterprises to have one department using one type of chatbot, and another department, with their own resources and their own teams, wanting to build their own chatbot with a different framework.

But with this approach, you could still feed from the exact same knowledge base that’s been encoded and is living in n-dimensional vector space. You could also serve third-party apps, intranet portals, and e-commerce. A really big use case is eliminating zero search results: say someone submits a query about a brand I don’t carry, but I can still map it to a brand I do carry and present that to the user. So there’s a lot of value in using a centralized storage mechanism for dense vector search and information retrieval.

Okay, thank you for joining me today. If you have any questions, feel free to send an email or check out our website. Thanks again, take care.
