Optimizing Content Once for Search, Chatbot, and Voice

Learn how Lucidworks Fusion and Search Relevance help Morgan Stanley leverage the same content to automate search and chat across client-facing and financial advisor-facing channels.

Speaker:
Dipendra Malhotra, Head of Analytics, Intelligence and Data Technology for Wealth Management, Morgan Stanley

Learn more about how Morgan Stanley uses Fusion.


Transcript

Dipendra: Hello, everyone, welcome to the Activate Conference. My name is Dipendra Malhotra, I’m Head of Analytics, Intelligence and Data Technologies for Wealth Management at Morgan Stanley.

Before we jump into our session, I just wanna give you a little bit of background on what the charter of my organization is. We are primarily responsible for providing information and analytics, as well as intelligence services and AI and machine learning services, to the rest of the organization. In doing so, we have to focus on how to index the organization’s content properly, how to provide the information to the correct channel at the right moment, and also how to provide the relevant information based on the intent of what is being asked.

As a result, it became very important for us to create the content once, because that’s where the human capital is spent: creating the content, annotating the content, and conversationalizing the content. Whether that content is then rendered on search, on Chatbot, on Voice, or in FAQs doesn’t matter at that point, because all of those channels can be automated.

As a result, our focus was on where the human capital was being used.

Throughout the presentation, you will see how Fusion and the indexing as well as the machine learning capabilities within that platform have helped us.

Before jumping in, let me give you a little bit of background and context on what Morgan Stanley Wealth Management does.

We are a full-service wealth management firm providing financial services, advisory services, brokerage and transactional support, as well as banking and lending services. We have about 15,000 financial advisors who service 3 million clients, primarily in the United States, and manage $2.6 trillion of assets for those 3 million clients.

One more thing before jumping in: when I say we have to serve content, we have three constituents. There are the clients, who could ask a question about:

  • How do you open an account?
  • How do you deposit a check?

It could be a financial advisor or branch support staff asking the same question. It could be a service agent in the contact center, who could also ask the same question. They might ask in different fashions: clients may go into the FAQ or may search; a financial advisor may want to chat with a service agent; the service agent may have to search all of the content in our knowledge management tool and then get the information out.

As you can see, the information that has to be rendered is the same: the content at the bottom of the slide. However, the channels and the way we have to render that information are obviously different. It could be FAQs, it could be search, it could be Chatbot, it could be Voice or any other medium, but that content is where most of the human capital is invested, and we need to share it as much as possible.
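
To make that idea concrete, here is a rough Python sketch of a single channel-agnostic content record. The field names and values are illustrative assumptions, not Morgan Stanley’s actual data model or Fusion’s schema.

    # Illustrative only: a hypothetical channel-agnostic content record.
    from dataclasses import dataclass, field

    @dataclass
    class ContentRecord:
        content_id: str
        title: str
        answer_text: str                              # the answer, authored once by an SME
        intents: list = field(default_factory=list)   # e.g. ["open", "create"]
        entities: list = field(default_factory=list)  # e.g. ["account"]
        synonyms: dict = field(default_factory=dict)  # term -> accepted variants
        faq_category: str = ""                        # where it sits in the FAQ hierarchy
        links: list = field(default_factory=list)     # how-to docs, videos, images

    open_account = ContentRecord(
        content_id="kb-0001",
        title="How to open an account",
        answer_text="To open an account, ...",
        intents=["open", "create"],
        entities=["account"],
        synonyms={"open": ["create", "set up", "start"]},
        faq_category="Accounts > Opening",
        links=["https://example.com/how-to-open-an-account"],
    )

The point is that every channel reads from this one record; only the rendering differs.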

How it’s historically, or generally, done anywhere in the industry is that you have FAQs. Because the indexing, the synonyms, and the business rules for how the hierarchy gets generated in the FAQ are different, the content normally gets tailored to that. Whoever is creating the content is putting in the synonyms, putting in the business rules, and generating the content with FAQs in mind.

Then when the time comes for search, where the tagging is different and the machine is learning differently, the content gets created differently, or structured differently. Similarly, if you’re doing Chatbots, it’s more about conversations, so annotation and knowledge engineering happen, and the content is generated again. Then for Voice, the same thing happens through IVR and the conversation on Voice.

The challenge is that the most expensive piece in this whole chart is the content layer at the bottom, because that’s where you’re gonna have your SMEs, your subject matter experts, creating that content: thinking about the questions, thinking about the answers, thinking about the search and what needs to be rendered on it. That makes it the most expensive piece. Everything else can be done very quickly and through the machine.

I’ll focus primarily on the content. How we started doing it towards the beginning of this year was by creating the content once. When the person created the content, they created the synonyms, the tags, and the rules for the hierarchy at that point.

Past that point, we started leveraging a Chatbot platform, an ML/AI platform, and also Lucidworks Fusion: putting all of this content into Fusion, then doing all of the knowledge engineering, annotation, and conversationalization using the Chatbot platform, and providing the rest of the information that was needed into this whole platform.

As a result, indexing got done; the recommendations, the intent, and the content got done as well; and the direction and redirection of the same content based on the intent of the question, regardless of how it’s asked, also got done in the same platform, by a person, once. Then it could be rendered through all the different channels that you see at the top.
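
As a minimal sketch of what loading such a record into Fusion could look like, the snippet below posts a document to a Fusion index pipeline over REST. The host, pipeline, collection, credentials, and field names are assumptions for illustration; check your Fusion version’s API documentation for the actual routes.

    # Minimal sketch: pushing one content record to a Fusion index pipeline over REST.
    # The URL, pipeline, collection, and field names below are illustrative assumptions.
    import requests

    FUSION_URL = "https://fusion.example.com:8764"   # hypothetical Fusion host
    PIPELINE = "kb-index-pipeline"                   # hypothetical index pipeline
    COLLECTION = "knowledge-base"                    # hypothetical collection

    doc = {
        "id": "kb-0001",
        "title_t": "How to open an account",
        "answer_t": "To open an account, ...",
        "intents_ss": ["open", "create"],
        "entities_ss": ["account"],
    }

    resp = requests.post(
        f"{FUSION_URL}/api/index-pipelines/{PIPELINE}/collections/{COLLECTION}/index",
        json=[doc],
        auth=("svc_user", "svc_password"),   # placeholder credentials
        timeout=30,
    )
    resp.raise_for_status()
    print("Indexed:", resp.status_code)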

Let’s look a little bit deeper into how this was done. If you think about it from a content management perspective or knowledge management perspective, we have five phases.

  • You have to create: initially, you’re going to create the business rules.
  • You’re going to create the content.
  • You’re going to think about all the questions that are going to be asked,
    and then include the intent, the way those questions are going to be asked. That is mostly done by a human.
  • There is a little bit of machinery in how you organize the content and how you tag the content. A knowledge management tool like EM could help you out with that.

But past that point, nothing else could be done with the machine. That’s where most of the human capital gets deployed.

Past that point, you do knowledge engineering. This is where you’re going to create the taxonomy, then create annotations on it, and create a knowledge graph to see how the information flows. Then, through an iterative process, you’re going to refine all of those things.

This is where human and machine learning come into play. You understand the user intent from past questions, from the path they took to get to that question, and from the products they hold, to the point where you can understand the question. Then, if the answers are indexed properly, those questions can be answered.
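
A toy sketch of that signal-combining idea might look like the following; the signal names, keywords, and weights are made up for illustration and are not the production model.

    # Illustrative sketch: combining signals to score a user's likely intent.
    from collections import Counter

    def score_intents(query_terms, past_questions, nav_path, products_held):
        """Return intents ranked by simple weighted evidence from several signals."""
        scores = Counter()

        # Evidence from the words in the question itself.
        intent_keywords = {"open": "open", "create": "open", "close": "close", "deposit": "deposit"}
        for term in query_terms:
            if term in intent_keywords:
                scores[intent_keywords[term]] += 3.0

        # Evidence from the user's recent questions.
        for q in past_questions:
            for term, intent in intent_keywords.items():
                if term in q:
                    scores[intent] += 1.0

        # Evidence from the path the user took to get here.
        if "new-account" in nav_path:
            scores["open"] += 2.0

        # Evidence from the products the user already holds.
        if "brokerage" not in products_held:
            scores["open"] += 0.5   # users without a product are more likely to be opening one

        return scores.most_common()

    print(score_intents(["open", "account"], ["how do I create an IRA"], ["home", "new-account"], ["checking"]))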

However, once the knowledge engineering is done, you then have to conversationalize: define the flow through which you will ask the right questions to ultimately get down to the content.

Then obviously recommendation. That’s where the interaction with the actual user normally takes place.

Then once this is done, you iterate, the machine learns, you refine your taxonomy, you refine your annotation techniques, you refine your knowledge engineering techniques and the algorithms you’d use there to continue to get better and better at this.

Let’s take an example of how this can be thought through: opening an account. The user could just write, ask a question, or search by saying, “I want to open an account.” In the FAQs, the user has to expand the links and sift through the FAQs to get to accounts, and then go into the text, which explains how to open an account or has a link to the document.

Similarly, if it’s a search, the user may say, “I want to open an account.” They may say, “I want to create an account” or “how to get to an account.” In a search, the user will then get all the things that are tagged to the account entity and tagged to the intent of open.

Open is the intent and account is the entity, and you provide a list of different result sets to the user, and the user then uses their knowledge to select the result set that is most relevant to them. At that point, users can go read the document and go ahead and create an account on their own.

Then if it’s a Chatbot, it’s more of a conversation. When the user says, “I want to open an account,” you have to ask what type of account it is. The user provides the type of account, and from there the dialogue can go on.
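
A minimal slot-filling sketch of that conversation could look like this; the dialogue states and account types are illustrative assumptions, not the actual Chatbot platform’s flow.

    # Minimal slot-filling sketch for the "open an account" conversation.
    def chatbot_turn(state, user_message):
        """Advance a tiny dialogue: confirm the intent, then ask for the account type."""
        account_types = {"ira", "brokerage", "checking", "savings"}
        msg = user_message.lower()

        if state.get("intent") != "open_account":
            if "open" in msg and "account" in msg:
                state["intent"] = "open_account"
                return state, "What type of account would you like to open?"
            return state, "How can I help you today?"

        # Intent is known; fill the account-type slot.
        matched = next((t for t in account_types if t in msg), None)
        if matched:
            state["account_type"] = matched
            return state, f"Here are the steps to open a {matched} account: ..."
        return state, "Which account type: IRA, brokerage, checking, or savings?"

    state = {}
    state, reply = chatbot_turn(state, "I want to open an account")
    print(reply)
    state, reply = chatbot_turn(state, "An IRA, please")
    print(reply)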

You can see that the information about creating an account can be created once, but the conversationalization of it, and the various tagging to render that information in search or in a hierarchy, can be different.

That’s where you can leverage the indexing, machine learning, and NLP capabilities that are provided these days with Fusion to get those things down to a manageable scope.

Let’s try to answer this question in different steps so we can understand how this goes. The first thing is to understand what the question is. In this case, the user said “I.” You have to know who the user is, whether they are authenticated or whether they need authentication. Once they’re authenticated, you know whether it’s an FA, and that conversation might be different.

Is it branch support staff? That could be a different conversation. If it’s a client, again, it’s a different conversation. Then if it’s a service agent, they might be looking for different information, because they might be supporting one of the others, like an FA, CSA, or client, who has now run into an issue and is trying to solve it.

All of those things become relevant as you’re trying to provide the result sets for the search, and your ranking of the result sets might be different based on the user.

Similarly, what channel is being used? That will help you start to go down the path of the questions below (see the sketch after this list):

  • Do you have to render a search?
  • Do you have to render a conversation?
  • Is it with Voice, and is it still a conversation?
  • Is it a priority search in Voice, or is it an FAQ?
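
As a rough sketch, the user’s role and the channel together can drive both the rendering mode and the ranking boosts. The roles, channels, and boost values below are illustrative assumptions, not the production rules.

    # Illustrative sketch: the same answer, routed and ranked differently by role and channel.
    def rendering_plan(user_role, channel):
        """Decide how to render, and which documents to favor, for this user and channel."""
        mode = {
            "faq": "hierarchy",        # expandable FAQ tree
            "search": "result_list",   # ranked result set
            "chatbot": "dialogue",     # conversational slot-filling
            "voice": "dialogue",       # IVR-style conversation
        }.get(channel, "result_list")

        # Rank internal procedures higher for advisors and agents, public help pages for clients.
        boosts = {
            "client": {"audience:client": 2.0},
            "financial_advisor": {"audience:internal": 2.0, "doc_type:procedure": 1.5},
            "service_agent": {"audience:internal": 2.0, "doc_type:troubleshooting": 1.5},
        }.get(user_role, {})

        return {"mode": mode, "boosts": boosts}

    print(rendering_plan("financial_advisor", "chatbot"))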

Then you need to understand all of the operative words. What the user wants to do is open, and that’s the intent, open or create; and then the account, which is the entity they want to act on.

Next, you have to determine how the intent and entity play a role in this. In this case, what we know is that the user, who is now authenticated and is allowed to open an account, wants to open an account. We’ve ascertained that. We know that the intent is valid because it’s open, and we know that the entity is an account, but there are multiple different types of accounts.

We need to dig a little bit deeper into that, and probably ask another question: what type of account? Then we have to ascertain, do we have a response for this particular question in the given channel? Once we have that, we go find the relevant content, which is all indexed in one place, in Fusion.
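
A toy sketch of that step, mapping the user’s words to an intent and an entity through synonym lists and then checking whether an answer exists for the channel, might look like this; all of the names and lookup tables are illustrative assumptions.

    # Toy sketch: detect intent and entity from synonym lists, then check answer availability.
    INTENT_SYNONYMS = {"open": {"open", "create", "start"}}
    ENTITY_SYNONYMS = {"account": {"account", "acct"}}

    # Which (intent, entity) pairs have authored answers, and on which channels.
    ANSWERS = {("open", "account"): {"channels": {"faq", "search", "chatbot", "voice"},
                                     "needs_followup": "account_type"}}

    def understand(question, channel):
        words = set(question.lower().replace("?", "").split())
        intent = next((i for i, syns in INTENT_SYNONYMS.items() if words & syns), None)
        entity = next((e for e, syns in ENTITY_SYNONYMS.items() if words & syns), None)

        answer = ANSWERS.get((intent, entity))
        if not answer or channel not in answer["channels"]:
            return {"intent": intent, "entity": entity, "action": "fallback"}
        if answer.get("needs_followup"):
            return {"intent": intent, "entity": entity, "action": "ask", "slot": answer["needs_followup"]}
        return {"intent": intent, "entity": entity, "action": "answer"}

    print(understand("I want to open an account", "chatbot"))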

Past that point, we start to render that content. When the open question is what type of account, we can go back and ask, what kind of account do you want to open? They say IRA, we go through the same cycle, and then provide the result set on how to open an IRA account. Past that, it’s just a search for that content within Fusion, where we store all of the answers, and we deliver the content on the channel where it was asked.
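
Once the follow-up (“IRA”) is answered, that final lookup could be sketched as a query against a Fusion query pipeline. The endpoint, pipeline, credentials, and field names below are assumptions to adjust to your own Fusion setup.

    # Minimal sketch: query Fusion for the stored answer once intent, entity, and type are known.
    import requests

    FUSION_URL = "https://fusion.example.com:8764"   # hypothetical Fusion host
    PIPELINE = "kb-query-pipeline"                   # hypothetical query pipeline
    COLLECTION = "knowledge-base"                    # hypothetical collection

    params = {
        "q": "open IRA account",
        "fq": ["intents_ss:open", "entities_ss:account"],   # filter by detected intent and entity
        "rows": 3,
    }

    resp = requests.get(
        f"{FUSION_URL}/api/query-pipelines/{PIPELINE}/collections/{COLLECTION}/select",
        params=params,
        auth=("svc_user", "svc_password"),   # placeholder credentials
        timeout=30,
    )
    resp.raise_for_status()
    for doc in resp.json().get("response", {}).get("docs", []):
        print(doc.get("title_t"), "->", doc.get("answer_t"))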

This is how we accomplish it at a high level. We could have multiple different users: users could be allowed to use search, or to have a Chatbot or a Voice agent.

But what we have is the content at the bottom. Using various techniques, we create the ontology, the documents, the tags, the intents, all the links to the various how-to videos and images, any questions that could be created, and then all of the answers. Then we load all of that into Lucidworks Fusion, where all of it is indexed, and more and more learning can be done within that platform.

We have a Chatbot platform, which can very quickly help us test what the result set and the conversation are. We then render all of that conversation there.

With that, it pretty much completes my presentation.
