When you’re shopping around for a platform that will power your next (or first!) conversational app, there’s a whole raft of essential features and capabilities to add to your shopping list—along with some more advanced features that you might not consider in the early stages.
Top 4 Standard Features
As chatbot technology has matured, most vendors offer a standard feature set:
Integration with existing systems like CRM, marketing, and customer support systems, so activity can be tracked across the organization.
Pre-trained models so an app readily understands terminology and concepts specific to a particular brand or industry.
Quick deployment with a bit of code so the question-answering system can be used on a website, in an app, in a shopping cart, or on a customer support portal.
Rules-based flows for common questions and tasks that use decision trees to route inquiries to the right parts of the company.
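Rules-based routing like this can be sketched in a few lines. This is a minimal, hypothetical example (the keywords and department names are invented for illustration): each rule maps a set of trigger keywords to a department, and the first matching rule wins.

```python
# Hypothetical rules-based flow: keyword rules route an inquiry to a
# department, with a fallback queue when nothing matches. All rule
# keywords and department names here are illustrative assumptions.

RULES = [
    ({"refund", "invoice", "billing"}, "billing"),
    ({"password", "login", "error"}, "tech_support"),
    ({"price", "quote", "demo"}, "sales"),
]

def route(inquiry: str, default: str = "general") -> str:
    """Return the first department whose keywords appear in the inquiry."""
    words = set(inquiry.lower().split())
    for keywords, department in RULES:
        if words & keywords:  # any trigger keyword present
            return department
    return default

print(route("I need a refund on my last invoice"))  # billing
print(route("What's the weather?"))                 # general
```

Real platforms express these rules as visual decision trees rather than code, but the routing logic underneath is the same first-match idea.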
The Case for Middleware
As more business units inside an organization experiment with conversational apps, teams bring in their own vendors. After a while, CTOs notice that the functionality of many of these vendors overlaps, and most organizations will want to standardize on a common platform for app development across the company for faster, more efficient solution creation.

Middleware platforms that sit between the end-user interface and the backend documents minimize disruption to existing systems while making them more intelligent, useful, and valuable. Middleware lets teams reuse code and workflows for deeper orchestration between conversational applications and the rest of the tech stack. Most organizations have already made careful decisions about front-end UI frameworks and backend systems; a middleware platform approach keeps those decisions intact, helping everyone get the most out of existing investments and share the value of conversational applications.
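The middleware idea above can be sketched as a single orchestration layer that every front-end channel calls, with back-end systems plugged in once and reused everywhere. This is an illustrative sketch, not any specific product's API; the class and handler names are assumptions.

```python
# Minimal sketch of a middleware layer: front-end channels (web widget,
# mobile app, support portal) all call one orchestrator, which dispatches
# to backend handlers registered once. Names are illustrative assumptions.

from typing import Callable, Dict

class Middleware:
    def __init__(self) -> None:
        self._handlers: Dict[str, Callable[[str], str]] = {}

    def register(self, intent: str, handler: Callable[[str], str]) -> None:
        """Back-end teams plug their systems in once; every channel reuses them."""
        self._handlers[intent] = handler

    def handle(self, intent: str, query: str) -> str:
        handler = self._handlers.get(intent)
        return handler(query) if handler else "Sorry, I can't help with that yet."

mw = Middleware()
mw.register("order_status", lambda q: f"Looking up order for: {q}")

# The website widget and the mobile app share the same entry point.
print(mw.handle("order_status", "order #1234"))
```

The point of the pattern: the front end and the back ends never change; only the middle layer grows as new conversational use cases are added.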
Don’t Forget: The Cold Start Problem
You’ve deployed your chatbot or conversational app to production. But it has never answered real questions from real users, and it has no behavioral data to go on. It’s like a stranger walking up to you on the street and demanding, “What’s my favorite ice cream flavor?!” You can’t begin to answer, because you know nothing about them.
This is called the cold start problem. When systems are built to feel knowledgeable as they answer questions or make recommendations, they need knowledge. Deep learning capabilities will help the system learn, but that takes time. On day one, the app won’t have enough information about the user to make relevant recommendations. You risk a bad experience and the app falling on its face out of the gate.
One way to shorten this sprint to wisdom when there are no ready-to-go FAQs is to train the bot on publicly available content related to the areas where you want it to sound competent. Often there are existing question-and-answer pairs that can be surfaced to virtual assistants or chatbots to jump-start their learning; when there aren’t, public online datasets can serve as a base for training the AI models. As users interact with the system, the models continue to learn and provide more relevant answers.
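Seeding a brand-new bot this way can be as simple as loading Q&A pairs and answering by similarity until real usage data accrues. The sketch below uses word-overlap (Jaccard) similarity and invented seed pairs purely for illustration; a production system would use learned models.

```python
# Hedged sketch of cold-start seeding: answer from pre-loaded Q&A pairs
# by word-overlap (Jaccard) similarity, falling back to a human when
# nothing matches well. Seed pairs and threshold are invented examples.

SEED_QA = {
    "what are your opening hours": "We're open 9am-5pm, Monday to Friday.",
    "how do i reset my password": "Use the 'Forgot password' link on the login page.",
    "do you ship internationally": "Yes, we ship to most countries worldwide.",
}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def answer(question: str, threshold: float = 0.2) -> str:
    q = set(question.lower().split())
    best, score = None, 0.0
    for seed_q, seed_a in SEED_QA.items():
        s = jaccard(q, set(seed_q.split()))
        if s > score:
            best, score = seed_a, s
    return best if score >= threshold else "Let me connect you with a human."

print(answer("how can i reset my password"))
```

Even this crude matching gives users sensible answers on day one, which is the whole point of defusing the cold start problem.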
Be sure your conversational apps vendor has a well thought-out approach on how to avoid the cold start problem so that you can launch your applications with confidence and give users a great experience from the first minute.
The Top 4 Critical Features
Along with the ability to sidestep the cold start problem and the standard, commodified features listed above, there are four advanced features that you’ll want to prioritize as you shop for a conversational app platform:
Robust ML and deep learning capabilities are a requirement for building beyond the traditional chatbot interface and enabling your application to understand natural human language. Prebuilt ML models for semantic search use mathematical measures of similarity to match a question, which can be asked in many different ways, to the most relevant answer. These and other out-of-the-box ML models help interpret user intent so the app can provide helpful answers with or without existing FAQ data, speeding time-to-value by delivering more relevant, personalized answers to user questions.
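To make the similarity-matching idea concrete, here is a toy sketch that represents each question as a term-count vector and compares vectors by cosine similarity, so differently-worded questions can land on the same answer. Real semantic search uses learned embeddings; plain word counts keep this example dependency-free, and the FAQ entries are invented.

```python
# Illustrative similarity matching: questions become term-count vectors
# and are compared by cosine similarity. Production semantic search uses
# learned embeddings; the FAQ content here is an invented example.

import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

FAQ = {
    "how do i cancel my subscription": "Go to Settings > Billing > Cancel plan.",
    "where is my package": "Track it from the Orders page in your account.",
}

def best_answer(question: str) -> str:
    qv = Counter(question.lower().split())
    scored = [(cosine(qv, Counter(k.split())), a) for k, a in FAQ.items()]
    return max(scored)[1]

print(best_answer("i want to cancel the subscription"))
```

Note how the query shares only three words with the matching FAQ entry, yet still scores far above the unrelated one; that tolerance for rephrasing is what semantic search buys you.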
APIs and standard integrations are also crucial for connecting to existing systems: chatbots, virtual assistants, voice services, and knowledge bases. An SDK for connecting to any data source ensures that your team can incorporate all relevant knowledge base content into your application. And a pluggable framework for AI enables you to import your custom ML models into the platform.
An operational pipeline architecture enables machine learning algorithms to be applied to data and documents when they come in, to understand what they contain. Then AI is applied again when the user asks the question, to predict the intent of the question and match it to the best answer. Deep learning watches those data-to-question-to-response feedback loops to continuously optimize results and improve the overall experience.
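The two-stage pipeline described above, process documents at ingest time, match questions at query time, then learn from feedback, can be sketched as follows. This is a toy model with word overlap standing in for the ML steps; the class structure and document names are assumptions for illustration.

```python
# Toy sketch of the operational pipeline: documents are indexed at ingest
# time, questions are matched at query time, and user feedback adjusts
# future rankings. Word overlap stands in for real ML models here.

class Pipeline:
    def __init__(self) -> None:
        self.index = {}   # doc_id -> set of terms (built at ingest time)
        self.boost = {}   # doc_id -> feedback-derived score adjustment

    def ingest(self, doc_id: str, text: str) -> None:
        """Ingest-time stage: analyze documents as they come in."""
        self.index[doc_id] = set(text.lower().split())
        self.boost.setdefault(doc_id, 0.0)

    def query(self, question: str) -> str:
        """Query-time stage: score documents by overlap plus feedback boost."""
        q = set(question.lower().split())
        return max(self.index, key=lambda d: len(q & self.index[d]) + self.boost[d])

    def feedback(self, doc_id: str, helpful: bool) -> None:
        """Feedback loop: answers rated helpful rank higher next time."""
        self.boost[doc_id] += 0.5 if helpful else -0.5

p = Pipeline()
p.ingest("returns", "how to return an item for a refund")
p.ingest("shipping", "shipping times and delivery options")
print(p.query("i want a refund for this item"))  # returns
```

The feedback method is the piece that closes the data-to-question-to-response loop: each rating nudges the ranking, which is what "continuously optimize results" means in practice.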
A low-code user interface ensures that it isn’t only advanced engineers and data scientists that can train, tune, and implement ML models. Admins and business owners should also be able to look under the hood so they can know when the system needs better data or fine tuning by the experts who understand the business (this is usually called “explainable AI”). By making machine learning an everyday part of your tech stack you’ll see faster time-to-value and self-service model training.
By expanding your commitment to conversational apps beyond the standard feature set to include advanced ML and deep learning technology, you can be certain that your organization will be able to build powerful apps that delight users. Also, make sure you consider a middleware approach to standardize app development and deployment so you can eliminate excess vendors, cut costs, and speed time-to-value.