Ecommerce Product Discovery as Conversation: Improving Experience Through Signals

One way to think of ecommerce product discovery is as a conversation between a shopper and the product discovery system. Signals are the transcript of the conversation, but the transcript often omits potentially useful information. If we look at a more faithful, more complete transcription of the conversation, we open up opportunities to increase the shopper’s trust in the system and to create a shorter path to achieving their goal. We’ll review some of these opportunities and the types of signals that facilitate them.

Intended Audience

Merchandisers, Search Developers and Architects, Data Scientists, Ecommerce business leaders

Attendee Takeaway

Learn how signals other than the typical impression/click/cart/purchase events can be used to create a more engaging product discovery experience.

Speaker

Eric Redman, Principal Search Architect, Digital Commerce, Lucidworks


[Eric Redman]

Hi, my name is Eric Redman, and today I’m going to talk about product discovery signals. We’ll start by remembering a little bit of signaling theory, and then, I promise, we’ll get into some ideas for practical applications, ideas we maybe wouldn’t think of if we didn’t return to the theory and remember its grand scope.

And signaling theory is why we use that word signals. We didn’t just make it up in e-commerce. 

So let me start with this idea of information asymmetry. In signaling theory, information asymmetry is the motivator for signaling: one party signals another because the two parties don’t share the same information. For example, in e-commerce, the shopper knows more about their goal, what they’re trying to accomplish, than the retailer does, and the retailer knows more about its product catalog, the quality of its products, and what other shoppers have done than the shopper does.

So there’s this information asymmetry, and they signal each other, so the theory goes, to try to account for it so that they can help each other in some way. In this photo here, we have a green orchid bee, and it turns out there are a lot of green orchid bees that are actually different species. The way they can tell each other apart is that one type of bee will tend to one type of orchid. They collect the scent from that orchid in sacs on their hind legs, and then express that scent during mating rituals and so on, so that other bees can overcome the information asymmetry of, is this some other type of bee, or is it the same as I am?

And expressing that scent is the signal that gets them over that hurdle of asymmetry. This idea of signaling theory and information asymmetry turns up all across different disciplines: evolutionary biology, which we’re just touching on here, economics, and so on.

Another concept that comes up in signaling theory is the idea of costly signals. So a theory within a theory, if you will, or related theories. 

So the costly signals theory is that when a signal is difficult to produce, that is, it’s costly, it’s even more costly for a dishonest signaler. The theory is that receivers of signals will tend to trust costly signals more than signals that are easy to produce. In e-commerce, and we’ll talk about this a little bit, there are definitely varying costs of producing signals, and specifically I’m gonna touch on shoppers that produce costly signals.

All right, we’ve talked about signaling theory, the idea that it’s motivated by information asymmetry, and the fact that costly signals tend to engender trust. Now I wanna remind you that signals occur in an environment; they’re not in isolation. I think too often we think about a signal in isolation: what does this individual signal or piece of data mean? How can we act on it?

But actually we need to consider all of the signals in the environment in which they’re happening in order to really make the best sense of what we can learn from them. So let’s take that to the e-commerce environment and get into some actionable insights, things we can actually do with these ideas.

So in the e-commerce environment, we make this mistake of trying to interpret an individual signal. One example is: what weight should we assign to a cart? A lot of times we have some input to a model and we’re assigning weights to clicks and carts and purchases, but I’m arguing that it’s not that simple. One cart might deserve more weight than another, because what happens if the product that was carted and then purchased was later returned, or was simply removed from the cart? What if the shopper has purchased this product previously? Maybe that should have a different weight than the first time someone purchased that product. What else was in the cart when we added that product to the cart?

And what was the position the product was found at in the search results before it was carted? Maybe there’s something there we should be paying attention to. Maybe it’s a costlier signal when someone has to scroll down and find something before they cart it.
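To make this concrete, here’s a minimal sketch of context-dependent cart weighting. All of the field names and multiplier values are illustrative assumptions, not a production scoring scheme; the point is just that one add-to-cart event is not worth the same as another.

```python
# Hypothetical sketch: weight a single add-to-cart event by its context
# instead of treating every cart as equal. Fields and values are made up.

def cart_signal_weight(event):
    """Return a weight for one add-to-cart event based on what else we know."""
    weight = 1.0

    # A cart that ended in a return, or was removed before checkout,
    # is weaker evidence of relevance than one that converted and stuck.
    if event.get("later_returned"):
        weight *= 0.3
    if event.get("removed_from_cart"):
        weight *= 0.2

    # A repeat purchase may deserve a different weight than a first purchase.
    if event.get("previously_purchased"):
        weight *= 0.8

    # Carting a product found deep in the results took extra effort,
    # which, per costly-signal theory, makes it a stronger signal.
    position = event.get("result_position", 1)
    if position > 10:
        weight *= 1.5

    return weight
```

A model input pipeline could call something like this per event, so that a cart followed by a return contributes far less than a hard-won cart found on page four.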

So let’s talk about tenacious shoppers. Don’t try to look this up; I’m making this label up here. So a tenacious shopper, what do I mean by that? I’ll get into this in the next slide in more detail, but that’s the shopper that doesn’t give up. They try really hard. So what can we do with the costly signals that these tenacious shoppers generate? Well, one thing is maybe train better semantic models. I won’t say maybe; we definitely can improve the training of semantic models, such as the model we use in Never Null at Lucidworks. We could potentially improve recall, or even augment a product classification, because these tenacious shoppers are telling us something. So what are they telling us?

When a shopper, a tenacious shopper, searches and doesn’t see what they like, they abandon the search, they abandon the browse, but they don’t give up; they keep trying. And ultimately they cart something, they purchase something. So how can we tell that this is a costly signal of relevance?

Well, because we can take these signals in aggregate. Let’s take the example of abandoning the search. If we take all of the shoppers that abandoned a search, and maybe apply some other rules about how much time goes by and the types of things that happen between abandoning the search and carting some product, and then take all of those signals in aggregate, those are like votes for what should have been there when they searched. Votes for relevance, in other words.

So here’s an example. Let’s say I search for casual sneakers and I look at this one product in the result, and yuck, I don’t like it. But I’m a tenacious shopper; I’m not giving up on this retailer. Maybe I’m loyal for some reason, or maybe I’ve heard about something they have and figure I’m just not looking the right way. So I try a different search. I relax my search and search for just sneakers.

And now I see one that I might be interested in, and I go ahead and cart that one. So take this signal in aggregate: let’s say we find that 100 shoppers have done this over a period of a month. That’s a pretty strong signal, and a costly signal, because those hundred searchers had to try pretty hard to get to this point, to do extra work. So that’s a costly signal indicating that this particular pair of shoes, this product, according to these users, should be classified as a casual sneaker.
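The aggregation step can be sketched roughly like this. This is a simplified assumption of how you might mine session logs: the event schema, the session structure, and the lookahead rule are all hypothetical, and a real implementation would add the time-gap and intervening-action rules mentioned above.

```python
from collections import Counter

def relevance_votes(sessions, max_gap=3):
    """
    Aggregate the 'tenacious shopper' pattern into relevance votes.
    Each session is a list of events like ("search", query) or ("cart", product).
    A cart within max_gap events of an earlier search counts as a vote that
    the carted product was relevant to that query, even if the shopper had
    to reformulate to find it.
    """
    votes = Counter()
    for session in sessions:
        for i, (kind, value) in enumerate(session):
            if kind != "search":
                continue
            # Look ahead a bounded number of events for an add-to-cart.
            for j in range(i + 1, min(i + 1 + max_gap, len(session))):
                later_kind, later_value = session[j]
                if later_kind == "cart":
                    votes[(value, later_value)] += 1
                    break
    return votes
```

In the sneakers example, a session of `search "casual sneakers"` then `search "sneakers"` then `cart shoe` yields a vote for ("casual sneakers", shoe): evidence that the shoe should have appeared, or been classified, under the abandoned query.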

And so we might wanna take that into account when we’re training models for ranking, but we could also look at whether these costly signals from shoppers can help us classify products.

So what are the takeaways? The tenacious shopper pattern can be identified in logs. We can see these patterns, and they can be more complex than the simple example I just gave. The signals in aggregate are like votes for relevance, given some context like a search query or a browse category. So some potential uses of these signals, things I’m exploring right now: first, as I mentioned, improve the training of semantic models.

So I’m looking into how to weight these tenacious shopper signals, give them a little more weight and nudge things in that direction. Also improving search and browse recall: perhaps we can add products on the fly to search results and browse categories based on what we learn from these tenacious shoppers. And connected to that, we could use this behind the scenes to augment catalogs, or augment the classification of products, based on these types of signals.

Okay, counterfactuals. Interesting word. It sounds like, oh, I’m gonna have to spend some time learning about that, but actually a counterfactual simply means considering the thing that didn’t happen: trying to make predictions about what the world would be like if the thing that didn’t happen had happened. The motivation here is that we wanna use logs of signals from some existing model, your signals from however your product discovery system is working today, to train a new and improved model. But we wanna find a way to train this new model without impacting current shoppers, so we’re not just trying new models on live traffic.

And there’s a problem with training a new model from an old model: you’ll tend to just revert to the behavior of the old model unless you address the problems I’ll talk about in the next slide. A third motivation: we’re always looking for ways to mitigate the cold start problem, the cold start bias against new products. They don’t have signals; how can we deal with that?

So, counterfactuals. Here I’m talking about a principle called Counterfactual Risk Minimization, or CRM. The image here, of a book about counterfactual imagination in history, is just a reminder that this is not a highfalutin concept. Some of the algorithms, as usual, are a little dense to understand at first, but the idea is: how would something have performed if it had been in place instead of the current ranking system?

So again, we wanna train on the logs from the existing system, but the problem is that those logs reflect the bias of the current model. Maybe you have a model that isn’t addressing position bias or other types of biases. So how can we train a new model on those signals while addressing the problem of bias, especially position bias?

And there’s also the fact that the logs of the current system can only tell us about propensities having to do with what the current system showed or how it behaved. That’s a partial information problem, and it causes all kinds of headaches. It’s also related to selection bias.

So let’s take an example. I browse for men’s shirts, and because I’m a tenacious shopper, I find my way to page four and see this shirt. Let’s just say I’m attracted to it, or I’m thinking about adding it to my cart. And it’s new; maybe that’s why it’s on page four. Now, how can we help this product with its cold start problem? How many people are going to be tenacious, make their way to page four, and notice this shirt?

Well, one idea is that we have a similar shirt on page one that does have signals. And bear with me here, I know the shirt isn’t a great example of similarity, the price is quite a bit different, but let’s just pretend that it’s very similar. If we can identify that these two products are similar, then we can give an appropriate boost to the new product based on some weighting of the signals associated with the similar product.

So maybe we use a semantic similarity having to do with the textual description of the shirts, or maybe a visual similarity, or maybe both, and we use the signals from the similar product that is observed often, that is represented on the first page often. Perhaps we weight those so that we’re discounting them a little bit, but that’s a way to give a start to that shirt so that it’s not just buried on page four forever.
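A minimal sketch of that borrowing idea might look like the following. The embeddings could come from a textual or visual similarity model; the function names, the similarity threshold, and the discount factor are all assumptions for illustration.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def cold_start_boost(new_vec, catalog, signal_counts, discount=0.5, min_sim=0.8):
    """
    Give a new, signal-less product a boost borrowed from its most similar
    well-observed product, discounted so borrowed signals count less than
    directly observed ones.
      catalog:       product_id -> embedding for products with signals
      signal_counts: product_id -> observed engagement count
    """
    best_id, best_sim = None, 0.0
    for pid, vec in catalog.items():
        sim = cosine(new_vec, vec)
        if sim > best_sim:
            best_id, best_sim = pid, sim
    # Only borrow from a genuinely similar product.
    if best_id is None or best_sim < min_sim:
        return 0.0
    return discount * best_sim * signal_counts.get(best_id, 0)
```

The discount is the key design choice: it keeps a brand-new shirt from instantly outranking the proven product whose signals it is borrowing.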

So, takeaways for counterfactuals. We have this specific principle called Counterfactual Risk Minimization. It has to do with training new models while debiasing, if that’s a word, addressing the bias of the existing model. To accomplish that, there are some techniques in counterfactual risk minimization, one of which is that we need some concept of the probability, or propensity, of a product being observed given how that product is being ranked in responses. And it could be that the existing model, and this happens all the time, doesn’t really pay attention to that. It doesn’t pay attention to figuring out what that bias is in the system.

And so estimating position bias in an existing logging system is an important part of CRM. We can sort of take that concept out; we don’t have to follow all of the elements of CRM. We can take this one idea and apply it in different ways too. One example of that would be when we’re producing training data for a semantic model, let’s say. If we’ve done the work to have a reliable estimate of the impact, the bias, of a product’s position in a listing, then we can take the data from logs and make training data that is less biased to feed into our semantic models. This idea of the propensity of observation is what I’m talking about here.
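Here’s a toy sketch of that inverse-propensity idea applied to click logs. The `1/position` examination model is a deliberately simple assumption; a real system would estimate propensities from intervention or randomization data rather than assume a curve.

```python
def estimated_propensity(position, eta=1.0):
    """
    Toy position-bias model: probability a shopper even examines a result
    at a given 1-based position. eta controls how steeply examination
    falls off. (Assumed curve; real propensities should be estimated.)
    """
    return (1.0 / position) ** eta

def ips_weighted_examples(click_log):
    """
    Turn raw click logs into less-biased training examples by weighting
    each click with the inverse of its propensity of observation.
    A click at a deep, rarely examined position gets a larger weight,
    counteracting the boost the top positions got for free.
    """
    examples = []
    for query, product, position in click_log:
        weight = 1.0 / estimated_propensity(position)
        examples.append((query, product, weight))
    return examples
```

Feeding these weighted examples into semantic model training is one way to keep the new model from simply re-learning the old model’s ordering.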

And finally, a takeaway is that we can explore using semantic models to link the performance of an often-observed product with a new product, to try to mitigate this cold start problem we often face.

Navigation signals. This is about the idea that we, correctly, pay a lot of attention to engagement and conversion signals having to do with products: clicks, carts, purchases, removes from cart, those kinds of things. But I’m making the argument here that navigation signals, like choosing menu items and refinements and so on, are important too. And we can take advantage of those; we’ll talk about how we might accomplish that.

So here’s an example of browsing a category: filters, these are actually air filters for furnaces and so on. I did this browse, and I’m showing you in two columns here the refinements that were presented to me. Actually, it’s one long column, but I cut it partway just to show you how many there are; I couldn’t fit them all. I’m showing you a chunk of them, but there were 19 facets on this response.

And boy, that’s a lot of scrolling to see what all the choices are for narrowing down these results. And of course this matters when you have a lot of products, you can see by the facet counts here that we’ve got hundreds and hundreds of these filters to consider. 

So let’s look at how these facets are used leading up to some kind of engagement or conversion, such as adding to cart. Let’s treat that as a vote for the importance or usefulness of the particular facet that was applied, and consider boosting those facets to the top of the list. We might even filter some of these facets out if we find that they’re just not being used. We can identify this kind of signal in our logs: you see a browse pattern, you follow the actions in that browsing session, the stuff that happened within that category, and you find signals of engagement and conversion that occur after applying certain refinements, checking that the shopper didn’t leave that refinement, and so on.

And so that’s a signal of the usefulness, I’m gonna call it, of that facet. But this type of interaction probably suffers from the same kind of position bias as product listings. If the current system is presenting the facets in some fixed order, let’s just say alphabetical, then we might notice, boy, the A, B, and C facets sure get used a lot more than the others. So we need some way, again, to account for that type of position bias and weight these signals appropriately, so that we don’t just repeat what’s already in place.

The thing I’m investigating now is: what is the actual impact of facet re-ranking? It’s great to push the facets that are most useful to our shoppers up to the top. It removes some friction, so they don’t have to put so much effort into finding the thing they need, but let’s measure the actual impact of that. Figuring out how to do that while taking bias into account is important too.
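Putting the pieces together, facet re-ranking with a position correction might be sketched like this. The `1/position` propensity is the same toy assumption as before, and the vote-count inputs are hypothetical; the point is dividing usage by exposure so we don’t just re-learn the alphabetical order.

```python
def rerank_facets(facets, usage_counts, shown_position, eta=1.0, min_votes=0):
    """
    Re-rank facets by position-debiased usage.
      usage_counts:   facet -> times it was applied before a conversion
      shown_position: facet -> 1-based position it was displayed at
                      under the current (e.g. alphabetical) ordering
    Dividing raw usage by a simple 1/position examination propensity
    boosts facets that were used a lot despite being shown far down.
    """
    def score(facet):
        propensity = (1.0 / shown_position.get(facet, 1)) ** eta
        return usage_counts.get(facet, 0) / propensity

    ranked = sorted(facets, key=score, reverse=True)
    # Optionally drop facets that are essentially never used.
    return [f for f in ranked if usage_counts.get(f, 0) > min_votes]
```

Notice that a facet with fewer raw votes can outrank one with more if it earned those votes from a much deeper display position, which is exactly the correction we want.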

Okay, those are three ideas for how thinking about signaling theory can help us start down some different paths to improving experience, paths we maybe wouldn’t have thought of otherwise. And some additional reading here: these articles are fascinating to me and related to the things we’ve been talking about, so hopefully you’ll find some of them worthwhile too. Thank you.
