Biased graph sampling for better related-product recommendation

E-commerce sites often recommend products related to customer queries: phone cases for someone shopping for a phone, for instance. Information about product relationships is frequently represented by graphs with directed edges, meaning that the relationships represented by the edges flow in only one direction. It makes sense to recommend a case to someone shopping for a phone, but someone shopping for a case probably doesn't need a phone recommendation.

In a paper we presented last year at the European Conference on Machine Learning (ECML), we showed that graph neural networks can capture the directionality of product similarity graphs by using dual embeddings (vector representations) for each graph node: one embedding represents the node as recommendation source, the other as recommendation target.

Figure: At center, a graph indicating the relationships between cell phones and related products such as a case, a power adaptor, and a screen guard. At left, a schematic illustrating the embedding (vector representation) of node A in a traditional graph neural network (GNN); at right, the dual embedding of A in BLADE, as both a recommendation target (A-t) and a recommendation source (A-s).
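To make the asymmetry concrete, here is a minimal sketch of how dual embeddings can score a directed recommendation. The array names and the dot-product scorer are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Hypothetical dual embeddings: one matrix for nodes acting as
# recommendation sources, one for nodes acting as targets.
num_nodes, dim = 4, 8
rng = np.random.default_rng(0)
source_emb = rng.normal(size=(num_nodes, dim))  # each node as a source
target_emb = rng.normal(size=(num_nodes, dim))  # each node as a target

def score(u: int, v: int) -> float:
    """Score the directed recommendation u -> v (assumed dot-product scorer)."""
    return float(source_emb[u] @ target_emb[v])

# Directionality: score(phone, case) need not equal score(case, phone).
phone, case = 0, 1
print(score(phone, case), score(case, phone))
```

Because each node carries two distinct vectors, the model can rank a case highly as a target for a phone query without also ranking a phone highly as a target for a case query.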

At this year’s ACM Conference on Web Search and Data Mining (WSDM), we expanded on that work with a new approach to embedding the nodes of directed graphs. Specifically, we tailor the embedding procedure to the degree of the graph node, or how many connections it has to other nodes. This allows us to leverage the centrality of highly connected nodes while ranging farther afield to gather information about sparsely connected nodes.


In experiments, we compared the performance of our new model to those of three state-of-the-art predecessors, on six different public datasets, with three different numbers of recommendations per query (5, 10, and 20). Our model outperformed the others across the board; its edge over the second-best performer ranged from 4% to 230%, as measured by hit rate and mean reciprocal rank.
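For reference, hit rate at k and mean reciprocal rank can be computed as in the generic sketch below; this is the standard definition of the two metrics, not code from the paper.

```python
def hit_rate_at_k(ranked_lists, ground_truth, k):
    """Fraction of queries whose true item appears in the top-k recommendations."""
    hits = sum(1 for recs, truth in zip(ranked_lists, ground_truth)
               if truth in recs[:k])
    return hits / len(ranked_lists)

def mean_reciprocal_rank(ranked_lists, ground_truth):
    """Average of 1/rank of the true item (0 if it is absent from the list)."""
    total = 0.0
    for recs, truth in zip(ranked_lists, ground_truth):
        total += 1.0 / (recs.index(truth) + 1) if truth in recs else 0.0
    return total / len(ranked_lists)

# Example: two queries, each with a ranked recommendation list.
recs = [["case", "charger", "guard"], ["phone", "guard", "case"]]
truth = ["charger", "case"]
print(hit_rate_at_k(recs, truth, 2))      # 0.5
print(mean_reciprocal_rank(recs, truth))  # (1/2 + 1/3) / 2
```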

Graph neural networks

Graph neural networks (GNNs) are neural networks that take graphs as input and output embeddings for each graph node that capture information not only about that node but also about its relationships to other nodes. Those embeddings can be used for a variety of tasks, such as link prediction, anomaly detection — or, in our case, related-product recommendation.

GNN embeddings are iterative: first, the network embeds each node on the basis of its associated information — here, product information; then it re-embeds each node based on both its own first-round embedding and those of the nodes connected to it. This process can repeat indefinitely, expanding the neighborhood of the embedded node to two hops, three hops — up to the size of the entire graph.
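The sketch below renders that message-passing loop in simplified form. Mean aggregation and simple averaging stand in for the learned transformations a real GNN would apply, so treat it as a schematic of the iteration, not an implementation.

```python
import numpy as np

def gnn_embed(features, adjacency, num_hops=2):
    """Iteratively re-embed each node from its own embedding and its
    neighbors' embeddings. Each round widens the effective neighborhood
    by one hop."""
    emb = features.copy()
    for _ in range(num_hops):
        new_emb = np.empty_like(emb)
        for v, neighbors in enumerate(adjacency):
            if neighbors:
                neighbor_mean = emb[neighbors].mean(axis=0)
                new_emb[v] = 0.5 * emb[v] + 0.5 * neighbor_mean
            else:
                new_emb[v] = emb[v]
        emb = new_emb
    return emb

# Toy graph: node 0 (a phone) is linked to its accessories 1, 2, and 3.
adjacency = [[1, 2, 3], [0], [0], [0]]
features = np.eye(4)  # one-hot "product information" for illustration
print(gnn_embed(features, adjacency))
```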


For graphs with many densely connected (high-degree) nodes, it may be impractical to factor all of a node’s neighbors into its embedding. In such cases, the GNN will typically sample the neighbors at each iteration of the embedding procedure.

In the typical implementation of a GNN, the size of each node’s neighborhood — the number of hops that factor into its embedding — is fixed. That number is often one or two. Usually, the node sampling is also uniform: each of a given node’s neighbors has an equal probability of factoring into the node’s embedding.
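A hedged sketch of that conventional scheme follows: a fixed number of hops, with a fixed number of neighbors drawn uniformly at random at each hop. The fan-out value is an illustrative choice.

```python
import random

def uniform_sample_neighborhood(node, adjacency, num_hops=2, fanout=3, seed=0):
    """Standard GNN sampling: at each hop, draw up to `fanout` neighbors
    uniformly at random, regardless of how well connected they are."""
    rng = random.Random(seed)
    frontier, sampled = [node], []
    for _ in range(num_hops):
        next_frontier = []
        for v in frontier:
            neighbors = adjacency[v]
            next_frontier.extend(rng.sample(neighbors, min(fanout, len(neighbors))))
        sampled.append(next_frontier)
        frontier = next_frontier
    return sampled  # list of sampled nodes, one sublist per hop

adjacency = {0: [1, 2, 3, 4], 1: [0, 5], 2: [0], 3: [0], 4: [0], 5: [1]}
print(uniform_sample_neighborhood(0, adjacency))
```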

This approach has limitations. For a high-degree node, a one- or two-hop embedding may be adequate: the immediate neighborhood contains enough information to characterize the node. But for a low-degree node, it may be necessary to follow a longer chain of connections to gather enough information to produce a useful embedding.

By the same token, if the node being embedded is connected to both a high-degree and a low-degree node, sampling the high-degree node will generally be more productive, since its embedding incorporates more information about the neighborhood. Uniform sampling thus misses an opportunity to enrich a node’s embedding.

Our approach, BLADE (biased locally adaptive direction aware), addresses both of these limitations. It begins with the framework we presented previously, which produces source and target embeddings for each node.


The scope of its embeddings, however, varies according to the in-degree (the number of inbound edges) of the node being embedded. In the paper, we show how to compute the size of the neighborhood using a power law distribution that factors in the node's in-degree and the minimum in-degree of all nodes in the graph. We also show how to estimate the power law coefficient from the in-degrees of all the nodes in the graph.
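The sketch below captures the shape of that idea under explicit assumptions: the coefficient is estimated with the standard maximum-likelihood formula for power laws, and the neighborhood size shrinks with in-degree according to an assumed functional form. Consult the paper for the exact computation.

```python
import math

def estimate_power_law_alpha(in_degrees):
    """Standard MLE for a power-law exponent (Clauset et al.);
    the paper's exact estimator may differ."""
    d_min = max(min(in_degrees), 1)
    n = len(in_degrees)
    log_sum = sum(math.log(d / d_min) for d in in_degrees if d >= d_min)
    return 1.0 + n / log_sum if log_sum > 0 else float("inf")

def neighborhood_hops(in_degree, min_in_degree, alpha, max_hops=4):
    """ASSUMED functional form: low in-degree nodes get wider
    neighborhoods, high in-degree nodes get narrower ones."""
    d = max(in_degree, 1)
    scale = (max(min_in_degree, 1) / d) ** (1.0 / alpha)  # in (0, 1]
    return max(1, round(max_hops * scale))

in_degrees = [1, 1, 2, 3, 5, 8, 40]
alpha = estimate_power_law_alpha(in_degrees)
for d in (1, 5, 40):
    print(d, neighborhood_hops(d, min(in_degrees), alpha))
```

With this toy data, a node of in-degree 1 is embedded from a four-hop neighborhood while a node of in-degree 40 uses a single hop, which matches the intuition that sparsely connected nodes must range farther afield for information.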

We also provide a mechanism for weighting the probability of sampling a node's neighbors during the embedding process by factoring in those neighbors' degrees, both inbound and outbound.
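As a concrete but assumed instantiation of that weighting, the sketch below samples neighbors with probability proportional to their total degree rather than uniformly; the exact weighting function in the paper may differ.

```python
import random

def biased_sample(neighbors, in_degree, out_degree, k, seed=0):
    """Sample k neighbors (with replacement, for simplicity) with
    probability proportional to a degree-based weight, so well-connected
    neighbors, whose embeddings summarize more of the graph, are favored."""
    rng = random.Random(seed)
    weights = [in_degree[v] + out_degree[v] for v in neighbors]
    return rng.choices(neighbors, weights=weights, k=min(k, len(neighbors)))

neighbors = ["charger", "case", "obscure_cable"]
in_deg = {"charger": 120, "case": 300, "obscure_cable": 2}
out_deg = {"charger": 15, "case": 40, "obscure_cable": 1}
print(biased_sample(neighbors, in_deg, out_deg, k=2))
```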

In addition to testing our approach on the six public datasets, we also tested it on two large internal datasets. There, the improvements offered by our model were just as dramatic, ranging from 40% to 214% compared to the second-best performer. You can find more details in our paper.


