
More-efficient approximate nearest-neighbor search


Many of today’s machine learning (ML) applications involve nearest-neighbor search: data are represented as points in a high-dimensional space; a query (say, a photograph or text string to be matched to a data point) is embedded in that space; and the data points closest to the query are retrieved as candidate solutions.

Often, however, computing the distance between the query and every point in the dataset is prohibitively time consuming, so model builders instead use approximate nearest-neighbor search techniques. One of the most popular of these is graph-based approximation, in which the data points are organized into a graph. The search algorithm traverses the graph, regularly updating a list of the points nearest the query that it has encountered so far.


In a paper we presented at this year’s Web Conference, we describe a new technique that makes graph-based nearest-neighbor search much more efficient. The technique is based on the observation that, when calculating the distance between the query and points that are farther away than any of the candidates currently on the list, an approximate distance measure will usually suffice. Accordingly, we propose a method for computing approximate distance very efficiently and show that it reduces the time required to perform approximate nearest-neighbor search by 20% to 60%.

Graph-based search

Broadly speaking, approximate k-nearest-neighbor search algorithms — which find the k neighbors nearest the query vector — fall into three categories: quantization methods, space-partitioning methods, and graph-based methods. On several benchmark datasets, graph-based methods have yielded the best performance so far.

Given the embedding of a query, q, graph-based search picks a point in the graph, c, and explores all its neighbors — that is, the nodes with which it shares edges. The algorithm calculates those nodes’ distance from the query and adds the closest ones to the list of candidates. Then, from those candidates, it selects the one closest to the query and explores its neighbors, updating the list as necessary. This procedure continues until the distances between the unexplored graph nodes and the query vector begin increasing — an indication that the algorithm is leaving the neighborhood of the true nearest neighbor.
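
This traversal can be made concrete in a few lines. Below is a minimal sketch in Python, assuming the graph is stored as a dictionary from node IDs to neighbor-ID lists and the embeddings as rows of a NumPy array; the function and variable names are ours for illustration, not the paper's.

```python
import heapq
import numpy as np

def greedy_graph_search(graph, vectors, query, entry, k=10):
    """Best-first search over a proximity graph (a sketch of the
    standard algorithm, not FINGER itself)."""
    dist = lambda i: np.linalg.norm(vectors[i] - query)
    visited = {entry}
    frontier = [(dist(entry), entry)]      # min-heap of nodes to explore
    candidates = [(-dist(entry), entry)]   # max-heap of best k found so far
    while frontier:
        d_c, c = heapq.heappop(frontier)
        # Stop when the closest unexplored node is farther than the worst
        # current candidate: the search is leaving the true neighborhood.
        if len(candidates) == k and d_c > -candidates[0][0]:
            break
        for nbr in graph[c]:               # explore c's neighbors
            if nbr in visited:
                continue
            visited.add(nbr)
            d_n = dist(nbr)
            if len(candidates) < k or d_n < -candidates[0][0]:
                heapq.heappush(candidates, (-d_n, nbr))
                if len(candidates) > k:    # keep only the k closest
                    heapq.heappop(candidates)
                heapq.heappush(frontier, (d_n, nbr))
    return sorted((-nd, n) for nd, n in candidates)
```

Note that the inner loop is dominated by the distance computations, which is exactly the cost FINGER targets.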


Past research on graph-based approximation has concentrated on methods for assembling the underlying graph. Some methods, for instance, add connections between a given node and distant nodes, to help ensure that the search doesn’t get stuck in a local minimum; some methods concentrate on pruning highly connected nodes to prevent the same node from being visited over and over. Each of these methods has its advantages, but none is a clear winner across the board.

We instead focus on a technique that works with any graph construction method, since it increases the efficiency of the search process itself. We call that technique FINGER, for fast inference for graph-based approximate nearest-neighbor search.

Approximating distance

Consider the case of a query vector, q, a node whose neighbors are being explored, c, and one of c’s neighbors, d, whose distance from q we wish to compute.

FINGER defines the distance between a query vector, q, and a new graph node vector, d, by reference to the vector of a previously explored node, c. Both q and d can be represented as the sums of projections along c (q_proj and d_proj) and “residual” vectors (q_res and d_res) orthogonal to c.

Both q and d can be represented as the sums of projections along c and “residual” vectors perpendicular to c; in essence, this treats c as a basis vector of the space.

If the algorithm is exploring neighbors of c, that means it has already calculated the distance between c and q. In our paper, we show that, if we take advantage of that existing calculation, along with certain manipulations of node vectors’ values, which can be precomputed and stored, estimating the distance between q and d is simply a matter of estimating the angle between their residual vectors.
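
The full derivation is in the paper, but the underlying algebra is easy to state. The sketch below, assuming squared Euclidean distance and using illustrative names of our own choosing, shows the exact decomposition; the savings come from precomputing the projection terms and residual norms and replacing the true cosine of the residual angle with a cheap estimate (cos_est).

```python
import numpy as np

def decompose(v, c):
    """Split v into its projection along c plus the orthogonal residual."""
    proj = (v @ c) / (c @ c) * c
    return proj, v - proj

def approx_squared_distance(q, d, c, cos_est):
    """Squared L2 distance between q and d via the decomposition along c.
    Everything here except cos_est is exact and, for graph nodes, can be
    precomputed; cos_est stands in for the true cosine of the angle
    between the two residual vectors."""
    q_proj, q_res = decompose(q, c)
    d_proj, d_res = decompose(d, c)
    # Parallel and orthogonal components separate under L2 (Pythagoras):
    # ||q - d||^2 = ||q_proj - d_proj||^2 + ||q_res - d_res||^2
    parallel = np.sum((q_proj - d_proj) ** 2)
    rq, rd = np.linalg.norm(q_res), np.linalg.norm(d_res)
    return parallel + rq**2 + rd**2 - 2 * rq * rd * cos_est
```

Passing the true cosine, (q_res @ d_res) / (rq * rd), recovers np.sum((q - d) ** 2) exactly; the approximation error comes entirely from the angle estimate.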


And that angle, we argue, can be reasonably approximated from the angles between the residual vectors of c’s immediate neighbors — those that share edges with c in the graph. The idea is that, if q is close enough to c that c is worth exploring, then if q were part of the graph, it would probably be one of c’s nearest neighbors. Consequently, the relationships between the residual vectors of c’s other neighbors tell us something about the relationship between the residual vector of one of those neighbors — d — and q’s residual vector.
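
The paper specifies the exact estimator; purely as an illustration of how an angle between residual vectors can be estimated from cheap, precomputed signatures, the sketch below uses random-hyperplane (SimHash) bits, a standard locality-sensitive-hashing trick, not necessarily FINGER's precise construction.

```python
import numpy as np

def residual_signatures(residuals, n_bits=64, seed=0):
    """Precompute one sign bit per random hyperplane for each residual
    vector; 'residuals' is an (n, dim) array. The bits can be stored
    alongside the graph and compared with fast bitwise operations.
    Returns the signatures and the planes (needed to hash a query)."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((n_bits, residuals.shape[1]))
    return residuals @ planes.T > 0, planes

def estimate_cos(sig_a, sig_b):
    """For random hyperplanes, P(bits disagree) = angle / pi, so the
    observed disagreement rate yields an estimate of cos(angle)."""
    theta = np.pi * np.mean(sig_a != sig_b)
    return np.cos(theta)
```

A query's residual is hashed once with the same planes ((q_res @ planes.T) > 0), after which each candidate comparison costs only a bit-level operation instead of a full inner product.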

To evaluate our approach, we compared FINGER’s performance to that of three prior graph-based approximation methods on three different datasets. Across a range of recall10@10 rates (the fraction of a query’s 10 true nearest neighbors found among the algorithm’s 10 top candidates), FINGER searched more efficiently than all of its predecessors. Sometimes the difference was quite dramatic: 50% on one dataset, at the high recall rate of 98%, and almost 88% on another dataset, at the recall rate of 86%.
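
For concreteness, the metric can be written out directly (a hypothetical helper, not code from the paper):

```python
def recall_k_at_k(true_ids, retrieved_ids, k=10):
    """recall k@k: fraction of the k true nearest neighbors that appear
    among the algorithm's k returned candidates."""
    return len(set(true_ids[:k]) & set(retrieved_ids[:k])) / k
```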


