Using machine learning to train information retrieval models, such as Internet search engines, is difficult because it requires so much manually annotated data. Of course, training most machine learning systems requires manually annotated data, but because information retrieval models must handle such a wide variety of queries, they require an especially large amount of it. Consequently, most information retrieval systems rely primarily on mechanisms other than machine learning.
This week at the ACM’s SIGIR Conference on Research and Development in Information Retrieval, my colleagues and I will describe a new way to train deep neural information retrieval models with less manual supervision. Where standard training with annotated data is referred to as supervised, our approach is weakly supervised. Weak supervision allows us to create data sets with millions of entries, instead of the tens of thousands typical with strong supervision.
In tests, our weak-supervision technique not only yielded more-accurate retrieval models than the supervised baseline (which was trained on limited data) but also offered improvements over previous weak-supervision techniques. It offered dramatic improvements, too, over the type of algorithm commonly used to assess the “relevance” of search results.
Typically, neural-network-based information retrieval systems are trained on data triples: each item in the training set consists of a query and two documents, one that satisfies the user’s information need (the relevant document) and one that does not but is still related to the query (the non-relevant document). During training, the neural network learns to maximize the difference between the scores it assigns to the relevant and non-relevant documents. Here, manual annotation means tagging documents as relevant or non-relevant to particular queries.
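This pairwise objective can be sketched with a standard margin (hinge) ranking loss. The PyTorch snippet below is a minimal illustration, not the model or code from the paper; `ScoreModel` and the random feature vectors are placeholders for a real neural relevance model and real query/document encodings.

```python
import torch
import torch.nn as nn

class ScoreModel(nn.Module):
    """Toy stand-in for a neural relevance model that scores (query, document) pairs."""
    def __init__(self, dim=64):
        super().__init__()
        self.ff = nn.Sequential(nn.Linear(2 * dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, query_vec, doc_vec):
        return self.ff(torch.cat([query_vec, doc_vec], dim=-1)).squeeze(-1)

model = ScoreModel()
loss_fn = nn.MarginRankingLoss(margin=1.0)  # hinge loss on the score difference
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on a batch of triples (query, relevant doc, non-relevant doc),
# represented here by random vectors purely for illustration.
query = torch.randn(32, 64)
relevant_doc = torch.randn(32, 64)
nonrelevant_doc = torch.randn(32, 64)

relevant_score = model(query, relevant_doc)
nonrelevant_score = model(query, nonrelevant_doc)

# target = 1 tells the loss that the relevant score should exceed the
# non-relevant score by at least the margin.
optimizer.zero_grad()
loss = loss_fn(relevant_score, nonrelevant_score, torch.ones_like(relevant_score))
loss.backward()
optimizer.step()
```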
In our approach, we leverage the fact that news article headlines and Wikipedia entry section titles are already associated with relevant texts: the articles and sections they introduce. Headlines and titles may not look exactly like search strings, but our hypothesis was that they’re similar enough for training purposes. Training a machine learning system to find correlations between headlines and articles, we reasoned, should help it find correlations between search strings and texts.
Our first step: collect millions of document-title pairs from the New York Times’ online repository and from Wikipedia. Each document-title pair constituted two-thirds of a data triple we would use in training a machine learning system: the query and the relevant text. To round out the triples, we used an industry-standard algorithm to identify texts that are related to the query (but less relevant than the associated text). The algorithm assigns relevance scores based on the number of words in the document that also appear in the query.
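As a rough sketch of how such triples can be assembled, the snippet below pairs each headline (the query) with its own article (the relevant document) and uses a crude word-overlap score, standing in for the production-grade ranking algorithm, to pick a related but presumably less relevant document from the rest of the corpus. The headlines and articles are invented for illustration.

```python
def overlap_score(query, doc):
    """Crude relevance score: count query-word occurrences in the document.
    A stand-in for the industry-standard ranking algorithm mentioned above."""
    query_terms = set(query.lower().split())
    return sum(1 for w in doc.lower().split() if w in query_terms)

def build_triple(title, article, corpus):
    """Turn a (title, article) pair into a training triple by adding a
    related-but-less-relevant document drawn from the rest of the corpus."""
    candidates = [doc for doc in corpus if doc is not article]
    # Pick the highest-scoring other document: related to the title,
    # but presumed less relevant than the article the title actually introduces.
    non_relevant = max(candidates, key=lambda d: overlap_score(title, d))
    return {"query": title, "relevant": article, "non_relevant": non_relevant}

# Hypothetical toy corpus of (headline, article) pairs.
pairs = [
    ("Senate passes new budget bill", "The Senate voted on Thursday to pass ..."),
    ("Local team wins championship", "Fans celebrated downtown after the final ..."),
    ("Budget negotiations stall in Congress", "Lawmakers failed to agree on the budget ..."),
]
corpus = [article for _, article in pairs]
triples = [build_triple(title, article, corpus) for title, article in pairs]
```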
As a strong baseline, we used a data set from AOL consisting of actual customer queries and search results. Here, we used the standard algorithm to identify the most relevant and non-relevant texts for each query. We also used two other baselines: a set of about 25,000 hand-annotated data triples and the application of the standard relevance algorithm to the test data.
With each of the four training sets (NYT, Wikipedia, AOL, and the hand-annotated set), we trained three different neural networks to do information retrieval and scored them using a metric called normalized discounted cumulative gain (nDCG). We used this metric to measure the cumulative relevance of the top 20 results returned by each network. Of the baselines, the combination of the AOL data set and a neural architecture called a position-aware convolutional recurrent relevance network, or PACRR, yielded the best results. But on the same architecture, our NYT data set offered a 12% increase in nDCG. (The Wikipedia data set also conferred gains, but they were less dramatic.)
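nDCG rewards rankings that place highly relevant documents near the top, applies a logarithmic discount to lower positions, and normalizes by the score of the ideal ordering. A minimal implementation of nDCG@20 (using the linear-gain variant of the metric) looks like this:

```python
import math

def ndcg_at_k(relevances, k=20):
    """Normalized discounted cumulative gain for one ranked result list.
    `relevances` holds graded relevance labels in the order the system returned
    the documents; the ideal ordering is the same labels sorted descending."""
    def dcg(rels):
        return sum(rel / math.log2(i + 2) for i, rel in enumerate(rels[:k]))
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# Example: graded judgments (2 = highly relevant, 1 = relevant, 0 = not)
# for the top results returned by a retrieval model, in ranked order.
print(ndcg_at_k([2, 0, 1, 2, 0, 0, 1]))
```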
Once we established the utility of our approach, we tried to improve it still further by tuning our information retrieval system to the domain of the data on which it was going to be tested. To do this, we used two different filtration techniques to limit the training data to samples similar to those in the test set.
The first technique was to take some canonical examples of data from the target domain and use a representation function to map them to a multidimensional space. We then selected training examples that the same function mapped to nearby points in that space and used them to re-train the information retrieval system.
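A sketch of this first filtering step, assuming a generic `embed` representation function (the random-vector placeholder below just keeps the example self-contained) and cosine similarity as the notion of “nearby”:

```python
import numpy as np

def embed(texts):
    """Placeholder representation function. In practice this could be any
    text encoder (e.g., averaged word embeddings); random vectors here
    keep the sketch self-contained."""
    rng = np.random.default_rng()
    return rng.normal(size=(len(texts), 128))

def filter_by_similarity(train_queries, target_queries, keep_fraction=0.3):
    """Keep the training queries whose representations lie closest to any
    canonical example from the target domain (by cosine similarity)."""
    train_vecs = embed(train_queries)
    target_vecs = embed(target_queries)
    train_vecs /= np.linalg.norm(train_vecs, axis=1, keepdims=True)
    target_vecs /= np.linalg.norm(target_vecs, axis=1, keepdims=True)
    # For each training query, its best similarity to any target-domain example.
    best_sim = (train_vecs @ target_vecs.T).max(axis=1)
    n_keep = int(len(train_queries) * keep_fraction)
    keep_idx = np.argsort(-best_sim)[:n_keep]
    return [train_queries[i] for i in keep_idx]
```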
The second technique was somewhat akin to adversarial training: we trained a neural network to distinguish data from the new target domain from the data originally used to train the information retrieval system. Then we kept only the training examples that received low confidence scores from the discriminator — the ones that were hard to distinguish from data in the new domain.
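The sketch below illustrates the idea with a simple TF-IDF-plus-logistic-regression discriminator rather than the neural network described above, purely to keep the example self-contained: train it to separate the original training queries from target-domain queries, then keep the training examples it is least confident belong to the original training domain.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def discriminator_filter(train_queries, target_queries, keep_fraction=0.3):
    """Keep the training queries that a domain discriminator finds hardest
    to tell apart from target-domain queries."""
    texts = train_queries + target_queries
    labels = [0] * len(train_queries) + [1] * len(target_queries)
    vec = TfidfVectorizer()
    clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(texts), labels)

    # Probability that each training query belongs to the original training
    # domain; low values mean it looks much like the target domain.
    p_train_domain = clf.predict_proba(vec.transform(train_queries))[:, 0]
    order = p_train_domain.argsort()  # least confidently "training domain" first
    n_keep = int(len(train_queries) * keep_fraction)
    return [train_queries[i] for i in order[:n_keep]]
```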
The discriminator-based approach worked best. Again, the combination of the PACRR network and the NYT data set yielded the best results, and re-training that retrieval model on data filtered using the neural discriminator boosted the nDCG score by 35%.
Acknowledgments: Sean MacAvaney, Andrew Yates, Ophir Frieder