SIGIR: How information retrieval and natural-language processing overcame their rivalry

SIGIR, the conference of the Association for Computing Machinery’s Special Interest Group on Information Retrieval, begins next week. Alessandro Moschitti, a principal scientist in the Alexa AI organization, knows the conference well, having attended for the first time in 2001 and served for the past several years on the SIGIR Senior Committee.

As an autonomous discipline, Moschitti says, information retrieval (IR) is generally traced to Gerard Salton, a computer science professor at Cornell University who in the 1960s created the first dedicated information retrieval research group. From the outset, Moschitti says, IR was marked by a rivalry with another young discipline, artificial intelligence.

Alessandro Moschitti, principal scientist with Alexa AI

In part, Moschitti says, that may have been because researchers in the two fields were competing for funding. But there was also a fundamental difference in their technical approaches: “IR was more statistical, more quantitative, while AI was pretty much logic-based,” Moschitti says.

Moschitti says that when he began attending SIGIR in 2001, that rivalry was alive and well, although the role once played by AI had passed to natural-language processing (NLP), which had emerged from AI as a discipline in its own right.

There was a clear overlap between NLP, which sought to process requests formulated in natural language, and IR, which automatically indexed or ranked search results according to their content. But at that point, NLP still relied principally on rule-based systems, while IR had continued to develop more-effective statistical and probabilistic methods.

“NLP people were saying, ‘We can do semantic analysis and build a semantic search engine,’” Moschitti says, “and the ones from IR were saying, ‘Look, we tried that approach and it performs much worse than our models.’ ‘Then we can do WordNet or semantic nets.’ ‘No, no, it’s better to apply stemming to words.’ ‘Okay, let’s use named-entity recognition or syntactic parsing to extract noun compounds.’ ‘No, we can just measure the distance between words, and this works much better than your named entities, your parsing.’”

Mending fences

The first sign of rapprochement between the two disciplines, Moschitti says, came a few years later, when researchers began to make breakthroughs in sentiment analysis, or determining a speaker’s attitude toward a topic under discussion. The ability to classify documents — reviews, say — according to their sentiments proved useful to IR researchers.

“The initial failure of NLP for IR was that document retrieval didn’t really need advanced NLP techniques,” Moschitti explains. “It wasn’t this that changed. What changed is the use of NLP for new applications that were not known at the time.”

Modern information retrieval systems, for instance, no longer simply return links to documents, Moschitti says. Instead, they often return sets of salient facts, extracted from the documents and labeled according to content type, or excerpts from the documents that users are likely to find helpful.

“This new kind of output from a search engine — which is at the core of IR — is actually putting together IR and something else,” Moschitti says. “It’s a kind of information composition or information production, and for this you really need NLP techniques — for example, information extraction.”

Then, over the past seven or eight years, came the deep-learning revolution. For NLP, a major implication of that revolution has been the near-universal reliance on embeddings, which represent words or sequences of words as points in a vector space. In many applications, proximity in the embedding space indicates similarity of meaning, based on words’ co-occurrence with other words in training texts.
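
That relationship between proximity and meaning is easy to demonstrate. The sketch below is a minimal illustration using made-up four-dimensional vectors and NumPy; real embeddings, such as word2vec or BERT vectors, are learned from data and have hundreds of dimensions. Similarity is scored as the cosine of the angle between two vectors.

```python
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Made-up 4-dimensional embeddings, for illustration only; trained models
# learn these coordinates from word co-occurrence in large corpora.
cat    = np.array([0.9, 0.8, 0.1, 0.0])
feline = np.array([0.8, 0.9, 0.2, 0.1])
truck  = np.array([0.1, 0.0, 0.9, 0.8])

print(cosine_similarity(cat, feline))  # high: nearby in embedding space
print(cosine_similarity(cat, truck))   # low: distant in embedding space
```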

IR, too, has come to rely heavily on embeddings produced by neural networks. But that reliance left most of the existing IR machinery unchanged.

That’s because IR researchers had depended on vector representations for decades. The work that earned Salton the title “father of information retrieval” was precisely a system for encoding both queries and documents as vectors, based on the relative frequency with which particular terms occurred in individual documents and in large corpora of documents.
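
That lineage is easy to make concrete. The sketch below is a simplified, illustrative version of tf-idf weighting in a vector space model, not the exact weighting of Salton’s SMART system: documents and queries become sparse term vectors, and ranking is by the cosine of the angle between them. The naive whitespace tokenization and the three-document corpus are invented for the example.

```python
import math
from collections import Counter

def build_idf(docs):
    """Inverse document frequency: rarer terms get higher weight."""
    df = Counter(term for doc in docs for term in set(doc))
    return {term: math.log(len(docs) / df[term]) for term in df}

def tf_idf(tokens, idf):
    """Sparse vector: term frequency in the text times corpus idf."""
    tf = Counter(tokens)
    return {term: tf[term] * idf.get(term, 0.0) for term in tf}

def cosine(u, v):
    """Cosine of the angle between two sparse term vectors."""
    dot = sum(weight * v.get(term, 0.0) for term, weight in u.items())
    norm = lambda vec: math.sqrt(sum(w * w for w in vec.values()))
    return dot / (norm(u) * norm(v)) if norm(u) and norm(v) else 0.0

docs = [d.split() for d in [
    "salton pioneered vector space retrieval",
    "neural networks learn dense embeddings",
    "vector models rank documents by term weights",
]]
idf = build_idf(docs)
query = tf_idf("vector retrieval".split(), idf)
best = max(docs, key=lambda d: cosine(query, tf_idf(d, idf)))
print(" ".join(best))  # -> "salton pioneered vector space retrieval"
```

Note that in this simplified variant, a term appearing in every document gets an idf of zero and contributes nothing to ranking, which is exactly the downweighting of common words the scheme is meant to produce.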

Historically, natural-language-processing researchers focused on semantic retrieval (top), which sought to match the semantic structure of queries to semantic relationships encoded in a knowledge base, while information retrieval researchers focused on vector space models (bottom), which represented search terms as points in a vector space. In the vector space model, the angle between two vectors represented the semantic similarity between the associated terms.

Credit: Stacy Reilly

“This is what IR people have been doing since the beginning,” Moschitti says. “Their main approaches are based on vectors. So the neural world wasn’t so closed to the IR community. They could more quickly appreciate embeddings, vector representations of text. For them it was completely fine.”

“Now NLP and IR are even closer because they use the same tools,” Moschitti adds. “If you go to an IR conference, 90 percent of what you find regarding text will overlap with papers you can find at ACL [the annual meeting of the Association for Computational Linguistics].”

As a case in point, Moschitti notes that one of his own papers at this year’s SIGIR is a follow-up on work he reported earlier this year at the annual meeting of the Association for the Advancement of Artificial Intelligence (AAAI).

The AAAI paper describes a question-answering system that halved the error rate on a benchmark data set, but it required a computationally expensive neural-network architecture called the Transformer. In the SIGIR paper, Moschitti and his colleagues describe how to use a faster neural network to produce a short list of candidate answers to a question, which dramatically reduces the computational burden on the Transformer.

Between AAAI and SIGIR, however, at ACL, Moschitti and Luca Soldaini, an applied scientist on his team at Amazon, presented a more general version of this system, which uses a stack of question-answering models, arranged in a hierarchy inside the Transformer itself. The system, which they call the Cascade Transformer, applies a sequence of models of increasing complexity and accuracy to candidate answers to a question. Adjusting the number of candidates flowing from each model to the next enables the system to trade off speed and accuracy.
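
The speed/accuracy dial in that last sentence is easy to see in code. Below is a minimal sketch of the candidate-pruning logic only, under loudly labeled assumptions: the function name cascade_rank, its parameters, and the stand-in scorers are inventions for illustration. In the actual Cascade Transformer, the stages are classification heads at successive layers of a single transformer rather than separate functions; with just two stages, the pattern reduces to the fast-shortlist-then-Transformer setup of the SIGIR paper.

```python
from typing import Callable, List, Tuple

Scorer = Callable[[str, str], float]  # (question, candidate) -> relevance

def cascade_rank(question: str, candidates: List[str],
                 scorers: List[Scorer],
                 keep_fractions: List[float]) -> List[Tuple[str, float]]:
    """Apply increasingly expensive scorers, pruning candidates after each.

    keep_fractions is the speed/accuracy dial: keeping more candidates per
    stage costs more compute in later (costlier) stages but risks dropping
    fewer correct answers early.
    """
    pool = candidates
    scored: List[Tuple[str, float]] = []
    for scorer, keep in zip(scorers, keep_fractions):
        scored = sorted(((c, scorer(question, c)) for c in pool),
                        key=lambda pair: pair[1], reverse=True)
        pool = [c for c, _ in scored[:max(1, int(len(scored) * keep))]]
    return scored[:len(pool)]

# Stand-in scorers, invented for illustration: a cheap lexical-overlap
# count followed by a slightly costlier refinement. In the real system
# the stages are neural models of increasing depth and accuracy.
cheap = lambda q, c: float(len(set(q.split()) & set(c.split())))
costly = lambda q, c: cheap(q, c) + (0.5 if "next week" in c else 0.0)

print(cascade_rank("when does SIGIR begin",
                   ["SIGIR begins next week", "IR relies on vectors",
                    "NLP parses text"],
                   scorers=[cheap, costly],
                   keep_fractions=[0.67, 1.0]))
```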

That’s a single line of research that spawned papers at three different conferences: one on AI, one on computational linguistics, and one on information retrieval.

“Now the fields are very, very similar,” Moschitti says.


