When unsupervised training pays off in natural-language processing

The first step in most natural-language-processing applications is tokenization, or breaking input strings into semantically relevant units. In many applications, these units are smaller than individual words. For instance, search results that are a good match for the query “word processing” might use the phrase “word processor”, which shares some but not all of the query’s subword units.
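
As a rough illustration, the toy Python sketch below uses hand-chosen subword splits (not the output of any particular tokenizer) to show how the query and the result phrase can still share most of their units.

```python
# Toy illustration: the query and a matching result phrase share subword
# units even though their surface words differ. The splits are hand-chosen
# for illustration, not produced by any particular tokenizer.
query_units = {"word", "process", "ing"}
result_units = {"word", "process", "or"}

shared = query_units & result_units
print(shared)  # {'word', 'process'}: enough overlap to support a match
```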

Traditionally, tokenizers have been built or trained using manually compiled lexicons — which contain information about words’ prefixes, stems, and suffixes — and data that has been hand-tokenized by human annotators. We refer to this method as language-specific tokenization (LST).

More recently, however, natural-language-processing researchers have been experimenting with systems that learn tokenization units by analyzing large bodies of unlabeled data. The obvious advantage of this approach is that it doesn’t depend on lexicons or manually tokenized corpora, which have to be created independently for every language or domain in which we want to apply the tokenizer.

Language-independent tokenization (LIT) systems, which are trained without the benefit of manually compiled lexicons, sometimes learn illogical word breaks (k/id, to/ys) that language-specific tokenization (LST) systems avoid (kid, toy/s). But when LIT tokens are embedded, or converted into fixed-length vectors, they still prove useful for search tasks that match texts according to semantic content.

Credit: Glynis Condon

Moreover, because we do not rely on a precompiled, fixed dictionary, we have a better chance of accurately tokenizing words that the tokenizer has never seen before. We refer to this method as language-independent tokenization (LIT).

LIT has had some success in applications such as machine translation systems, which often have restricted vocabularies for reasons of processing speed. However, the relative benefits of LST and LIT in broader natural-language-processing (NLP) applications remain unclear.

In a paper accepted to the Language Resources and Evaluation Conference, which was to be held last week, we compare LST and LIT methods across eight languages (English, German, Spanish, Farsi, Italian, Japanese, Turkish, and Thai), with varying vocabulary sizes.

We find that while LST still tends to work better at larger vocabulary sizes, LIT is competitive — and in some languages, superior — at small (e.g., less than 50,000 subwords) vocabulary sizes. This suggests that LIT is a viable option for applications with limited vocabularies or for languages where well-organized lexical data is not readily available.

Semantic similarity

In our experiments, we tokenized the corpus for each language using both LIT and LST methods and learned subword embeddings over the tokenized corpora. An embedding is a representation of a string of text as a fixed-length vector — a point in a multidimensional space — such that embeddings of related words or phrases are close to each other in the space. Embeddings thus capture something of the text strings’ semantic content. To learn subword embeddings, we used the global vectors (GloVe) method.
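
As a minimal sketch of how such embeddings are consumed downstream, the snippet below loads GloVe-format text vectors (one token followed by its vector components per line) into a simple lookup table; the file name is hypothetical and stands in for vectors trained on the tokenized corpus.

```python
import numpy as np

# Minimal sketch: load GloVe-format text vectors ("token v1 v2 ... vd" per
# line) into a dict mapping each subword token to a fixed-length vector.
# "subword_vectors.txt" is a hypothetical file standing in for the output
# of GloVe training on the tokenized corpus.
def load_glove_vectors(path):
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

subword_vectors = load_glove_vectors("subword_vectors.txt")
```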

Next, we created word embeddings from the subword embeddings in three different ways: unweighted averaging; weighted averaging; and smoothed-inverse-frequency-based (SIF-based) weighting, which has previously been proposed for creating sentence embeddings from word embeddings.
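
The sketch below shows the three composition schemes under simple assumptions: subword vectors live in a dict, weighted averaging uses relative corpus frequencies as weights, and the SIF constant a = 0.001 follows the value commonly used for sentence embeddings. The exact weighting schemes in the paper may differ.

```python
import numpy as np

# Sketch of three ways to compose a word vector from its subword vectors.
# `subwords` lists the subword tokens of one word, `vectors` maps a subword
# to its embedding, and `freq` maps a subword to its relative corpus
# frequency. The frequency-based weights and the SIF constant a=1e-3 are
# assumptions; the paper's exact weighting schemes may differ.

def unweighted_average(subwords, vectors):
    return np.mean([vectors[s] for s in subwords], axis=0)

def weighted_average(subwords, vectors, freq):
    weights = np.array([freq[s] for s in subwords])
    vecs = np.stack([vectors[s] for s in subwords])
    return (weights[:, None] * vecs).sum(axis=0) / weights.sum()

def sif_weighted_average(subwords, vectors, freq, a=1e-3):
    weights = np.array([a / (a + freq[s]) for s in subwords])
    vecs = np.stack([vectors[s] for s in subwords])
    return (weights[:, None] * vecs).sum(axis=0) / weights.sum()
```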

We then measured the semantic similarity between two words as the cosine similarity between the corresponding word embeddings. Finally, we computed the correlation between the predicted similarity scores and similarity ratings provided by human annotators for the same word pairs. A high degree of correlation would indicate that the tokenization preserves words’ semantic information, which is desirable for any downstream NLP application that relies on the tokenization.
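
Below is a small, self-contained sketch of this evaluation step, with made-up vectors and human ratings standing in for the real embeddings and benchmark data; Spearman rank correlation is used here, though other correlation measures are also common for this kind of evaluation.

```python
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Made-up word vectors and human similarity ratings (illustration only).
word_vectors = {
    "kid":   np.array([0.9, 0.1, 0.0]),
    "child": np.array([0.8, 0.2, 0.1]),
    "toy":   np.array([0.1, 0.9, 0.2]),
}
pairs = [("kid", "child", 9.0), ("kid", "toy", 4.0), ("child", "toy", 4.5)]

predicted = [cosine(word_vectors[a], word_vectors[b]) for a, b, _ in pairs]
human = [rating for _, _, rating in pairs]

rho, _ = spearmanr(predicted, human)
print(f"Correlation with human ratings: {rho:.3f}")
```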

For LIT, we used two different approaches to tokenization. One is based on byte pair encoding (BPE), which was originally a data compression technique. BPE scours training texts for the most common symbol pair (in English, for instance, er is extremely common), which it represents using a single symbol. Then it repeats the process, continually adding new symbols that stand for longer and longer strings, up to some predefined limit.
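
The sketch below shows bare-bones BPE merge learning on a toy word list; real implementations (e.g., subword-nmt or SentencePiece) add refinements such as word-boundary markers and frequency thresholds.

```python
from collections import Counter

# Bare-bones BPE merge learning: repeatedly find the most frequent adjacent
# symbol pair in the corpus and merge it into a single new symbol, stopping
# after a predefined number of merges.
def learn_bpe(words, num_merges):
    corpus = Counter(tuple(w) for w in words)  # each word as a tuple of symbols
    merges = []
    for _ in range(num_merges):
        pair_counts = Counter()
        for symbols, freq in corpus.items():
            for a, b in zip(symbols, symbols[1:]):
                pair_counts[(a, b)] += freq
        if not pair_counts:
            break
        best = max(pair_counts, key=pair_counts.get)
        merges.append(best)
        merged = Counter()
        for symbols, freq in corpus.items():
            out, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    out.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            merged[tuple(out)] += freq
        corpus = merged
    return merges

print(learn_bpe(["lower", "lowest", "newer", "newest"], num_merges=5))
```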

The other approach is based on unigram language models (LMs). It starts with a large repertory of individual symbols and common substrings drawn from the corpus and then gradually prunes that set, discarding the substrings that contribute least to the corpus’s likelihood under the unigram model. The process ends when the number of remaining substrings falls to a predefined limit.
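
SentencePiece is one widely used toolkit that implements both unigram-LM and BPE tokenization (whether the paper used this particular toolkit is not stated here). Assuming a recent version of its Python package and a hypothetical plain-text corpus file, a unigram model capped at 20,000 subwords could be trained as in the sketch below.

```python
import sentencepiece as spm

# Train a unigram-LM subword model on a plain-text corpus ("corpus.txt" is a
# hypothetical path), capping the vocabulary at 20,000 subwords.
spm.SentencePieceTrainer.train(
    input="corpus.txt",
    model_prefix="unigram_lit",
    vocab_size=20000,
    model_type="unigram",  # use model_type="bpe" for the BPE variant
)

# Tokenize a sample string with the trained model.
sp = spm.SentencePieceProcessor(model_file="unigram_lit.model")
print(sp.encode("word processing", out_type=str))
```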

Variable vocabularies

We trained each of our three tokenization systems on different-sized subsets of the vocabularies for all eight languages. The LST models were trained on vocabularies ranging in size from 50,000 to 10 million words. The LM models were trained on vocabularies ranging from 20,000 to a million words.

Training BPE models is extremely time consuming, so the largest subsets we could use had 100,000 words. The smallest had 20,000.

In our experiments, we found that an LST tokenizer trained on a vocabulary of a million words or more generally offered the best performance. But there were three exceptions.

One was German, where the LM model based on a million-word vocabulary performed best. The other two were Farsi and Turkish, where, remarkably, the BPE models trained on 100,000 and 50,000 words, respectively, performed best. We suspect that this is because all three languages are highly “agglutinative”: that is, they can accommodate ad hoc or infrequent compounds that won’t show up in standard lexicons.

In general, however, at vocabularies of 100,000 words or fewer, both LIT models outperformed the LST model across the board. This suggests that for under-resourced languages or applications that rely on limited vocabularies, LIT may be an attractive alternative to LST.


