
Amazon releases data set of annotated conversations to aid development of socialbots


Today I am happy to announce the public release of the Topical Chat Dataset, a text-based collection of more than 235,000 utterances (over 4,700,000 words) that will help support high-quality, repeatable research in the field of dialogue systems.

The goal of Topical Chat is to enable innovative research in knowledge-grounded neural response-generation systems by tackling hard challenges that are not addressed by other publicly available datasets. Those challenges, which university teams have begun to confront in the Alexa Prize Socialbot Grand Challenge, include transitioning between topics in a natural manner, knowledge selection and enrichment, and integration of fact and opinion into dialogue.

Each conversation in the data set refers to a group of three related entities, and every turn of conversation is supported by an extract from a collection of unstructured or loosely structured text resources. To our knowledge, Topical Chat is the largest social-conversation and knowledge dataset available publicly to the research community.
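To make that structure concrete, here is a minimal sketch of how one conversation record might look when loaded in Python. The field names (agent, message, sentiment, knowledge_source, turn_rating) and the identifiers are illustrative assumptions about the released JSON, not a specification of it.

```python
import json

# Illustrative only: field names and values are assumptions about the
# released schema, not a specification of it.
sample_record = {
    "t_d004c097": {
        "article_url": "https://example.com/source-article",  # hypothetical
        "config": "A",  # which knowledge configuration the worker pair received
        "content": [
            {
                "agent": "agent_1",
                "message": "Did you know the first Star Wars film came out in 1977?",
                "sentiment": "Curious to dive deeper",
                "knowledge_source": ["FS1"],  # pointer into the reading set
                "turn_rating": "Good",        # partner's quality judgment
            },
        ],
    }
}

# Inspect the first turn of the first conversation.
conv_id, conv = next(iter(sample_record.items()))
first_turn = conv["content"][0]
print(conv_id, first_turn["agent"], "->", first_turn["message"])
```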

Both the conversations themselves and the annotations linking them to particular knowledge sources were provided by workers recruited through Mechanical Turk. The data set does not include any conversations between Alexa and Alexa customers.

To build the Topical Chat Dataset, workers recruited through Mechanical Turk engaged in instant-message conversations in which they substantiated their assertions with information extracted from a collection of unstructured or loosely structured text resources.

To build the data set, we first identified 300 named entities in eight different topic categories that came up frequently in conversations with Alexa Prize socialbots. Then we clustered the named entities into groups of three, based on their co-occurrence in information sources. One information source, for instance, mentioned three entities on our list — Star Wars, planet, and earth — so they became a cluster. For each entity in a cluster, we collected several additional sources of information, and we divided the information corresponding to each cluster between pairs of Mechanical Turk workers, or “Turkers”.
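The grouping step can be pictured roughly as follows. This is a hypothetical sketch of co-occurrence-based grouping, not the exact procedure behind the data set: it counts how often entity pairs appear in the same source and greedily forms triples whose members all co-occur.

```python
from collections import Counter
from itertools import combinations

# Hypothetical input: each source maps to the named entities it mentions.
source_entities = {
    "article_1": {"Star Wars", "planet", "earth"},
    "article_2": {"Star Wars", "planet"},
    "article_3": {"football", "Super Bowl", "quarterback"},
}

# Count how often every pair of entities appears in the same source.
pair_counts = Counter()
for entities in source_entities.values():
    for pair in combinations(sorted(entities), 2):
        pair_counts[pair] += 1

def greedy_triples(pair_counts):
    """Greedily form triples of mutually co-occurring entities
    (an illustrative simplification)."""
    used = set()
    triples = []
    entities = {e for pair in pair_counts for e in pair}
    for a, b, c in combinations(sorted(entities), 3):
        if {a, b, c} & used:
            continue
        if pair_counts[(a, b)] and pair_counts[(a, c)] and pair_counts[(b, c)]:
            triples.append((a, b, c))
            used |= {a, b, c}
    return triples

print(greedy_triples(pair_counts))
```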

Sometimes, Turkers would receive the same information. Sometimes one would receive only a subset of the information received by the other. And sometimes the information would be divided between the Turkers, so that each had knowledge that complemented the other’s.
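Those three sharing patterns can be sketched as follows, using a hypothetical fact list; the function names and split sizes are illustrative only.

```python
import random

# Hypothetical knowledge set for one entity cluster.
facts = [f"fact_{i}" for i in range(8)]

def identical(facts):
    # Both workers see the full knowledge set.
    return list(facts), list(facts)

def subset(facts, k=4):
    # Worker B sees only part of what worker A sees.
    return list(facts), random.sample(facts, k)

def complementary(facts):
    # The facts are split so each worker knows what the other does not.
    shuffled = random.sample(facts, len(facts))
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

for config in (identical, subset, complementary):
    a, b = config(facts)
    print(config.__name__, "-> A:", a, "| B:", b)
```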

The Turkers were then asked to carry on instant-messaging conversations about the knowledge sets they’d received. For each of their own messages, they were asked to document where they found the information they used and to gauge the message’s sentiment — happy, sad, curious, fearful, and so on. For each of their interlocutors’ messages, they were asked to assess its quality — whether it was conversationally appropriate. We then winnowed the conversations using a combination of manual and automatic review.
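One way to picture the automatic side of that winnowing is a simple filter over the per-turn quality ratings. The label set below is an assumption for illustration, not the actual review criteria.

```python
# Hypothetical winnowing filter: drop any conversation that contains a
# poorly rated turn. The rating labels are assumed for illustration.
ACCEPTABLE_RATINGS = {"Excellent", "Good", "Passable"}

def keep_conversation(turns):
    """Return True if every turn's partner-assigned rating is acceptable."""
    return all(t["turn_rating"] in ACCEPTABLE_RATINGS for t in turns)

conversation = [
    {"message": "Did you know Star Wars came out in 1977?", "turn_rating": "Good"},
    {"message": "asdfgh", "turn_rating": "Poor"},
]
print(keep_conversation(conversation))  # False: one turn was rated Poor
```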

Once we’d arrived at our final data set, we used it to train three different machine learning models to produce conversational responses to input utterances. In a paper about the data set that we’re presenting this week at Interspeech, we report automated and human evaluations of all three models’ performance, which we hope will serve as baselines against which other research groups can measure the success of their own socialbot systems.
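The paper details the metrics we used; as a generic illustration, a common automated metric for response generation is unigram F1 between a model’s response and the human reference. The function below is a self-contained example of that metric, not necessarily one reported in the paper.

```python
from collections import Counter

def unigram_f1(hypothesis: str, reference: str) -> float:
    """Harmonic mean of unigram precision and recall between two strings."""
    hyp = Counter(hypothesis.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((hyp & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(hyp.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(unigram_f1("the first film came out in 1977",
                 "the first star wars film was released in 1977"))  # 0.625
```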

Acknowledgments: This project came to be through the efforts and support of several people on the Alexa AI team. Thanks to Arindam Mandal, Raefer Gabriel, Mohammad Shami, Anu Venkatesh, Anjali Chadha, Anju Khatri, Anna Gottardi, Sanjeev Kwatra, Behnam Hedayatnia, Ben Murdoch, Karthik Gopalakrishnan, Mihail Eric, Seokhwan Kim, and Yang Liu for your work on the release.


