Talk: Fine-Tuned Sentence Transformer Model for Question Answering Task

I would like to introduce the structure and a practical application of sentence transformer models, which are used for sentence- and document-level NLP tasks such as information retrieval, text classification, and question answering.

Nowadays, various pre-trained language models based on representation learning are constantly being developed and, after fine-tuning, achieve excellent performance on a wide range of natural language processing (NLP) tasks. Sentence Transformer models make it possible to embed sentences and compare their semantic similarity by building Siamese and triplet networks on top of pre-trained Transformer models. The paper fine-tunes a Sentence Transformer model and applies it to the question answering (QA) task via answer selection. More specifically, we use the fine-tuned Sentence Transformer model to select the correct answers to a given question from a given pool of candidate answers. The experimental results show that fine-tuning improves accuracy significantly, from 0.2664 to 0.4867. Further research should fine-tune additional models and train on more in-domain data.
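
As a rough illustration of the answer-selection step described above, the sketch below embeds a question and a pool of candidate answers with the sentence-transformers library and picks the candidate with the highest cosine similarity. The model name and the toy data are placeholders, since the talk does not specify which base model or dataset was used.

```python
# Minimal sketch of answer selection with a sentence transformer.
# "all-MiniLM-L6-v2" and the example texts are illustrative placeholders,
# not the model or data from the talk.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

question = "What is the capital of France?"
candidates = [
    "Paris is the capital and most populous city of France.",
    "The Eiffel Tower was completed in 1889.",
    "Berlin is the capital of Germany.",
]

# Embed the question and the candidate pool into the same vector space.
q_emb = model.encode(question, convert_to_tensor=True)
c_embs = model.encode(candidates, convert_to_tensor=True)

# Rank candidates by cosine similarity and select the best-scoring answer.
scores = util.cos_sim(q_emb, c_embs)[0]
best = int(scores.argmax())
print(candidates[best], float(scores[best]))
```

In the fine-tuned setting the selection procedure stays the same; only the weights of the embedding model change, which is what drives the reported accuracy gain.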

Info

Date: 26.05.2022
Start time: 14:30
Duration: 00:30
Room: Living Lab (1.34)
Track: Computational Linguistics
Language: en
