Lecture: Fine-Tuned Sentence Transformer Model for Question Answering Task

In this lecture I introduce the structure and a practical application of Sentence Transformer models, which are used for sentence- and document-level NLP tasks such as information retrieval, text classification, and question answering.

Various pre-trained language models based on representation learning are constantly being developed and, after fine-tuning, achieve excellent performance on a wide range of natural language processing (NLP) tasks. Sentence Transformer models make it possible to embed sentences and compare their semantic similarity by building Siamese and triplet networks on top of pre-trained Transformer models. The paper fine-tunes a Sentence Transformer model and applies it to the question answering (QA) task via answer selection. More specifically, we use the fine-tuned Sentence Transformer model to select the correct answers to a given question from a given pool of candidate answers. The experimental results show that fine-tuning improves accuracy significantly, from 0.2664 to 0.4867. Future work includes fine-tuning additional models and training on more domain-specific data.
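
To make the pipeline concrete, here is a minimal sketch using the sentence-transformers library: fine-tune a pre-trained model on (question, correct answer) pairs, then select answers by embedding similarity. The base model name, the training pairs, the choice of MultipleNegativesRankingLoss, and all hyperparameters are illustrative assumptions; the abstract does not specify the talk's actual models, loss, or data.

```python
# A minimal sketch, assuming the `sentence-transformers` library. The model
# name, training pairs, loss, and hyperparameters are assumptions for
# illustration, not the talk's actual setup.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed base model

# Fine-tune on (question, correct answer) pairs; with this loss, the other
# answers in each batch act as negatives, mirroring a Siamese-style setup.
train_examples = [
    InputExample(texts=["What is the capital of France?",
                        "Paris is the capital and most populous city of France."]),
    # ... more (question, positive answer) pairs from the QA training data
]
loader = DataLoader(train_examples, shuffle=True, batch_size=16)
loss = losses.MultipleNegativesRankingLoss(model)
model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=10)

# Answer selection: embed the question and all candidate answers into the
# same vector space, then pick the candidate with the highest cosine
# similarity to the question.
question = "What is the capital of France?"
candidates = [  # hypothetical candidate answer pool
    "Paris is the capital and most populous city of France.",
    "The Eiffel Tower was completed in 1889.",
    "Berlin is the capital of Germany.",
]
q_emb = model.encode(question, convert_to_tensor=True)
c_emb = model.encode(candidates, convert_to_tensor=True)
scores = util.cos_sim(q_emb, c_emb)[0]
print(candidates[int(scores.argmax())])
```

The reported accuracy gain (0.2664 to 0.4867) corresponds to the step from running answer selection with the pre-trained model as-is to running it with the fine-tuned model; the selection code itself is identical in both cases.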

Info

Day: 2022-05-26
Start time: 14:30
Duration: 00:30
Room: Living Lab (1.34)
Track: Computational Linguistics
Language: en
