How to Train good Word Embeddings for Biomedical NLP

Billy CHIU, Gamal CRICHTON, Anna KORHONEN, Sampo PYYSALO

Research output: Book Chapters | Papers in Conference Proceedings › Conference paper (refereed) › Research › peer-review

Abstract

The quality of word embeddings depends on the input corpora, model architectures, and hyper-parameter settings. Using the state-of-the-art neural embedding tool word2vec and both intrinsic and extrinsic evaluations, we present a comprehensive study of how the quality of embeddings changes according to these features. Apart from identifying the most influential hyper-parameters, we also observe one that creates contradictory results between intrinsic and extrinsic evaluations. Furthermore, we find that bigger corpora do not necessarily produce better biomedical domain word embeddings. We make our evaluation tools and resources as well as the created state-of-the-art word embeddings available under open licenses from https://github.com/cambridgeltl/BioNLP-2016.
Original language: English
Title of host publication: Proceedings of the 15th Workshop on Biomedical Natural Language Processing
Editors: Kevin Bretonnel COHEN, Dina DEMNER-FUSHMAN, Sophia ANANIADOU, Jun-ichi TSUJII
Publisher: Association for Computational Linguistics (ACL)
Pages: 166–174
Number of pages: 9
ISBN (Electronic): 9781945626128
DOIs
Publication status: Published - Aug 2016
Externally published: Yes
Event: The 15th Workshop on Biomedical Natural Language Processing - Berlin, Germany
Duration: 12 Aug 2016 – 12 Aug 2016
https://aclanthology.org/volumes/W16-29/

Conference

Conference: The 15th Workshop on Biomedical Natural Language Processing
Country/Territory: Germany
City: Berlin
Period: 12/08/16 – 12/08/16
Internet address

Bibliographical note

This work has been supported by Medical Research Council grant MR/M013049/1.
