A Study of Pre-trained Language Models in Natural Language Processing

  • Jiajia Duan
  • Hui Zhao
  • Qian Zhou
  • Meikang Qiu
  • Meiqin Liu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

28 Scopus citations

Abstract

Pre-trained Language Models (PLMs) are a very popular topic in natural language processing (NLP), and their rapid development has driven much of the recent progress in the field. In this article, we review the important PLMs. First, we introduce the development history and achievements of PLMs. Second, we present several notable PLMs, including BERT and its variants, multimodal PLMs, PLMs combined with knowledge graphs, and PLMs applied to natural language generation. Finally, we summarize the field and look into the future of PLMs. We expect this article to serve as a practical guide for learners to understand, use, and develop PLMs, drawing on the abundant literature available for various NLP tasks.
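The survey itself contains no code, but as a concrete illustration of "using" a PLM such as BERT, the minimal sketch below loads a pre-trained encoder and extracts contextual embeddings. It assumes the Hugging Face transformers library and the bert-base-uncased checkpoint, neither of which is prescribed by the paper.

```python
# Minimal sketch: extracting contextual embeddings from a pre-trained BERT.
# Assumes the Hugging Face `transformers` library and the `bert-base-uncased`
# checkpoint; the surveyed paper does not prescribe a specific toolkit.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

# Tokenize a sentence and run it through the pre-trained encoder.
inputs = tokenizer("Pre-trained language models power modern NLP.",
                   return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.last_hidden_state has shape (batch_size, sequence_length, 768):
# one contextual embedding per input token.
print(outputs.last_hidden_state.shape)
```

In practice, such frozen embeddings can feed a downstream classifier, or the whole encoder can be fine-tuned on a task-specific dataset.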

Original language: English
Title of host publication: Proceedings - 2020 IEEE International Conference on Smart Cloud, SmartCloud 2020
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 116-121
Number of pages: 6
ISBN (Electronic): 9781728165479
DOIs
State: Published - Nov 2020
Externally published: Yes
Event: 5th IEEE International Conference on Smart Cloud, SmartCloud 2020 - Washington, United States
Duration: 6 Nov 2020 - 8 Nov 2020

Publication series

Name: Proceedings - 2020 IEEE International Conference on Smart Cloud, SmartCloud 2020

Conference

Conference: 5th IEEE International Conference on Smart Cloud, SmartCloud 2020
Country/Territory: United States
City: Washington
Period: 6/11/20 - 8/11/20

Keywords

  • BERT
  • Cross-modal
  • Embedding
  • KG
  • Natural Language Generation
  • Pre-trained
