A Robustly Optimized BERT Pretraining Approach (RoBERTa) & A Lite BERT for Self-supervised Learning of Language Representations (ALBERT)
Tags: Transformer, BERT, RoBERTa, ALBERT, PPT
Attachment: RoBERTa-A Robustly Optimized BERT Pretraining Approach&ALBERT-A Lite BERT for Self-supervised Learning of Language Representations.pdf