
RoBERTa: A Robustly Optimized BERT Pretraining Approach

  • 1.  RoBERTa: A Robustly Optimized BERT Pretraining Approach

    Posted Wed September 04, 2019 12:55 AM
    Edited by System Fri January 20, 2023 04:09 PM
    How do you optimize language-model pre-training when training is computationally expensive and runs on differing datasets? Maybe RoBERTa has the answers. Facebook's pre-training recipe appears to have greatly improved on BERT's benchmark performance. What do you think is in store for RoBERTa?
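
    For anyone who wants to experiment with the released model rather than re-run the expensive pre-training, here is a minimal sketch of loading RoBERTa for fine-tuning. It assumes the Hugging Face transformers library and the publicly released roberta-base checkpoint; it is illustrative only, not Facebook's original fairseq training code.

    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    # Load the released roberta-base checkpoint and its tokenizer.
    tokenizer = AutoTokenizer.from_pretrained("roberta-base")
    model = AutoModelForSequenceClassification.from_pretrained(
        "roberta-base",
        num_labels=2,  # 2-class head, randomly initialized for fine-tuning
    )

    # Encode a sentence and run a forward pass. The classification head is
    # untrained here, so its outputs are essentially random until fine-tuned.
    inputs = tokenizer("RoBERTa builds on BERT's pretraining recipe.", return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    print(logits.shape)  # torch.Size([1, 2])

    The pre-trained encoder carries the benefit of RoBERTa's recipe (longer training on more data, dynamic masking, no next-sentence-prediction objective); only the small task head needs to be trained on your own labeled data.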



    ------------------------------
    William Roberts
    ------------------------------
    #GlobalAIandDataScience
    #GlobalDataScience


  • 2.  RE: RoBERTa: A Robustly Optimized BERT Pretraining Approach

    Posted Mon September 23, 2019 06:31 PM
    RoBERTa is surely going to drop out of SOTA soon!

    ------------------------------
    William Roberts
    ------------------------------