Create dataset unsupervised_cross_lingual_representation_learning_at_scale #253

Open
@albertvillanova

Description

  • uid: unsupervised_cross_lingual_representation_learning_at_scale
  • type: processed
  • description:
    • name: Unsupervised Cross-lingual Representation Learning at Scale
    • description: This paper shows that pretraining multilingual language models at scale leads to significant performance gains for a wide range of cross-lingual transfer tasks. We train a Transformer-based masked language model on one hundred languages, using more than two terabytes of filtered CommonCrawl data. Our model, dubbed XLM-R, significantly outperforms multilingual BERT (mBERT) on a variety of cross-lingual benchmarks, including +14.6% average accuracy on XNLI, +13% average F1 score on MLQA, and +2.4% F1 score on NER. XLM-R performs particularly well on low-resource languages, improving 15.7% in XNLI accuracy for Swahili and 11.4% for Urdu over previous XLM models. We also present a detailed empirical analysis of the key factors that are required to achieve these gains, including the trade-offs between (1) positive transfer and capacity dilution and (2) the performance of high and low resource languages at scale. Finally, we show, for the first time, the possibility of multilingual modeling without sacrificing per-language performance; XLM-R is very competitive with strong monolingual models on the GLUE and XNLI benchmarks. We will make our code, data and models publicly available.
    • homepage: https://metatext.io/datasets/cc100-nepali
    • validated: True
  • languages:
    • language_names:
      • Indic
      • Nepali (individual language)
      • Nepali (macrolanguage)
    • language_comments: Devanagari
    • language_locations:
      • Southern Asia
      • Nepal
    • validated: False
  • custodian:
    • name: Common Crawl
    • in_catalogue:
    • type: A library, museum, or archival institute
    • location: United States of America
    • contact_name: Alexis Conneau and Guillaume Wenzek
    • contact_email:
    • contact_submitter: False
    • additional: [email protected]
    • validated: False
  • availability:
    • procurement:
    • licensing:
    • pii:
      • has_pii: Yes
      • generic_pii_likely: very likely
      • generic_pii_list:
      • numeric_pii_likely: very likely
      • numeric_pii_list:
      • sensitive_pii_likely: very likely
      • sensitive_pii_list:
      • no_pii_justification_class:
      • no_pii_justification_text:
    • validated: False
  • processed_from_primary:
    • from_primary: Taken from primary source
    • primary_availability: Yes - they are fully available
    • primary_license: Yes - the dataset curators have obtained consent from the source material owners
    • primary_types:
      • web | other
      • web | forum
      • web | content repository, archive, or collection
    • validated: False
    • from_primary_entries:
  • media:
    • category:
      • text
    • text_format:
      • .TXT
    • audiovisual_format:
    • image_format:
    • database_format:
      • .7Z
    • text_is_transcribed: No
    • instance_type: webpage
    • instance_count: 1M<n<1B
    • instance_size: 100<n<10,000
    • validated: False
  • fname: unsupervised_cross_lingual_representation_learning_at_scale.json
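
The `homepage` field above points at the CC-100 Nepali slice of the filtered CommonCrawl corpus this entry describes. As a minimal sketch of how one might inspect that data, assuming the `cc100` loader on the Hugging Face Hub exposes a Nepali (`ne`) configuration in your version of `datasets` (the language code and loader availability should be checked against the dataset card):

```python
# Minimal sketch: stream a few records of the CC-100 Nepali corpus.
# Assumes the Hub's "cc100" loader and its "ne" language config are
# available; older `datasets` releases ran this as a dataset script,
# newer ones may require trust_remote_code or a converted Hub dataset.
from datasets import load_dataset

nepali = load_dataset("cc100", lang="ne", split="train", streaming=True)

# Each record is one line of text from the filtered CommonCrawl dump.
for i, record in enumerate(nepali):
    print(record["text"])
    if i >= 4:  # peek at the first five lines only
        break
```

Streaming here is a deliberate choice: it lets a reviewer of this catalogue entry sample the corpus (instance_count is in the 1M–1B range) without downloading the full archive first.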
