# AlephBERT

## Overview

A large pre-trained language model for Modern Hebrew.

Based on the BERT-base architecture: 12 hidden layers, with a 52K-token vocabulary.

Trained for 10 epochs on 95M sentences from OSCAR, Wikipedia, and Twitter data.
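As a quick start, here is a minimal sketch of loading AlephBERT for masked-token prediction with the Hugging Face `transformers` library. The hub ID `onlplab/alephbert-base` is assumed here; check the model card for the exact identifier.

```python
# Minimal sketch: load AlephBERT and run fill-mask prediction.
# The model ID "onlplab/alephbert-base" is assumed; verify against the hub.
from transformers import AutoModelForMaskedLM, AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained("onlplab/alephbert-base")
model = AutoModelForMaskedLM.from_pretrained("onlplab/alephbert-base")

# Fill-mask example with a Hebrew sentence ("the capital city of Israel is [MASK]").
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
for prediction in fill_mask("עיר הבירה של ישראל היא [MASK]"):
    print(prediction["token_str"], round(prediction["score"], 3))
```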

## Evaluation

We evaluated AlephBERT for the following prediction tasks:

- Morphological Segmentation
- Part-of-Speech Tagging
- Morphological Features
- Named Entity Recognition
- Sentiment Analysis

On three different benchmarks:

- The SPMRL Treebank (Segmentation, POS, Features, NER)
- The Universal Dependencies Treebanks (Segmentation, POS, Features, NER)
- The Hebrew Facebook Corpus (Sentiment Analysis)
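Tasks like these are typically approached by fine-tuning the pre-trained model with a task head. As an illustration only, here is a hedged sketch of attaching a generic `transformers` token-classification head (e.g., for NER); the label set below is a placeholder, not necessarily the tag inventory used in the evaluation above.

```python
# A hedged sketch, not the exact evaluation setup: attach a generic
# token-classification head to AlephBERT for a task such as NER.
# The label set is illustrative; the hub ID is assumed.
from transformers import AutoModelForTokenClassification, AutoTokenizer

labels = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]

tokenizer = AutoTokenizer.from_pretrained("onlplab/alephbert-base")
model = AutoModelForTokenClassification.from_pretrained(
    "onlplab/alephbert-base",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)},
)
# From here, fine-tune with the standard transformers Trainer on a
# token-aligned Hebrew corpus for the task at hand.
```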

## Citation

```bibtex
@misc{alephBert2021,
  title={AlephBERT: a Pre-trained Language Model to Start Off your Hebrew NLP Application},
  author={Amit Seker and Elron Bandel and Dan Bareket and Idan Brusilovsky and Shaked Refael Greenfeld and Reut Tsarfaty},
  year={2021}
}
```

## Contributors

The ONLP Lab at Bar-Ilan University

PI: Prof. Reut Tsarfaty

Contributors: Amit Seker, Elron Bandel, Dan Bareket, Idan Brusilovsky, Shaked Refael Greenfeld

Advisors: Dr. Roee Aharoni, Prof. Yoav Goldberg

## Credits