---
license: cc-by-4.0
language:
- he
---
# DictaBERT-Large: A State-of-the-Art BERT-Large Suite for Modern Hebrew
State-of-the-art language model for Hebrew, released here.
This is the BERT-large model fine-tuned for the named-entity recognition (NER) task.
For the bert-base models for other tasks, see here.
For the bert-large models for other tasks, see [to-be-added].
Sample usage:

```python
from transformers import pipeline

oracle = pipeline('ner', model='dicta-il/dictabert-large-ner', aggregation_strategy='simple')

# When aggregation_strategy is set to 'simple', we need to define a decoder for the
# tokenizer. Note that the last wordpiece of a group will still be emitted.
from tokenizers.decoders import WordPiece
oracle.tokenizer.backend_tokenizer.decoder = WordPiece()

sentence = 'ืืื ืืจืืื ืฉืืฉ: ืฉืขืจ ืฉื ืกืืจืืง ืืืืืืฃ ืืขื ืืง ืืืื ืืจืื ื ืืฆืืื ืฉื ื ืืฉืืืฉื ืืฉืืงืื ืืขืืืื ืืขื ืืงื ืืืืื.'

oracle(sentence)
```
Output:

```json
[
  {
    "entity_group": "PER",
    "score": 0.9998621,
    "word": "ืกืืจืืง",
    "start": 22,
    "end": 27
  },
  {
    "entity_group": "PER",
    "score": 0.9999503,
    "word": "ืืื",
    "start": 41,
    "end": 44
  },
  {
    "entity_group": "PER",
    "score": 0.9998287,
    "word": "ืืจืื",
    "start": 46,
    "end": 50
  }
]
```
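The `start` and `end` fields in each entry are character offsets into the input string, so the entity surface forms can be recovered by slicing the original sentence. A minimal sketch of grouping the pipeline output by entity label — the English sentence, labels, and offsets below are illustrative stand-ins, not actual model output:

```python
def extract_entities(text, results):
    """Group pipeline output into {label: [surface strings]} using the
    character offsets (start/end) that each entry carries."""
    grouped = {}
    for ent in results:
        grouped.setdefault(ent["entity_group"], []).append(text[ent["start"]:ent["end"]])
    return grouped

# Hypothetical results shaped like the sample output above.
text = "Maccabi beat Hapoel in Tel Aviv."
results = [
    {"entity_group": "ORG", "score": 0.99, "word": "Maccabi", "start": 0, "end": 7},
    {"entity_group": "ORG", "score": 0.99, "word": "Hapoel", "start": 13, "end": 19},
    {"entity_group": "LOC", "score": 0.99, "word": "Tel Aviv", "start": 23, "end": 31},
]
print(extract_entities(text, results))
# → {'ORG': ['Maccabi', 'Hapoel'], 'LOC': ['Tel Aviv']}
```

Slicing by offsets avoids relying on the `word` field, whose whitespace can differ from the original text after wordpiece decoding.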
## Citation

If you use DictaBERT-Large in your research, please cite DictaBERT: A State-of-the-Art BERT Suite for Modern Hebrew.

BibTeX:

```bibtex
@misc{shmidman2023dictabert,
    title={DictaBERT: A State-of-the-Art BERT Suite for Modern Hebrew},
    author={Shaltiel Shmidman and Avi Shmidman and Moshe Koppel},
    year={2023},
    eprint={2308.16687},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```
## License
This work is licensed under a Creative Commons Attribution 4.0 International License.