---
license: cc-by-4.0
language:
  - he
---

# DictaBERT-Large: A State-of-the-Art BERT-Large Suite for Modern Hebrew

State-of-the-art language model for Hebrew, released here.

This is the fine-tuned BERT-large model for the named-entity-recognition task.

For the bert-base models for other tasks, see here.

For the bert-large models for other tasks, see [to-be-added].

## Sample usage

```python
from transformers import pipeline

oracle = pipeline('ner', model='dicta-il/dictabert-large-ner', aggregation_strategy='simple')

# if we set aggregation_strategy to simple, we need to define a decoder for the tokenizer.
# Note that the last wordpiece of a group will still be emitted
from tokenizers.decoders import WordPiece
oracle.tokenizer.backend_tokenizer.decoder = WordPiece()

sentence = 'ื”ื›ื™ ื“ืจืžื˜ื™ ืฉื™ืฉ: ืฉืขืจ ืฉืœ ืกื“ืจื™ืง ื”ืžื—ืœื™ืฃ ื”ืขื ื™ืง ืœื–ื™ื• ืืจื™ื” ื ื™ืฆื—ื•ืŸ ืฉื ื™ ื‘ืฉืœื•ืฉื” ืžืฉื—ืงื™ื ื•ืขืœื™ื™ื” ืžืขืœ ื”ืงื• ื”ืื“ื•ื.'
oracle(sentence)
```

Output:

```json
[
  {
    "entity_group": "PER",
    "score": 0.9998621,
    "word": "ืกื“ืจื™ืง",
    "start": 22,
    "end": 27
  },
  {
    "entity_group": "PER",
    "score": 0.9999503,
    "word": "ืœื–ื™",
    "start": 41,
    "end": 44
  },
  {
    "entity_group": "PER",
    "score": 0.9998287,
    "word": "ืืจื™ื”",
    "start": 46,
    "end": 50
  }
]
```

## Citation

If you use DictaBERT in your research, please cite [DictaBERT: A State-of-the-Art BERT Suite for Modern Hebrew](https://arxiv.org/abs/2308.16687)

BibTeX:

```bibtex
@misc{shmidman2023dictabert,
      title={DictaBERT: A State-of-the-Art BERT Suite for Modern Hebrew}, 
      author={Shaltiel Shmidman and Avi Shmidman and Moshe Koppel},
      year={2023},
      eprint={2308.16687},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

## License

Shield: CC BY 4.0

This work is licensed under a [Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/).