runtime error

Exit code: 1. Reason:

Downloading shards: 100%|██████████| 6/6 [00:27<00:00, 4.63s/it]
Loading checkpoint shards: 100%|██████████| 6/6 [00:01<00:00, 5.42it/s]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 9, in <module>
    tokenizer = AutoTokenizer.from_pretrained("instruction-pretrain/instruction-synthesizer")
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 897, in from_pretrained
    return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2271, in from_pretrained
    return cls._from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2505, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/models/llama/tokenization_llama_fast.py", line 157, in __init__
    super().__init__(
  File "/usr/local/lib/python3.10/site-packages/transformers/tokenization_utils_fast.py", line 134, in __init__
    raise ValueError(
ValueError: Couldn't instantiate the backend tokenizer from one of: (1) a `tokenizers` library serialization file, (2) a slow tokenizer instance to convert or (3) an equivalent slow tokenizer class to instantiate and convert. You need to have sentencepiece installed to convert a slow tokenizer to a fast one.
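The actionable part is the final ValueError: the fast Llama tokenizer has to be converted from a slow SentencePiece-based one, and that conversion requires the `sentencepiece` package, which is not installed in the container. On a Hugging Face Space the usual fix is to add a `sentencepiece` line to the Space's requirements.txt and rebuild. A minimal pre-flight check (the `has_sentencepiece` helper below is illustrative, not part of the original app.py) might look like:

```python
import importlib.util

def has_sentencepiece() -> bool:
    """Return True if the sentencepiece package is importable in this environment."""
    return importlib.util.find_spec("sentencepiece") is not None

# Report the dependency status before attempting to load the tokenizer,
# so the failure mode is obvious in the container logs.
if has_sentencepiece():
    print("sentencepiece is available; fast-tokenizer conversion can proceed.")
else:
    print("sentencepiece is missing; add a 'sentencepiece' line to requirements.txt and rebuild.")
```

With the dependency present, the `AutoTokenizer.from_pretrained("instruction-pretrain/instruction-synthesizer")` call from the traceback should be able to convert the slow tokenizer to a fast one instead of raising.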
