Runtime error
Downloading (…)lve/main/config.json: 100%|██████████| 29.0/29.0 [00:00<00:00, 92.7kB/s]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 8, in <module>
    tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
  File "/home/user/.local/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 754, in from_pretrained
    return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "/home/user/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1838, in from_pretrained
    raise EnvironmentError(
OSError: Can't load tokenizer for 'TheBloke/Llama-2-7B-Chat-GGML'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'TheBloke/Llama-2-7B-Chat-GGML' is the correct path to a directory containing all relevant files for a LlamaTokenizerFast tokenizer.
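This traceback is typical of pointing `transformers` at a GGML quantization repo: such repos ship only GGML weight files for llama.cpp-style runtimes, not the `tokenizer.json`/`tokenizer.model` files that `AutoTokenizer` needs. A minimal sketch of two common workarounds follows; the base-model repo name and the `model_file` value are illustrative assumptions (check the GGML repo's model card and file list for the real ones):

```python
# Sketch of two workarounds; repo and file names below are assumptions,
# taken as illustrations rather than verified values.

def load_tokenizer_from_base_repo():
    """Workaround 1: fetch the tokenizer from the original (non-quantized)
    model repo, since the GGML repo carries no tokenizer files."""
    from transformers import AutoTokenizer
    # Assumption: the fp16 source repo named in the GGML model card.
    return AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf",
                                         use_fast=True)

def load_model_with_ctransformers():
    """Workaround 2: load the GGML weights with a llama.cpp-style runtime
    (ctransformers) instead of transformers, which cannot read GGML files."""
    from ctransformers import AutoModelForCausalLM
    # model_file is an assumption: pick one quantization from the repo's files.
    return AutoModelForCausalLM.from_pretrained(
        "TheBloke/Llama-2-7B-Chat-GGML",
        model_file="llama-2-7b-chat.ggmlv3.q4_0.bin",
        model_type="llama",
    )
```

Either path avoids the `OSError` above, because nothing tries to read tokenizer files out of the GGML repo itself.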