runtime error

Exit code: 1. Reason:

model-00002-of-00002.safetensors: 100%|██████████| 3.96G/3.96G [00:09<00:00, 426MB/s]

Traceback (most recent call last):
  File "/app/demo/gradio_demo2.py", line 356, in <module>
    tokenizer, model, image_processors = load_pretrained_model(
  File "/app/vlm_fo1/model/builder.py", line 40, in load_pretrained_model
    model, loading_info = OmChatQwen25VLForCausalLM.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 272, in _wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4395, in from_pretrained
    config = cls._autoset_attn_implementation(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2112, in _autoset_attn_implementation
    cls._check_and_enable_flash_attn_2(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2262, in _check_and_enable_flash_attn_2
    raise ValueError(
ValueError: FlashAttention2 has been toggled on, but it cannot be used due to the following error: Flash Attention 2 is not available on CPU. Please make sure torch can access a CUDA device.
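The checkpoint download completes, so the failure is in model loading: the code requests FlashAttention 2, which only runs on CUDA devices, while this container has no GPU. A minimal sketch of a fix, assuming the `attn_implementation` argument is being set in `load_pretrained_model` (the `"sdpa"` fallback and helper name below are illustrative, not verified against the repository):

```python
# Sketch: choose an attention implementation that matches the available
# hardware instead of hard-coding "flash_attention_2".
def pick_attn_implementation(cuda_available: bool) -> str:
    # FlashAttention 2 requires a CUDA device; PyTorch's built-in
    # scaled-dot-product attention ("sdpa") works on CPU as well.
    return "flash_attention_2" if cuda_available else "sdpa"

# In vlm_fo1/model/builder.py this would look roughly like:
#
#   import torch
#   model, loading_info = OmChatQwen25VLForCausalLM.from_pretrained(
#       model_path,
#       attn_implementation=pick_attn_implementation(torch.cuda.is_available()),
#       ...
#   )
```

Alternatively, upgrading the Space to GPU hardware would satisfy the existing `flash_attention_2` setting without a code change.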
