Qwen3-VL-8B-Instruct-abliterated-v2-GGUF

Qwen3-VL-8B-Instruct-abliterated-v2 from prithivMLmods is the second iteration (v2) of the abliterated variant of Alibaba's Qwen3-VL-8B-Instruct, an 8B-parameter vision-language model. Abliteration removes the base model's safety refusals and content filters, yielding uncensored, highly detailed captioning, instruction following, and multimodal reasoning across complex, sensitive, artistic, technical, abstract, or explicit visual content. The model retains the base architecture's Interleaved-MRoPE fusion, 32-language OCR, 262K context length, and robust handling of diverse resolutions, aspect ratios, videos, and layouts. Building on v1 with refined uncensoring for greater output fidelity and fewer artifacts, it supports variable detail control, from concise summaries to exhaustive multi-granularity analyses, primarily in English with prompt-engineered multilingual adaptability. Typical uses include red-teaming, generative-safety research, creative visual storytelling, and unrestricted agentic applications on high-end GPUs (16-24 GB VRAM in BF16/FP8) via Transformers or vLLM. This version preserves the base model's state-of-the-art multimodal perception while removing guardrails, producing factual, descriptive responses in scenarios where conventional models would refuse.
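For instruction-following with images, frontends built on Transformers or vLLM generally accept the Qwen-style multimodal chat format. The sketch below only constructs the message payload; the image path and prompt are placeholder assumptions, not values from this card:

```python
# Hypothetical sketch: building a multimodal chat payload in the Qwen-style
# message format commonly accepted by Transformers/vLLM chat frontends.
# The image path and prompt text are placeholders, not from the model card.

def build_messages(image_ref: str, prompt: str) -> list[dict]:
    """Return a chat message list pairing one image with a text instruction."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": image_ref},
                {"type": "text", "text": prompt},
            ],
        }
    ]

messages = build_messages("file:///tmp/example.jpg", "Describe this image in detail.")
```

The resulting `messages` list is what a chat template or an OpenAI-compatible endpoint would consume alongside the model.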

Qwen3-VL-8B-Instruct-abliterated-v2 [GGUF]

| File Name | Quant Type | File Size |
|---|---|---|
| Qwen3-VL-8B-Instruct-abliterated-v2.IQ4_XS.gguf | IQ4_XS | 4.59 GB |
| Qwen3-VL-8B-Instruct-abliterated-v2.Q2_K.gguf | Q2_K | 3.28 GB |
| Qwen3-VL-8B-Instruct-abliterated-v2.Q3_K_L.gguf | Q3_K_L | 4.43 GB |
| Qwen3-VL-8B-Instruct-abliterated-v2.Q3_K_M.gguf | Q3_K_M | 4.12 GB |
| Qwen3-VL-8B-Instruct-abliterated-v2.Q3_K_S.gguf | Q3_K_S | 3.77 GB |
| Qwen3-VL-8B-Instruct-abliterated-v2.Q4_K_M.gguf | Q4_K_M | 5.03 GB |
| Qwen3-VL-8B-Instruct-abliterated-v2.Q4_K_S.gguf | Q4_K_S | 4.8 GB |
| Qwen3-VL-8B-Instruct-abliterated-v2.Q5_K_M.gguf | Q5_K_M | 5.85 GB |
| Qwen3-VL-8B-Instruct-abliterated-v2.Q5_K_S.gguf | Q5_K_S | 5.72 GB |
| Qwen3-VL-8B-Instruct-abliterated-v2.Q6_K.gguf | Q6_K | 6.73 GB |
| Qwen3-VL-8B-Instruct-abliterated-v2.Q8_0.gguf | Q8_0 | 8.71 GB |
| Qwen3-VL-8B-Instruct-abliterated-v2.f16.gguf | F16 | 16.4 GB |
| Qwen3-VL-8B-Instruct-abliterated-v2.mmproj-Q8_0.gguf | mmproj-Q8_0 | 752 MB |
| Qwen3-VL-8B-Instruct-abliterated-v2.mmproj-f16.gguf | mmproj-f16 | 1.16 GB |
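As a rough sanity check on the table above, the effective bits per weight of a quant can be estimated from its file size and the 8B parameter count. This is only a sketch: GGUF files also carry metadata, and some tensors are kept at higher precision, so the figure is approximate.

```python
def bits_per_weight(file_size_gb: float, n_params: float = 8e9) -> float:
    """Approximate effective bits per weight for a GGUF file.

    Assumes decimal gigabytes (1 GB = 1e9 bytes), as displayed on the Hub,
    and ignores metadata overhead, so this is a rough estimate.
    """
    return file_size_gb * 1e9 * 8 / n_params

# Q4_K_M from the table: 5.03 GB over ~8B params -> about 5 bits/weight
print(round(bits_per_weight(5.03), 2))
```

The F16 file (16.4 GB) works out to roughly 16 bits per weight by the same formula, which confirms the parameter count and the size column are consistent.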

Quants Usage

(Sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants.)

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):


Downloads last month: 1,236
Format: GGUF · Model size: 8B params · Architecture: qwen3vl

