```python

from mlx_vlm import load, generate

model, processor = load("krzonkalla/test-quant-mlx-rio-mini")

# Text only
output = generate(
    model,
    processor,
    prompt="Me explique brevemente a Teoria da Relatividade Geral.",
    max_tokens=64000,
)
print(output)

# With images
output = generate(
    model,
    processor,
    prompt="Me diga o que há nessa imagem.",
    image="path/to/image.png",
    max_tokens=64000,
)
print(output)
```
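
Depending on which mlx-vlm release is installed, the raw prompt may need to be wrapped in the model's chat template before calling `generate`. A minimal sketch, assuming the `apply_chat_template` and `load_config` helpers shipped with recent mlx-vlm versions:

```python
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

model_path = "krzonkalla/test-quant-mlx-rio-mini"
model, processor = load(model_path)
config = load_config(model_path)

images = ["path/to/image.png"]
prompt = "Tell me what is in this image."

# Wrap the raw prompt in the model's chat template, declaring how many images follow.
formatted_prompt = apply_chat_template(
    processor, config, prompt, num_images=len(images)
)

output = generate(model, processor, formatted_prompt, images, max_tokens=64000)
print(output)
```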

Note

The Qwen3.5 video processor depends on torchvision. For image-only use, mlx-vlm works without PyTorch. To enable video support, install torch and torchvision.
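
If you want to check up front whether the optional video dependencies are present, here is a minimal sketch (the `video_support_available` helper is hypothetical, not part of mlx-vlm):

```python
import importlib.util

def video_support_available() -> bool:
    """Return True if both torch and torchvision can be imported."""
    return all(
        importlib.util.find_spec(pkg) is not None
        for pkg in ("torch", "torchvision")
    )

if not video_support_available():
    print("Video support disabled: install torch and torchvision to enable it.")
```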

Model size: 5B params
Tensor types: BF16 · U32 · F32
Format: Safetensors (MLX), 3-bit quantization