The Qwen3.5 Multimodal Understanding Demo, powered by Qwen3.5-2B, is now available on HF Spaces! It is a lightweight model designed for fast image and video reasoning. Built with Gradio, the demo showcases Image QA, Video QA, object detection, and 2D point tracking, along with real-time token streaming.
Cpp-Code-Large is a large-scale corpus of C++ source code comprising more than 5 million lines of C++ code. The dataset is designed to support research in large language model (LLM) pretraining, code intelligence, software engineering automation, and static program analysis for the C++ ecosystem.
By providing a high-volume, language-specific corpus, Cpp-Code-Large enables systematic experimentation in C++-focused model training, domain adaptation, and downstream code understanding tasks.
Cpp-Code-Large addresses the need for a dedicated C++-only dataset at substantial scale, enabling focused research across systems programming, performance-critical applications, embedded systems, game engines, and large-scale native software projects.
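As an illustration of the kind of pretraining-corpus work such a dataset supports, here is a minimal sketch of exact deduplication over raw code samples, a standard first preprocessing pass. The `dedupe_exact` helper and the toy C++ snippets are hypothetical, not part of the dataset's actual pipeline.

```python
import hashlib

def dedupe_exact(samples):
    """Drop byte-identical duplicates by hashing each code sample,
    a typical first pass when preparing a code corpus for LLM pretraining."""
    seen, unique = set(), []
    for code in samples:
        digest = hashlib.sha256(code.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(code)
    return unique

# Toy corpus with one exact duplicate
corpus = [
    "int main() { return 0; }",
    "int main() { return 0; }",   # exact duplicate, dropped
    "#include <vector>\nint sum();",
]
print(len(dedupe_exact(corpus)))  # -> 2
```

Real pipelines typically layer near-duplicate detection (e.g. MinHash) on top of this exact pass, but the hashing step above is the cheap filter that runs first.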
QIE-Object-Remover-Bbox Demo removes objects and artifacts from selected regions using bounding box grounding. Built on Qwen-Image-Edit-2509 with Rapid Diffusers acceleration, it delivers fast 4-step inference via the QIE-2509 adapter. 🤗🔥
Python-Code-Large is a large-scale corpus of Python source code comprising more than 2 million rows of Python code. The dataset is designed to support research in large language model (LLM) pretraining, code intelligence, software engineering automation, and program analysis for the Python ecosystem.
By providing a high-volume, language-specific corpus, Python-Code-Large enables systematic experimentation in Python-focused model training, domain adaptation, and downstream code understanding tasks.
Python-Code-Large addresses the need for a dedicated Python-only dataset at substantial scale, enabling focused research across data science, backend systems, automation, scientific computing, and AI-driven Python environments.
-> Paired the EditPlusPipeline with the Diffusers-compatible transformer weights of Rapid AIO from Qwen-Image-Edit. (experimental)
-> This fusion delivers more accurate instruction following, higher image quality, and consistent visual coherence at 4-step fast inference.
-> Better maintains text styles with high fidelity, along with high-quality old photo restoration, enhancement, and best-in-class virtual try-on.
PHP-Code-Large is a large-scale corpus of PHP source code comprising more than 12 million lines of PHP code. The dataset is designed to support research in large language model (LLM) pretraining, code intelligence, software engineering automation, and static program analysis for the PHP ecosystem.
By providing a high-volume, language-specific corpus, PHP-Code-Large enables systematic experimentation in PHP-focused model training, domain adaptation, and downstream code understanding tasks.
PHP-Code-Large addresses the need for a dedicated PHP-only dataset at substantial scale, enabling focused research across backend systems, CMS platforms, APIs, and full-stack PHP environments.
If you like it, give the demo a little star and send a shoutout to @MaxLSB, @jddqd, and @GAD-cell for absolutely obliterating the Pareto frontier of French language understanding.
JavaScript-Code-Large is a large-scale corpus of JavaScript source code comprising around 5 million JavaScript files. The dataset is designed to support research in large language model (LLM) pretraining, code intelligence, software engineering automation, and program analysis for the JavaScript ecosystem.
By providing a high-volume, language-specific corpus, JavaScript-Code-Large enables systematic experimentation in JavaScript-focused model training, domain adaptation, and downstream code understanding tasks.
JavaScript-Code-Large addresses the need for a dedicated JavaScript-only dataset at substantial scale, enabling focused research across frontend, backend, and full-stack JavaScript environments.
Java-Code-Large is a large-scale corpus of publicly available Java source code comprising more than 15 million Java code samples. The dataset is designed to support research in large language model (LLM) pretraining, code intelligence, software engineering automation, and program analysis.
By providing a high-volume, language-specific corpus, Java-Code-Large enables systematic experimentation in Java-focused model training, domain adaptation, and downstream code understanding tasks.
Dropping the Qwen3 VL Series of Unredacted MAX-VL models. These models have undergone multi-stage training to minimize refusal rates through continuous abliteration optimization. You can find the models in BF16, FP8-Dynamic, and GGUF formats at the links below.🔥🚀
Introducing FLUX.2-Klein-LoRA-Studio, a demo for image editing using specialized LoRA adapters built for the FLUX.2-Klein-Distilled model. It features an edit-style gallery for multi-style image editing, including de-light, face swap, mannequin, and more. Try the demo below.
Introducing GLM OCR, a multimodal OCR model for complex document understanding, built on the GLM-V encoder–decoder architecture. It delivers high accuracy and strong generalization with a blazing-fast inference pipeline. The demo is live. Try it now. 🤗🚀
Introducing the Qwen-Image-Edit-3D-Lighting-Control app, featuring 8× horizontal and 3× elevational lighting positions for precise 3D lighting control. It enables studio-level lighting using fast Qwen Image Edit inference, paired with Multi-Angle-Lighting adapters. 🔦
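The 8 × 3 layout can be pictured as a grid of azimuth/elevation pairs. A minimal sketch, assuming 45° azimuth spacing and three illustrative elevation levels — the app's actual angle values are not specified in the post:

```python
# 8 horizontal (azimuth) positions, assumed evenly spaced 45 degrees apart
azimuths = [i * 45 for i in range(8)]
# 3 elevational levels -- illustrative values, not the app's actual angles
elevations = [0, 30, 60]

# Every (azimuth, elevation) combination is one selectable lighting direction
light_positions = [(az, el) for az in azimuths for el in elevations]
print(len(light_positions))  # -> 24
```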
Daggr UI version of the Qwen3-TTS demo 🔥 with custom voice, voice design, Qwen3-ASR, and voice cloning nodes. No remote Spaces are used for API inference; all functions run in-app. Powered by t4-m and built with daggr@0.5.2 and gradio@6.
Qwen-Image-Edit-Object-Manipulator Space is now featured in Hugging Face Space of the Week. It enables object manipulation such as extracting objects, adding designs, and removing objects or designs from the red highlighted area using specialized adapters.
🏙️ Hugging Face community post: 🧬 Experimenting with "Dynamic Chaos" in Tamil SLMs
Hi everyone! I just published a new experimental study on Small Language Model (SLM) resilience.
I took the Qwen2.5-0.5B model and put it through a "Chaos Phase" to see how much weight data a tiny model can lose before its understanding of classical Tamil grammar breaks.
Key highlights of the study:
- Target Data: Fine-tuned on the Thirukkural (1,330 couplets + modern explanations).
- The Chaos Step: Applied 20% random weight pruning, but implemented "Layer Protection" for the Token Embeddings and LM Head to keep the characters readable.
- Compression: 4-bit (Q4_K_M) quantization for extreme efficiency.
- Result: A surrealist classical Tamil model that is ultra-light (~300MB) and ultra-fast!
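The protected pruning step can be sketched in a few lines of pure Python. `chaos_prune`, the parameter names, and the toy model below are illustrative stand-ins, not the study's actual code, which would operate on real tensors (e.g. via `torch.nn.utils.prune`):

```python
import random

def chaos_prune(params, protect=("embed", "lm_head"), ratio=0.2, seed=0):
    """Randomly zero `ratio` of the weights, skipping protected layers
    (token embeddings and LM head) so text output stays readable."""
    rng = random.Random(seed)
    pruned = {}
    for name, weights in params.items():
        if any(tag in name for tag in protect):
            pruned[name] = list(weights)  # leave protected layers intact
        else:
            pruned[name] = [0.0 if rng.random() < ratio else w for w in weights]
    return pruned

# Toy "model": a protected embedding plus one prunable layer
model = {"embed.weight": [0.5] * 10, "layer.0.weight": [0.5] * 1000}
out = chaos_prune(model)
print(out["embed.weight"] == model["embed.weight"])  # -> True (protected)
```

Roughly 20% of `layer.0.weight` ends up zeroed, while the embedding survives untouched — the "Layer Protection" that keeps Tamil characters decodable after the chaos phase.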
Introducing QIE-2511-Zoom-Master for highlight-guided area zoom-in, enabling lossless zooming within a drawn square area, and QIE-2511-Object-Remover-v2 for precise object or highlight-guided area cleanup. These experimental adapters were trained on top of QIE-2511. Find the adapters below.