Collections including paper arxiv:2406.11271

- Woodpecker: Hallucination Correction for Multimodal Large Language Models
  Paper • 2310.16045 • Published • 17
- HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(ision), LLaVA-1.5, and Other Multi-modality Models
  Paper • 2310.14566 • Published • 27
- SILC: Improving Vision Language Pretraining with Self-Distillation
  Paper • 2310.13355 • Published • 9
- Conditional Diffusion Distillation
  Paper • 2310.01407 • Published • 20

- EVA-CLIP-18B: Scaling CLIP to 18 Billion Parameters
  Paper • 2402.04252 • Published • 29
- Vision Superalignment: Weak-to-Strong Generalization for Vision Foundation Models
  Paper • 2402.03749 • Published • 14
- ScreenAI: A Vision-Language Model for UI and Infographics Understanding
  Paper • 2402.04615 • Published • 44
- EfficientViT-SAM: Accelerated Segment Anything Model Without Performance Loss
  Paper • 2402.05008 • Published • 23

- MINT-1T: Scaling Open-Source Multimodal Data by 10x: A Multimodal Dataset with One Trillion Tokens
  Paper • 2406.11271 • Published • 21
- Diffusion Curriculum: Synthetic-to-Real Generative Curriculum Learning via Image-Guided Diffusion
  Paper • 2410.13674 • Published • 17
- Mini-Omni2: Towards Open-source GPT-4o with Vision, Speech and Duplex Capabilities
  Paper • 2410.11190 • Published • 22

- MINT-1T: Scaling Open-Source Multimodal Data by 10x: A Multimodal Dataset with One Trillion Tokens
  Paper • 2406.11271 • Published • 21
- mlfoundations/MINT-1T-HTML
  Viewer • Updated • 623M rows • 37.3k downloads • 90 likes
- mlfoundations/MINT-1T-ArXiv
  Viewer • Updated • 5.6M rows • 19.3k downloads • 55 likes
- mlfoundations/MINT-1T-PDF-CC-2024-18
  Updated • 8.5k downloads • 19 likes
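
The MINT-1T subsets above are hosted as Hugging Face datasets, so they can be read in streaming mode rather than downloaded in full. Below is a minimal sketch using the standard `datasets` library; the "train" split name and the field inspection are assumptions for illustration, not details taken from this listing.

```python
# Minimal sketch: stream a MINT-1T subset with the Hugging Face `datasets`
# library. Assumption: the subset exposes a standard "train" split.
from datasets import load_dataset

# streaming=True iterates over records lazily, so the trillion-token-scale
# dataset is never downloaded to disk in full.
ds = load_dataset("mlfoundations/MINT-1T-HTML", split="train", streaming=True)

# Peek at the first interleaved document to see which fields it carries.
first = next(iter(ds))
print(sorted(first.keys()))
```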

- MS MARCO Web Search: a Large-scale Information-rich Web Dataset with Millions of Real Click Labels
  Paper • 2405.07526 • Published • 21
- Automatic Data Curation for Self-Supervised Learning: A Clustering-Based Approach
  Paper • 2405.15613 • Published • 17
- A Touch, Vision, and Language Dataset for Multimodal Alignment
  Paper • 2402.13232 • Published • 16
- How Do Large Language Models Acquire Factual Knowledge During Pretraining?
  Paper • 2406.11813 • Published • 31

- World Model on Million-Length Video And Language With RingAttention
  Paper • 2402.08268 • Published • 40
- Improving Text Embeddings with Large Language Models
  Paper • 2401.00368 • Published • 82
- Chain-of-Thought Reasoning Without Prompting
  Paper • 2402.10200 • Published • 109
- FiT: Flexible Vision Transformer for Diffusion Model
  Paper • 2402.12376 • Published • 48