Collections including paper arxiv:2205.14135

Each group below is one community collection that contains the FlashAttention paper (arXiv 2205.14135).

- Universal Language Model Fine-tuning for Text Classification
  Paper • 1801.06146 • Published • 8
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 24
- FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness
  Paper • 2205.14135 • Published • 15
- SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing
  Paper • 1808.06226 • Published • 3

- LoRA: Low-Rank Adaptation of Large Language Models
  Paper • 2106.09685 • Published • 56
- Attention Is All You Need
  Paper • 1706.03762 • Published • 106
- Direct Preference Optimization: Your Language Model is Secretly a Reward Model
  Paper • 2305.18290 • Published • 64
- Lost in the Middle: How Language Models Use Long Contexts
  Paper • 2307.03172 • Published • 43

- Detecting Pretraining Data from Large Language Models
  Paper • 2310.16789 • Published • 11
- Let's Synthesize Step by Step: Iterative Dataset Synthesis with Large Language Models by Extrapolating Errors from Small Models
  Paper • 2310.13671 • Published • 19
- AutoMix: Automatically Mixing Language Models
  Paper • 2310.12963 • Published • 14
- An Emulator for Fine-Tuning Large Language Models using Small Language Models
  Paper • 2310.12962 • Published • 13

- Attention Is All You Need
  Paper • 1706.03762 • Published • 106
- Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks
  Paper • 2005.11401 • Published • 14
- LoRA: Low-Rank Adaptation of Large Language Models
  Paper • 2106.09685 • Published • 56
- FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness
  Paper • 2205.14135 • Published • 15

- The RefinedWeb Dataset for Falcon LLM: Outperforming Curated Corpora with Web Data, and Web Data Only
  Paper • 2306.01116 • Published • 41
- FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness
  Paper • 2205.14135 • Published • 15
- RoFormer: Enhanced Transformer with Rotary Position Embedding
  Paper • 2104.09864 • Published • 16
- Language Models are Few-Shot Learners
  Paper • 2005.14165 • Published • 18

- Attention Is All You Need
  Paper • 1706.03762 • Published • 106
- LoRA: Low-Rank Adaptation of Large Language Models
  Paper • 2106.09685 • Published • 56
- Direct Preference Optimization: Your Language Model is Secretly a Reward Model
  Paper • 2305.18290 • Published • 64
- Lost in the Middle: How Language Models Use Long Contexts
  Paper • 2307.03172 • Published • 43

- Efficient Memory Management for Large Language Model Serving with PagedAttention
  Paper • 2309.06180 • Published • 26
- LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models
  Paper • 2308.16137 • Published • 40
- Scaling Transformer to 1M tokens and beyond with RMT
  Paper • 2304.11062 • Published • 3
- DeepSpeed Ulysses: System Optimizations for Enabling Training of Extreme Long Sequence Transformer Models
  Paper • 2309.14509 • Published • 19

- MiniGPT-v2: large language model as a unified interface for vision-language multi-task learning
  Paper • 2310.09478 • Published • 21
- Can GPT models be Financial Analysts? An Evaluation of ChatGPT and GPT-4 on mock CFA Exams
  Paper • 2310.08678 • Published • 14
- Llama 2: Open Foundation and Fine-Tuned Chat Models
  Paper • 2307.09288 • Published • 248
- LLaMA: Open and Efficient Foundation Language Models
  Paper • 2302.13971 • Published • 20