Interplay-LM Context Pretrain Models

This repository is organized by context-mixture setting. Each top-level directory corresponds to one pretraining setting used in the context experiments.

Within each setting:

  • base/ stores the final pretraining checkpoint used to initialize RL.
  • rl/ stores the final RL checkpoints for each experiment variant.

Each checkpoint directory contains only the Hugging Face files required for inference.
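
Because the RL variant names are not enumerated on this card, one way to discover every available subfolder is to list the repository contents with huggingface_hub. This is a sketch, not part of the release, and assumes huggingface_hub is installed.

from huggingface_hub import list_repo_files

# Enumerate all files in the repo, then derive the checkpoint directories.
files = list_repo_files("Interplay-LM-Reasoning/context_pretrain")
subdirs = sorted({"/".join(f.split("/")[:-1]) for f in files if "/" in f})
for d in subdirs:
    print(d)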

Included settings

  • idzoo_0.9zoo_0.1teacher
  • idzoo_0.99zoo_0.01teacher
  • idzoo_0.999zoo_0.001teacher
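
The directory names appear to encode the pretraining data mixture; for example, idzoo_0.99zoo_0.01teacher reads as 0.99 zoo data and 0.01 teacher data. That parsing is an assumption from the naming pattern, not something the card states. To fetch a single setting rather than the whole repository, a sketch using huggingface_hub's snapshot_download:

from huggingface_hub import snapshot_download

# Download only one setting's checkpoints; allow_patterns filters the files.
local_dir = snapshot_download(
    repo_id="Interplay-LM-Reasoning/context_pretrain",
    allow_patterns=["idzoo_0.99zoo_0.01teacher/*"],
)
print(local_dir)  # local path containing the filtered snapshot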

Load

from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Interplay-LM-Reasoning/context_pretrain"
# subfolder picks a checkpoint: "<setting>/base" for the pretraining
# checkpoint, or "<setting>/rl/<variant>" for an RL checkpoint.
subdir = "idzoo_0.99zoo_0.01teacher/rl/contextzoo_0.99zoo_0.01teacher_process_strict"

tokenizer = AutoTokenizer.from_pretrained(repo_id, subfolder=subdir)
model = AutoModelForCausalLM.from_pretrained(repo_id, subfolder=subdir)
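
Once loaded, a quick smoke test can confirm the checkpoint generates text. The prompt below is arbitrary; whatever prompt format the models were trained with is not documented on this card.

# Minimal generation check (requires torch, installed alongside transformers).
inputs = tokenizer("Question: What is 12 * 7? Answer:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))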

Citation

@misc{zhang2025interplaypretrainingmidtrainingrl,
      title={On the Interplay of Pre-Training, Mid-Training, and RL on Reasoning Language Models},
      author={Charlie Zhang and Graham Neubig and Xiang Yue},
      year={2025},
      eprint={2512.07783},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2512.07783},
}