---
license: apache-2.0
---
PrincipiaMistralModel7B is a 7B-parameter causal language model based on Mistral-7B-v0.1, fine-tuned via QLoRA on a custom corpus of logic- and math-focused text inspired by Principia Mathematica and related foundational material.
The goal of this fine-tune is to bias Mistral-7B toward formal, step-by-step logical and mathematical reasoning in the style of that corpus.
This checkpoint is a fully merged model (LoRA merged into base), so it can be loaded directly with AutoModelForCausalLM without PEFT.
Base model: mistralai/Mistral-7B-v0.1. Library: transformers. Format: safetensors, sharded across 3 files.

Intended uses:
- Educational / research exploration of formal logic and step-by-step mathematical reasoning with a 7B open model.
- As a component model inside agents/tools that benefit from logic-biased text generation.

Training: QLoRA fine-tuning from mistralai/Mistral-7B-v0.1, then the adapter weights were merged into the base. This is a research/learning project, not a benchmark-optimized or industrially aligned model.
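For context, the merge step looks roughly like the following. This is a minimal sketch using the PEFT library, not the actual training script; the adapter path is a hypothetical placeholder, and the real QLoRA hyperparameters are not documented here.

```python
# Hypothetical sketch of merging a trained QLoRA adapter into the base
# model so the result can be loaded without PEFT.
# "path/to/qlora-adapter" is a placeholder, not a published artifact.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    torch_dtype=torch.float16,
)
model = PeftModel.from_pretrained(base, "path/to/qlora-adapter")
# Fold the low-rank adapter updates into the base weights.
model = model.merge_and_unload()
model.save_pretrained("PrincipiaMistralModel7B")
```

Because the published checkpoint is the merged result, inference needs neither peft nor bitsandbytes.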
Example usage:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "clarkkitchen22/PrincipiaMistralModel7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",  # requires accelerate; places the 3 shards across available devices
)

prompt = (
    "We work in a simple propositional calculus.\n\n"
    "Premises:\n"
    " (1) p -> q\n"
    " (2) q -> r\n"
    "Conclusion:\n"
    " (3) p -> r\n\n"
    "Explain, step by step, why (3) follows from (1) and (2)."
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=160,
        do_sample=True,
        top_p=0.9,
        temperature=0.3,  # low temperature keeps derivations focused
        repetition_penalty=1.15,
    )

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
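If you prefer the high-level API, the same checkpoint should also work with the transformers text-generation pipeline; a brief sketch with the same sampling settings:

```python
import torch
from transformers import pipeline

# High-level alternative to the snippet above; same checkpoint and
# sampling settings, with loading handled by the pipeline.
generator = pipeline(
    "text-generation",
    model="clarkkitchen22/PrincipiaMistralModel7B",
    torch_dtype=torch.float16,
    device_map="auto",
)
out = generator(
    "Premises: (1) p -> q, (2) q -> r. Show, step by step, that p -> r.",
    max_new_tokens=160,
    do_sample=True,
    top_p=0.9,
    temperature=0.3,
)
print(out[0]["generated_text"])
```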