---
language:
- en
license: apache-2.0
tags:
- mistral
- causal-lm
- text-generation
- qlora
- merged-lora
- mathematics
- logic
- principia-mathematica
- research
pipeline_tag: text-generation
base_model: mistralai/Mistral-7B-v0.1
model_type: mistral
library_name: transformers
model_creator: clarkkitchen22
---
# PrincipiaMistralModel7B
PrincipiaMistralModel7B is a 7B-parameter causal language model based on Mistral-7B-v0.1, fine-tuned via QLoRA on a custom corpus of logic- and math-focused text inspired by Principia Mathematica and related foundational material.
The goal of this model is to bias Mistral-7B toward:
- More formal reasoning about implications and basic proof structures
- Better familiarity with symbolic logic notation
- Explanations of classical foundations-of-mathematics ideas in clear English
This checkpoint is a fully merged model (LoRA merged into base), so it can be loaded directly with `AutoModelForCausalLM` without PEFT.
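For reference, a merge of this kind is typically produced with `peft`'s `merge_and_unload()`. The sketch below shows the general pattern only; the adapter path and dtype are illustrative placeholders, not the exact commands used for this checkpoint.

```python
# Illustrative only: how a QLoRA adapter is typically merged into base weights.
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    torch_dtype=torch.float16,
)

# Attach the trained LoRA adapter (placeholder path), then fold its low-rank
# updates into the base weights so the result is a plain Mistral checkpoint.
merged = PeftModel.from_pretrained(base, "path/to/qlora-adapter").merge_and_unload()
merged.save_pretrained("PrincipiaMistralModel7B", safe_serialization=True)
```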
## Model Details
- Base model: `mistralai/Mistral-7B-v0.1`
- Architecture: Transformer (GQA + sliding window attention, as in Mistral-7B)
- Parameters: ~7B
- Library: Hugging Face `transformers`
- Finetuning method: QLoRA (low-rank adapters, later merged into full weights)
- Precision: Saved as `safetensors`, sharded across 3 files
## Intended Use

### Primary use cases
Educational / research exploration of:
- Basic propositional logic (e.g. implications, modus ponens, simple derivations)
- Foundations-of-mathematics style narratives (inspired by Principia Mathematica)
- Explanations of logic and proof ideas for students or hobbyists
As a component model inside agents/tools that:
- Need slightly more structured, formal reasoning than a generic base model
- Work with simple proof sketches, logical implications, or math-adjacent text
### Not intended for
- High-stakes decision making (finance, medicine, law, safety-critical systems)
- Use as a fully robust automated theorem prover
- Use without human oversight in any domain that affects real people’s lives
## Training & Data (High Level)
- Method: QLoRA fine-tuning on top of `mistralai/Mistral-7B-v0.1`, then weights merged
- Hardware: Single consumer GPU (e.g., NVIDIA RTX 2070-class)
- Epochs: ~1 epoch over the custom dataset (light, targeted fine-tune)
- Data:
- Text inspired by Principia Mathematica–style logic and foundational mathematics
- Simple logical implication examples and step-by-step reasoning prompts
- Explanations of core foundational concepts in natural language
This is a research/learning project, not a benchmark-optimized or industrially aligned model.
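For anyone reproducing a similar light fine-tune, a minimal QLoRA setup with `transformers`, `peft`, and `bitsandbytes` looks roughly like the sketch below. The rank, target modules, and other hyperparameters are illustrative assumptions, not the exact values used for this model.

```python
# Minimal QLoRA setup sketch (hyperparameters are illustrative).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # 4-bit base weights (QLoRA)
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16,                                   # low-rank adapter dimension
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()          # only the adapters are trainable
# ...train for ~1 epoch with your usual Trainer/SFT loop, then merge as shown above.
```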
## How to Use

### Basic loading (Transformers)
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "clarkkitchen22/PrincipiaMistralModel7B"

# The LoRA weights are already merged, so no peft / adapter loading is needed.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = (
    "We work in a simple propositional calculus.\n\n"
    "Premises:\n"
    " (1) p -> q\n"
    " (2) q -> r\n"
    "Conclusion:\n"
    " (3) p -> r\n\n"
    "Explain, step by step, why (3) follows from (1) and (2)."
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Low temperature keeps the step-by-step explanation focused.
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=160,
        do_sample=True,
        top_p=0.9,
        temperature=0.3,
        repetition_penalty=1.15,
    )

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
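The merged checkpoint also works with the high-level `pipeline` helper, which is convenient for quick experiments; the prompt and sampling settings below are just an example.

```python
# Equivalent quick start using the transformers pipeline helper.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="clarkkitchen22/PrincipiaMistralModel7B",
    torch_dtype=torch.float16,
    device_map="auto",
)

result = generator(
    "State modus ponens and give a one-line example.",
    max_new_tokens=120,
    do_sample=True,
    temperature=0.3,
    top_p=0.9,
)
print(result[0]["generated_text"])
```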