## Description

This repository contains the bf16 weights of Lonepino-11B. Just a normal model.
## Models used
- Intel/neural-chat-7b-v3-3-Slerp
- NeverSleep/Noromaid-7b-v0.2
- chargoddard/loyal-piano-m7-cdpo
- maywell/PiVoT-0.1-Starling-LM-RP
## The secret sauce
neural-maid-11B:

```yaml
slices:
  - sources:
      - model: Intel/neural-chat-7b-v3-3-Slerp
        layer_range: [0, 24]
  - sources:
      - model: NeverSleep/Noromaid-7b-v0.2
        layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
loyal-PiVoT-11B:

```yaml
slices:
  - sources:
      - model: chargoddard/loyal-piano-m7-cdpo
        layer_range: [0, 24]
  - sources:
      - model: maywell/PiVoT-0.1-Starling-LM-RP
        layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
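Each passthrough config above stacks layers [0, 24) of one 7B donor onto layers [8, 32) of the other, so each intermediate model ends up with 48 layers (roughly 11B parameters) and a duplicated mid-section. A small Python sketch of the bookkeeping (model names are placeholders):

```python
# Sketch of the passthrough layer map used by both configs above:
# 24 layers from the first donor plus 24 from the second, with the
# middle of each original 32-layer stack represented twice overall.
first = [("donor_a", i) for i in range(0, 24)]   # layer_range: [0, 24]
second = [("donor_b", i) for i in range(8, 32)]  # layer_range: [8, 32]
stacked = first + second
print(len(stacked))  # 48 layers -> roughly 11B parameters from two 7B models
```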
Lonepino-11B:

```yaml
slices:
  - sources:
      - model: "./neural-maid-11B"
        layer_range: [0, 48]
      - model: "./loyal-PiVoT-11B"
        layer_range: [0, 48]
merge_method: slerp
base_model: "./neural-maid-11B"
parameters:
  t:
    - value: 0.4
dtype: bfloat16
```
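The final merge blends the two 11B stacks with spherical linear interpolation; `t: 0.4` keeps the result weighted toward the base model, neural-maid-11B. For intuition, here is a minimal numpy sketch of textbook slerp — not mergekit's exact implementation, which has additional edge-case handling:

```python
import numpy as np

def slerp(a: np.ndarray, b: np.ndarray, t: float) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors."""
    a_flat, b_flat = a.ravel(), b.ravel()
    # Angle between the two tensors, treated as flat vectors.
    cos_omega = np.dot(a_flat, b_flat) / (np.linalg.norm(a_flat) * np.linalg.norm(b_flat))
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    if np.isclose(np.sin(omega), 0.0):
        return (1 - t) * a + t * b  # near-parallel: fall back to plain lerp
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

# t = 0.4 keeps the result closer to the base model (neural-maid-11B).
merged = slerp(np.random.randn(4, 4), np.random.randn(4, 4), t=0.4)
```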
## Prompt template

Alpaca. Or ChatML. Or any format you like.
=w=
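For concreteness, a minimal sketch of loading the model and prompting it in the Alpaca format, assuming `transformers` and `accelerate` are installed (the instruction text is just an illustration):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "beberik/Lonepino-11B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Standard Alpaca template; ChatML or other formats should also work.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a haiku about model merging.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```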
I used mergekit for all the merging described here; the configs above can be run with its `mergekit-yaml` entry point.
Thanks to Undi95 for the original 11B Mistral merge recipe.
## Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 70.10 |
| AI2 Reasoning Challenge (25-Shot) | 68.26 |
| HellaSwag (10-Shot) | 84.57 |
| MMLU (5-Shot) | 63.76 |
| TruthfulQA (0-shot) | 63.45 |
| Winogrande (5-shot) | 78.93 |
| GSM8k (5-shot) | 61.64 |