This is an experimental model inspired by the paper *TinyStories: How Small Can Language Models Be and Still Speak Coherent English?* (https://arxiv.org/abs/2305.07759). It extends the same concept to Tamil: a 30M parameter LLaMA-architecture model that outputs coherent Tamil is presented here.
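As a rough illustration of what a ~30M parameter budget looks like at this scale, here is a minimal sketch using Transformers' LlamaConfig. The hyperparameters below are assumptions picked to land near 30M parameters; they are not this model's actual configuration:

```python
from transformers import LlamaConfig, LlamaForCausalLM

# All hyperparameters are illustrative assumptions chosen to land near a
# ~30M parameter budget; they are NOT the published config of this model.
config = LlamaConfig(
    vocab_size=32000,        # assumed; the actual Tamil tokenizer vocab may differ
    hidden_size=320,
    intermediate_size=864,
    num_hidden_layers=8,
    num_attention_heads=8,
)
model = LlamaForCausalLM(config)

n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.1f}M parameters")  # roughly 30M with these values
```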
Additional experimentation beyond the original paper is also included in the model.
For now, this is a toy model for researchers, students, and LLM enthusiasts to explore the model's linguistic capabilities.
We release the weights in two formats: the Hugging Face Transformers format, and the GGML format for use with CTransformers or llama.cpp (see the sketch after the usage example below).
It is not fit for any practical purpose other than research and experimentation.
Usage:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("RajuKandasamy/tamillama_tiny_30m")
model = AutoModelForCausalLM.from_pretrained("RajuKandasamy/tamillama_tiny_30m")

# Tamil prompt, equivalent to: "Words:\npromise, mouse, big\nSummary:"
prompt = """சொற்கள்:
வாக்குறுதி, எலி, பெரியது
சுருக்கம்:"""

# Tokenize and generate a continuation of up to 256 new tokens.
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generation_output = model.generate(input_ids=input_ids, max_new_tokens=256)
print(tokenizer.decode(generation_output[0]))
```
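For the GGML weights mentioned above, here is a minimal sketch using the CTransformers Python package. The `model_file` name is a hypothetical placeholder, so check the repository for the actual GGML filename:

```python
from ctransformers import AutoModelForCausalLM

# model_file is a hypothetical filename -- look up the actual GGML file in the repo.
llm = AutoModelForCausalLM.from_pretrained(
    "RajuKandasamy/tamillama_tiny_30m",
    model_file="tamillama_tiny_30m.ggmlv3.q8_0.bin",
    model_type="llama",
)

# Same "Words:/Summary:" style prompt as in the Transformers example above.
prompt = "சொற்கள்:\nவாக்குறுதி, எலி, பெரியது\nசுருக்கம்:"
print(llm(prompt, max_new_tokens=256))
```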