Dataset used for domain adaptation: manueltonneau/spanish-hate-speech-superset
How to use citiusLTL/misoBETO with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("fill-mask", model="citiusLTL/misoBETO")

# Load model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("citiusLTL/misoBETO")
model = AutoModelForMaskedLM.from_pretrained("citiusLTL/misoBETO")

misoBETO is a Spanish BERT language model (BETO) adapted to the misogyny domain.
It was adapted using a guided lexical masking strategy during masked language model (MLM) pretraining. Instead of randomly masking tokens, we prioritized masking words appearing in a misogyny-specific lexicon. The base corpus used for domain adaptation was the Spanish Hate Speech Superset.
For training we used a batch size of 8 and a learning rate of 2e-5, and trained the model for four epochs on an NVIDIA GeForce RTX 5090 GPU.
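The masking code itself is not part of this card; the sketch below only illustrates the guided lexical masking idea under stated assumptions: a tiny placeholder lexicon, a 15% masking budget, and random masking as a fallback once the lexicon words are exhausted. None of these specifics are confirmed by the card.

import random

# Hypothetical toy lexicon for illustration only; the real misogyny lexicon is not included in this card.
LEXICON = {"zorra", "histerica"}
MASK_TOKEN = "[MASK]"
MASK_BUDGET = 0.15  # assumed standard BERT masking rate

def guided_lexical_masking(tokens, lexicon=LEXICON, budget=MASK_BUDGET, seed=None):
    """Mask lexicon words first, then spend any remaining budget on random tokens."""
    rng = random.Random(seed)
    n_to_mask = max(1, round(budget * len(tokens)))
    # Lexicon words go to the front of the masking queue.
    lexicon_positions = [i for i, t in enumerate(tokens) if t.lower() in lexicon]
    other_positions = [i for i in range(len(tokens)) if i not in set(lexicon_positions)]
    rng.shuffle(lexicon_positions)
    rng.shuffle(other_positions)
    chosen = set((lexicon_positions + other_positions)[:n_to_mask])
    masked = [MASK_TOKEN if i in chosen else t for i, t in enumerate(tokens)]
    labels = {i: tokens[i] for i in chosen}  # original tokens serve as MLM labels
    return masked, labels

masked, labels = guided_lexical_masking("ella es una zorra y lo sabe".split(), seed=0)
print(masked)   # the lexicon word is masked before any random token
print(labels)

In the actual pretraining, logic like this would live in a data collator that operates on token ids and the tokenizer's mask token id rather than on raw words.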
from transformers import pipeline
pipe = pipeline("fill-mask", model="citiusLTL/misoBETO")
preds = pipe("Ella es una [MASK]")
print(preds)
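The pipeline call returns a list of candidate fills for the [MASK] position, each with the completed sequence, a confidence score, and the predicted token. To work below the pipeline level, the tokenizer and model can also be loaded directly: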
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("citiusLTL/misoBETO")
model = AutoModelForMaskedLM.from_pretrained("citiusLTL/misoBETO")
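Continuing from the tokenizer and model loaded above, a minimal sketch of scoring the [MASK] position by hand (PyTorch assumed, top 5 candidates shown):

import torch

inputs = tokenizer("Ella es una [MASK]", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Find the [MASK] position and take the five highest-scoring candidate tokens.
mask_index = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
top_ids = logits[0, mask_index].topk(5, dim=-1).indices[0]
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))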
Base model: dccuchile/bert-base-spanish-wwm-uncased