Feature Extraction
sentence-transformers
PyTorch
Safetensors
Transformers
English
mistral
mteb
Eval Results
text-embeddings-inference
Instructions to use intfloat/e5-mistral-7b-instruct with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- sentence-transformers
How to use intfloat/e5-mistral-7b-instruct with sentence-transformers:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("intfloat/e5-mistral-7b-instruct")

sentences = [
    "The weather is lovely today.",
    "It's so sunny outside!",
    "He drove to the stadium.",
]
embeddings = model.encode(sentences)

similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # [3, 3]
```
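For retrieval-style use, this model expects each query (but not the documents) to be prefixed with a one-sentence task instruction in the form `Instruct: {task}\nQuery: {query}`. A minimal sketch using `encode`'s `prompt` argument; the task description below is only an example, not something fixed by the library:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("intfloat/e5-mistral-7b-instruct")

# Example task description; adapt it to your retrieval task.
task = "Given a web search query, retrieve relevant passages that answer the query"

queries = ["how much protein should a female eat"]
documents = [
    "As a general guideline, the CDC's average requirement of protein for "
    "women ages 19 to 70 is 46 grams per day."
]

# Queries carry the instruction prefix; documents are encoded as-is.
query_embeddings = model.encode(queries, prompt=f"Instruct: {task}\nQuery: ")
document_embeddings = model.encode(documents)

print(model.similarity(query_embeddings, document_embeddings))
```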
- Transformers
How to use intfloat/e5-mistral-7b-instruct with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("feature-extraction", model="intfloat/e5-mistral-7b-instruct")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("intfloat/e5-mistral-7b-instruct")
model = AutoModel.from_pretrained("intfloat/e5-mistral-7b-instruct")
```
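Both the `feature-extraction` pipeline and the raw `AutoModel` return per-token hidden states, so getting one vector per text still requires pooling. The model card pools the hidden state of the final EOS token and L2-normalizes it, with queries (but not documents) carrying the `Instruct: {task}\nQuery: {...}` prefix. A condensed sketch along those lines; the task text and `max_length` are illustrative assumptions:

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("intfloat/e5-mistral-7b-instruct")
model = AutoModel.from_pretrained("intfloat/e5-mistral-7b-instruct")
model.eval()

task = "Given a web search query, retrieve relevant passages that answer the query"
texts = [
    f"Instruct: {task}\nQuery: how much protein should a female eat",  # query
    "As a general guideline, the CDC's average requirement of protein for women "
    "ages 19 to 70 is 46 grams per day.",  # document, no instruction prefix
]

# Tokenize, leaving room to append the EOS token the embedding is pooled from.
batch = tokenizer(texts, max_length=4095, truncation=True,
                  padding=False, return_attention_mask=False)
batch["input_ids"] = [ids + [tokenizer.eos_token_id] for ids in batch["input_ids"]]
batch = tokenizer.pad(batch, padding=True, return_attention_mask=True, return_tensors="pt")

with torch.no_grad():
    last_hidden = model(**batch).last_hidden_state

# Pool the hidden state of the last real token (the appended EOS), then L2-normalize.
if batch["attention_mask"][:, -1].all():  # left padding: last position is a real token
    embeddings = last_hidden[:, -1]
else:  # right padding: index each sequence's last real token
    lengths = batch["attention_mask"].sum(dim=1) - 1
    embeddings = last_hidden[torch.arange(last_hidden.size(0)), lengths]
embeddings = F.normalize(embeddings, p=2, dim=1)

print(embeddings[:1] @ embeddings[1:].T)  # query-document cosine similarity
```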
- Inference
- Notebooks
- Google Colab
- Kaggle
Is there a way to do fine-tuning using STS datasets?
#27
by ijkim - opened
Hi there!
In the paper, an NLI-format dataset is used, and the InfoNCE loss is computed over a positive and hard negatives for each query.
Can you tell me how to train on an STS-format dataset, which labels sentence pairs A and B with a similarity score and therefore differs from the format above?
Can I simply disable the negatives in the InfoNCE loss and proceed?
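Not the paper's recipe, but for illustration: sentence-transformers ships losses that consume sentence pairs plus a similarity score and need no explicit negatives, such as CoSENTLoss or CosineSimilarityLoss. A minimal sketch, assuming hypothetical columns named sentence1/sentence2/score and ignoring the practicalities of fine-tuning a 7B model (which would typically call for LoRA/PEFT):

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import CoSENTLoss

model = SentenceTransformer("intfloat/e5-mistral-7b-instruct")

# Hypothetical STS-style pairs with similarity scores normalized to [0, 1].
train_dataset = Dataset.from_dict({
    "sentence1": ["A plane is taking off.", "A man is playing a flute."],
    "sentence2": ["An air plane is taking off.", "A man is playing a guitar."],
    "score": [1.0, 0.35],
})

# CoSENTLoss ranks pairs against each other by their scores,
# so no in-batch or hard negatives are required.
loss = CoSENTLoss(model)

trainer = SentenceTransformerTrainer(
    model=model,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```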