How to use InvokeAI/ip_adapter_sd_image_encoder with Transformers:

```python
# Load the model directly. This repository contains a CLIP vision encoder,
# so there is no tokenizer to load -- only the vision model itself.
from transformers import CLIPVisionModelWithProjection

model = CLIPVisionModelWithProjection.from_pretrained(
    "InvokeAI/ip_adapter_sd_image_encoder"
)
```
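To see the encoder's input/output shape without downloading the full pretrained weights, here is a minimal sketch using a tiny, randomly initialized `CLIPVisionConfig` (the dimensions below are illustrative only; for real use, load the pretrained model as shown above):

```python
import numpy as np
import torch
from PIL import Image
from transformers import CLIPImageProcessor, CLIPVisionConfig, CLIPVisionModelWithProjection

# Tiny illustrative config -- NOT the real model's dimensions.
config = CLIPVisionConfig(
    hidden_size=32,
    intermediate_size=64,
    num_hidden_layers=2,
    num_attention_heads=4,
    image_size=224,
    patch_size=32,
    projection_dim=16,
)
model = CLIPVisionModelWithProjection(config)

# Default CLIP preprocessing: resize/crop to 224x224, CLIP normalization.
processor = CLIPImageProcessor()

image = Image.fromarray(np.zeros((64, 64, 3), dtype=np.uint8))
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs)

# The projected embedding has shape (batch_size, projection_dim).
print(out.image_embeds.shape)
```

The same call pattern applies to the pretrained encoder; only the projection dimension differs.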
This is the image encoder required for the SD1.5 IP Adapter model to function correctly. It is compatible with Invoke AI version 3.2+.
IP Adapter lets users supply an Image Prompt, which the system interprets and passes in as conditioning for the image generation process.
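Conceptually, the conditioning step can be sketched as follows. This is a simplified illustration with hypothetical dimensions, not InvokeAI's actual code: an IP-Adapter-style projection maps the image embedding to a few extra tokens, which are appended to the text-encoder conditioning that the diffusion model cross-attends to:

```python
import torch

# Hypothetical dimensions for illustration only.
text_tokens = torch.randn(1, 77, 768)   # text-encoder conditioning sequence
image_embed = torch.randn(1, 1024)      # pooled output of the image encoder

# Sketch of an IP-Adapter-style projection: one embedding -> 4 extra tokens.
proj = torch.nn.Linear(1024, 4 * 768)
image_tokens = proj(image_embed).reshape(1, 4, 768)

# Image tokens are appended to the text tokens as additional conditioning.
conditioning = torch.cat([text_tokens, image_tokens], dim=1)
print(conditioning.shape)  # (1, 81, 768)
```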
The Community Edition of Invoke AI can be found at invoke.ai or on GitHub at https://github.com/invoke-ai/InvokeAI
Note: This model is a copy of https://huggingface.co/h94/IP-Adapter/tree/5c2eae7d8a9c3365ba4745f16b94eb0293e319d3/models/image_encoder, hosted here in a format compatible with InvokeAI.