Retrieval Augmented Generation (RAG) is a popular technique for getting LLMs to provide answers that are grounded in a data source. But what do you do when your knowledge base includes images, like graphs or photos? By adding multimodal models to your RAG flow, you can get answers based on image sources, too!
Our most popular RAG solution accelerator, azure-search-openai-demo, now has an optional feature for RAG on image sources. In the example question below, the app answers a question that requires correctly interpreting a bar graph:
This blog post will walk through the changes we made to enable multimodal RAG, both so that developers using the solution accelerator can understand how it works, and so that developers using other RAG solutions can bring in multimodal support.
First let's talk about two essential ingredients: multimodal LLMs and multimodal embedding models.
Azure now offers multiple multimodal LLMs: gpt-4o and gpt-4o-mini, through the Azure OpenAI service, and Phi-3.5-vision-instruct, through the Azure AI Model Catalog. These models accept both images and text and return text responses. (In the future, we may have LLMs that take audio input and return non-text outputs!)
For example, an API call to the gpt-4o model can contain a question along with an image URL:
{
  "role": "user",
  "content": [
    {
      "type": "text",
      "text": "What's in this image?"
    },
    {
      "type": "image_url",
      "image_url": { "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg" }
    }
  ]
}
Those image URLs can be specified as full HTTP URLs, if the image happens to be available on the public web, or they can be specified as base-64 encoded Data URIs, which is particularly helpful for privately stored images.
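As a sketch of the Data URI approach, here is one way to encode a local image file and build the multimodal message in Python. The helper names are our own; the message can then be passed to any chat completions call that accepts the image_url content type:

```python
import base64
import mimetypes

def image_to_data_uri(path: str) -> str:
    """Encode a local image file as a base-64 data URI."""
    mime = mimetypes.guess_type(path)[0] or "image/png"
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return f"data:{mime};base64,{encoded}"

def build_vision_message(question: str, image_data_uri: str) -> dict:
    """Build a chat message combining a text question and an image."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            {"type": "image_url", "image_url": {"url": image_data_uri}},
        ],
    }

# The resulting message can be sent with the openai Python SDK, e.g.:
# client.chat.completions.create(model="gpt-4o", messages=[message])
```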
For more examples of working with gpt-4o, check out openai-chat-vision-quickstart, a repo that deploys a simple Chat+Vision app to Azure and includes Jupyter notebooks showcasing additional scenarios.
Azure also offers a multimodal embedding API, as part of the Azure AI Vision APIs, that can compute embeddings in a multimodal space for both text and images. The API uses the state-of-the-art Florence model from Microsoft Research.
For example, this API call returns the embedding vector for an image:
curl.exe -v -X POST "https://<endpoint>/computervision/retrieval:vectorizeImage?api-version=2024-02-01-preview&model-version=2023-04-15" \
  -H "Ocp-Apim-Subscription-Key: <subscription-key>" \
  -H "Content-Type: application/json" \
  --data-ascii "{ 'url': 'https://learn.microsoft.com/azure/ai-services/computer-vision/media/quickstarts/presentation.png' }"
Once we have the ability to embed both images and text in the same embedding space, we can use vector search to find images that are similar to a user's query. For an example, check out this notebook that sets up a basic multimodal search of images using Azure AI Search.
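As a minimal sketch of the idea, the snippet below embeds a text query with the AI Vision vectorizeText endpoint (the sibling of vectorizeImage shown above) and ranks stored image vectors by cosine similarity. The endpoint and key are placeholders, and in a real app you would let Azure AI Search perform the vector ranking rather than computing similarity in memory:

```python
import json
import urllib.request

def vectorize_text(endpoint: str, key: str, text: str) -> list[float]:
    """Embed a text query into the same multimodal space as the images,
    using the Azure AI Vision retrieval:vectorizeText API."""
    url = (f"{endpoint}/computervision/retrieval:vectorizeText"
           "?api-version=2024-02-01-preview&model-version=2023-04-15")
    req = urllib.request.Request(
        url,
        data=json.dumps({"text": text}).encode("utf-8"),
        headers={"Ocp-Apim-Subscription-Key": key,
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["vector"]

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Score how close two embeddings are; higher means more similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)
```

A query like "bar graph of quarterly revenue" embedded this way can then be compared against the stored image embeddings to surface the most relevant document pages.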
With those two multimodal models, we were able to give our RAG solution the ability to include image sources in both the retrieval and answering process.
At a high level, we made the following changes:

- Added an imageEmbedding field to the Azure AI Search index to store multimodal embeddings of document page images
- Extended the data ingestion flow to upload an image of each document page to Blob Storage and compute its multimodal embedding
- Extended the question-answering flow to search over the image embeddings and send image sources to the multimodal LLM
Let's dive deeper into each of the changes above.
For our standard RAG on documents approach, we use an Azure AI Search index that stores the following fields:

- content: The extracted text content from Azure Document Intelligence, which can process a wide range of files and can even OCR images inside files.
- sourcefile: The filename of the document.
- sourcepage: The filename with page number, for more precise citations.
- embedding: A vector field with 1536 dimensions, to store the embedding of the content field, computed using the text-only OpenAI ada-002 model.

For RAG on images, we add an additional field:

- imageEmbedding: A vector field with 1024 dimensions, to store the embedding of the image version of the document page, computed using the AI Vision vectorizeImage API endpoint.
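To make the schema concrete, here is a sketch of the fields as an Azure AI Search REST API index definition. The index and vector profile names ("gptkbindex", "myHnswProfile") are illustrative placeholders; the two dimension values match the models described above:

```python
# Sketch of the search index schema as an Azure AI Search REST API body.
index_definition = {
    "name": "gptkbindex",  # hypothetical index name
    "fields": [
        {"name": "id", "type": "Edm.String", "key": True},
        {"name": "content", "type": "Edm.String", "searchable": True},
        {"name": "sourcefile", "type": "Edm.String", "filterable": True},
        {"name": "sourcepage", "type": "Edm.String", "filterable": True},
        {
            "name": "embedding",
            "type": "Collection(Edm.Single)",
            "searchable": True,
            "dimensions": 1536,  # text-only OpenAI ada-002 embeddings
            "vectorSearchProfile": "myHnswProfile",
        },
        {
            "name": "imageEmbedding",
            "type": "Collection(Edm.Single)",
            "searchable": True,
            "dimensions": 1024,  # AI Vision multimodal embeddings
            "vectorSearchProfile": "myHnswProfile",
        },
    ],
}
```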
For our standard RAG approach, data ingestion involves these steps:

1. Extract text from each document using Azure Document Intelligence
2. Split the extracted text into smaller chunks
3. Compute an embedding for each chunk using the OpenAI ada-002 model
4. Index the chunks, embeddings, and metadata into Azure AI Search
For RAG on images, we add two additional steps before indexing: uploading an image version of each document page to Blob Storage and computing multi-modal embeddings for each image.
The images are not just a direct copy of the document page. Instead, they contain the original document filename written in the top left corner of the image, like so:
This crucial step enables the GPT vision model to provide citations in its answers later on. From a technical perspective, we achieved this by first using the PyMuPDF Python package to convert document pages to images, then using the Pillow Python package to add a top border to each image and write the filename there.
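The Pillow step can be sketched as follows. This is a simplified version of the idea, not the repo's exact implementation: the input image would come from PyMuPDF's page rendering (e.g. page.get_pixmap()), and the border height and SourceFileName: label format follow the description in the system prompt shown later:

```python
from PIL import Image, ImageDraw

BORDER_HEIGHT = 30  # hypothetical banner size in pixels

def add_filename_banner(page_image: Image.Image, filename: str) -> Image.Image:
    """Add a white top border to a rendered document page and write the
    source filename at (10, 10), so the vision model can cite the file."""
    banner = Image.new(
        "RGB", (page_image.width, page_image.height + BORDER_HEIGHT), "white"
    )
    banner.paste(page_image, (0, BORDER_HEIGHT))
    draw = ImageDraw.Draw(banner)
    draw.text((10, 10), f"SourceFileName:{filename}", fill="black")
    return banner
```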
Now that our Blob storage container has citable images and our AI search index has multi-modal embeddings, users can start to ask questions about images.
Our RAG app has two primary question-asking flows: one for "single-turn" questions, and one for "multi-turn" questions, which incorporates as much conversation history as can fit in the context window. To simplify this explanation, we'll focus on the single-turn flow.
Our single-turn RAG on documents flow looks like this: the app computes an embedding for the user's question, runs a hybrid text-plus-vector search against the index, and sends the most relevant text chunks to the LLM along with the question.

Our single-turn RAG on documents-plus-images flow looks like this: the app additionally searches the imageEmbedding field, fetches the matching page images from Blob Storage, and sends both the text sources and the base-64 encoded images to the multimodal LLM, with a system prompt along these lines:
The documents contain text, graphs, tables and images.
Each image source has the file name in the top left corner of the image with coordinates (10,10) pixels and is in the format SourceFileName:<file_name>
Each text source starts in a new line and has the file name followed by colon and the actual information. Always include the source name from the image or text for each fact you use in the response in the format: [filename]
Answer the following question using only the data provided in the sources below.
The text and image source can be the same file name, don't use the image title when citing the image source, only use the file name as mentioned.
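To show how the sources described in the prompt might be assembled, here is a sketch of building the multimodal user message. The helper name and the exact source formatting are assumptions; the key point is that text sources become text content parts and page images become image_url parts in the same message:

```python
def build_user_content(question: str,
                       text_sources: list[str],
                       image_data_uris: list[str]) -> dict:
    """Combine the question, text sources ("filename: content" lines),
    and base-64 page images into one multimodal chat message."""
    content = [{"type": "text", "text": question}]
    # Each text source starts on a new line, as the system prompt requires.
    content.append({"type": "text",
                    "text": "Sources:\n" + "\n".join(text_sources)})
    for uri in image_data_uris:
        content.append({"type": "image_url", "image_url": {"url": uri}})
    return {"role": "user", "content": content}
```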
Now, users can ask questions where the answers are entirely contained in the images and get correct answers! This can be a great fit for diagram-heavy domains, like finance.
We have seen some really exciting uses of this multimodal RAG approach, but there is much to explore to improve the experience.
More file types: Our repository only implements image generation for PDFs, but developers are now ingesting many more formats, both image files like PNG and JPEG as well as non-image files like HTML, docx, etc. We'd love help from the community in bringing support for multimodal RAG to more file formats.
More selective embeddings: Our ingestion flow uploads images for *every* PDF page, but many pages lack meaningful visual content, and that can negatively affect vector search results. For example, we have found that if a PDF contains completely blank pages and the index stores embeddings for them, vector searches often retrieve those blank pages. Perhaps in the multimodal space, "blankness" is considered similar to everything. We've considered approaches like using a vision model during ingestion to decide whether an image is meaningful, or using that model to write a very descriptive caption for each image instead of storing the image embeddings themselves.
Image extraction: Another approach would be to extract images from document pages, and store each image separately. That would be helpful for documents where the pages contain multiple distinct images with different purposes, since then the LLM would be able to focus more on only the most relevant image.
We would love your help in experimenting with RAG on images, sharing how it works for your domain, and suggesting what we can improve. Head over to our repo and follow the steps for deploying with the optional GPT vision feature enabled!