
Large language models are now evolving beyond their early unimodal days, when they could only process one type of data input. Nowadays, interest is shifting toward multimodal large language models (MLLMs), with reports suggesting that the multimodal AI market will grow by 35% annually to $4.5 billion by 2028.
Multimodal AI refers to systems that can simultaneously process multiple types of data, such as text, images and videos, in an integrated and contextual way.
MLLMs can be used to analyze a technical report that combines text, images, charts and numerical data, and then summarize it. Other potential uses include image-to-text and text-to-image search, visual question-answering (VQA), image segmentation and labeling, and building domain-specific AI systems and MLLM agents.
How Are MLLMs Designed?
While multimodal models can have a variety of architectures, most frameworks consist of these elements (a minimal code sketch follows the list):
- Encoders: These components transform different types of data into vector embeddings that a machine can read. Multimodal models typically have one encoder for each type of data, whether that’s image, text or audio.
- Fusion mechanism: This combines all the various modalities so that the model can understand the broader context.
- Decoders: Finally, a decoder generates the output by parsing the feature vectors produced from the different types of data.
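To make these pieces concrete, here is a minimal, illustrative PyTorch sketch (not any particular product’s architecture) with one encoder per modality, a simple concatenation-based fusion step and a decoder. All names and dimensions are made up for the example.

```python
import torch
import torch.nn as nn

class ToyMultimodalModel(nn.Module):
    def __init__(self, vocab_size=1000, image_dim=2048, hidden=256, num_classes=10):
        super().__init__()
        # One encoder per modality: text tokens and precomputed image features.
        self.text_embed = nn.Embedding(vocab_size, hidden)
        self.text_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.image_encoder = nn.Linear(image_dim, hidden)
        # Fusion mechanism: concatenate both embeddings and project them together.
        self.fusion = nn.Linear(hidden * 2, hidden)
        # Decoder: turns the fused representation into the final output.
        self.decoder = nn.Linear(hidden, num_classes)

    def forward(self, text_tokens, image_features):
        text_emb = self.text_encoder(self.text_embed(text_tokens)).mean(dim=1)  # pool over tokens
        image_emb = self.image_encoder(image_features)
        fused = torch.relu(self.fusion(torch.cat([text_emb, image_emb], dim=-1)))
        return self.decoder(fused)

model = ToyMultimodalModel()
logits = model(torch.randint(0, 1000, (2, 16)), torch.randn(2, 2048))
print(logits.shape)  # torch.Size([2, 10])
```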
Top Multimodal Models
1. CLIP
OpenAI’s Contrastive Language-Image Pre-training (CLIP) is a multimodal vision-language model that handles image classification by linking text descriptions with corresponding images to output image labels.
It pairs a transformer-based text encoder with a Vision Transformer (ViT) image encoder, trained with a contrastive loss function that gives the model zero-shot classification capability. CLIP can be used for a variety of tasks, like annotating images for training data, image retrieval and generating captions from image inputs.
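As an example of zero-shot classification with CLIP, here is a short sketch using Hugging Face’s transformers library and the published openai/clip-vit-base-patch32 checkpoint; the image path and candidate labels are placeholders.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("photo.jpg")  # placeholder path
labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax turns them into label probabilities.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(labels, probs[0].tolist())))
```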
2. ImageBind
This multimodal model from Meta AI can combine six modalities: text, audio, video, depth, thermal and inertial measurement unit (IMU) data. It can generate output in any of these data types.
ImageBind pairs image data with each of the other modalities to train the model, and uses the InfoNCE loss for optimization. ImageBind could be used to create promotional videos with relevant audio from nothing more than a text prompt.
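To illustrate the InfoNCE objective mentioned above, here is a small, generic PyTorch sketch (not Meta’s implementation): embeddings of matched image/other-modality pairs are pulled together while mismatched pairs within the batch are pushed apart.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(image_emb, other_emb, temperature=0.07):
    # Normalize so the dot product becomes a cosine similarity.
    image_emb = F.normalize(image_emb, dim=-1)
    other_emb = F.normalize(other_emb, dim=-1)
    logits = image_emb @ other_emb.T / temperature  # (batch, batch) similarity matrix
    targets = torch.arange(len(image_emb))          # the i-th pair is the positive match
    # Symmetric loss over both directions (image -> other and other -> image).
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets)) / 2

# Toy usage with random 512-dimensional embeddings for a batch of 8 pairs.
loss = info_nce_loss(torch.randn(8, 512), torch.randn(8, 512))
print(loss.item())
```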
3. Flamingo
Designed for few-shot learning, this vision-language model from DeepMind processes text, image and video inputs to produce text outputs.
It features a frozen, pre-trained Normalizer-Free ResNet as the vision encoder, a Perceiver Resampler that converts visual features into a fixed set of visual tokens, and cross-attention layers that fuse textual and visual features. Flamingo can be used for image captioning, classification and VQA.
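The cross-attention fusion step can be sketched in a few lines of PyTorch. This illustrates the general pattern rather than DeepMind’s implementation, and the dimensions are arbitrary.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, text_feats, visual_tokens):
        # Queries come from the text; keys and values come from the visual tokens.
        attended, _ = self.attn(text_feats, visual_tokens, visual_tokens)
        return self.norm(text_feats + attended)  # residual connection keeps the text signal

fusion = CrossAttentionFusion()
out = fusion(torch.randn(1, 32, 512), torch.randn(1, 64, 512))
print(out.shape)  # torch.Size([1, 32, 512])
```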
4. GPT-4o
Also known as GPT-4 Omni, this multimodal generative pre-trained transformer model was released by OpenAI in 2024.
GPT-4o is a high-performance system that accepts text, audio, video and images as input and can generate any of these data types as output with lightning speed, averaging 320 milliseconds in response time to audio. It’s also a multilingual system that understands more than 50 languages. GPT-4o’s generated outputs can also be prompted to control subtler parameters, like tone, rhythm and emotion, making it a powerful tool for creating convincing content.
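For reference, a text-plus-image request to GPT-4o through the official OpenAI Python SDK looks roughly like the sketch below; it assumes an OPENAI_API_KEY environment variable, and the prompt and image URL are placeholders.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Summarize the chart in this image."},
                {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```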
5. Gen-2
This powerful text-to-video and image-to-video model from Runway uses a diffusion-based approach to turn text and image prompts into context-aware videos.
Gen-2 uses an autoencoder to map input video frames into a latent space, and MiDaS, a machine learning model that estimates depth, to capture each frame’s structure. CLIP encodes the frames to capture their content, and a cross-modal attention mechanism merges the structure and content representations. The system lets users generate video clips from image and text prompts, and the output can be stylized to match a reference image.
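To give a feel for the structure/content split described above, the sketch below pulls a depth map from MiDaS (structure) and an image embedding from CLIP (content) for a single frame. The model names are publicly released checkpoints, the frame path is a placeholder, and wiring these representations into a video diffusion model is well beyond this snippet.

```python
import numpy as np
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

frame = Image.open("frame.png").convert("RGB")  # placeholder path for one video frame

# Content representation: CLIP's image embedding for the frame.
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
with torch.no_grad():
    content = clip.get_image_features(**processor(images=frame, return_tensors="pt"))

# Structure representation: a depth estimate of the frame from MiDaS (loaded via torch.hub).
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas_transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform
midas.eval()
with torch.no_grad():
    depth = midas(midas_transform(np.array(frame)))

print(content.shape, depth.shape)  # content vector for the frame, plus its depth map
```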
6. Gemini
Google’s Gemini (formerly Bard) is a line of multimodal AI models capable of processing text, audio, video and images.
Gemini is available in three versions (Ultra, Pro and Nano) and features a transformer-based architecture. Its large context window allows it to process longer-form data, whether that’s long videos, text or code, making it a powerful tool across a variety of domains. To bolster safety and the quality of responses, Gemini uses supervised fine-tuning and reinforcement learning from human feedback (RLHF).
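A multimodal prompt to Gemini via the google-generativeai Python SDK looks roughly like the sketch below. It assumes you have an API key, and the model name and image path are placeholders that may need updating to whichever Gemini versions are currently served.

```python
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-1.5-pro")  # model name may change over time

# A single request can mix text and images in the same content list.
response = model.generate_content(
    ["Describe what is happening in this image.", Image.open("scene.jpg")]
)
print(response.text)
```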
7. Claude 3
This vision-language model from Anthropic comes in three variants: Haiku, Sonnet and Opus. According to the company, Opus is the top variant, demonstrating state-of-the-art performance on a variety of benchmarks spanning undergraduate-level knowledge, graduate-level expert reasoning and basic mathematics. Anthropic claims it shows near-human levels of comprehension and fluency on complex tasks.
Claude 3 features powerful recall capabilities and can process input sequences of more than 1 million tokens. When parsing research papers, it can understand photos, diagrams, charts and graphs in under three seconds, making it a powerful educational tool.
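Sending an image plus a question to Claude 3 through Anthropic’s Python SDK looks roughly like this sketch; it assumes an ANTHROPIC_API_KEY environment variable, and the image path and model version are placeholders.

```python
import base64

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
with open("chart.png", "rb") as f:  # placeholder path
    image_b64 = base64.b64encode(f.read()).decode()

message = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "source": {"type": "base64", "media_type": "image/png", "data": image_b64},
                },
                {"type": "text", "text": "What trend does this chart show?"},
            ],
        }
    ],
)
print(message.content[0].text)
```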
Conclusion
There’s a wealth of multimodal AI tools available, with most big tech companies now offering some kind of MLLM. Nevertheless, these larger models might not suit every situation, which paves the way for smaller multimodal AI systems, a topic we will cover in an upcoming post.