By John Gruber
Michael Nuñez, reporting for VentureBeat:
Apple researchers have developed new methods for training large language models on both text and images, enabling more powerful and flexible AI systems, in what could be a significant advance for artificial intelligence and for future Apple products.
The work, described in a research paper titled “MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training” that was quietly posted to arxiv.org this week, demonstrates how carefully combining different types of training data and model architectures can lead to state-of-the-art performance on a range of AI benchmarks.
“We demonstrate that for large-scale multimodal pre-training using a careful mix of image-caption, interleaved image-text, and text-only data is crucial for achieving state-of-the-art few-shot results across multiple benchmarks,” the researchers explain. By training models on a diverse dataset spanning visual and linguistic information, the MM1 models were able to excel at tasks like image captioning, visual question answering, and natural language inference.
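To make the data-mixing idea in that quote concrete, here is a minimal Python sketch of sampling pre-training examples from three source types by fixed weights. The source names and ratios are illustrative placeholders, not the paper's actual recipe or code.

```python
import random

# Hypothetical illustration of the mixing idea described above: draw each
# pre-training example from one of three source types according to fixed
# weights. Ratios are placeholders, not the exact mix reported in MM1.
MIXTURE_WEIGHTS = {
    "image_caption": 0.45,           # image + short caption pairs
    "interleaved_image_text": 0.45,  # documents with images embedded in text
    "text_only": 0.10,               # plain text, to preserve language ability
}

def sample_source(weights: dict[str, float], rng: random.Random) -> str:
    """Pick which data source the next training example comes from."""
    names = list(weights)
    return rng.choices(names, weights=[weights[n] for n in names], k=1)[0]

if __name__ == "__main__":
    rng = random.Random(0)
    counts = {name: 0 for name in MIXTURE_WEIGHTS}
    for _ in range(10_000):
        counts[sample_source(MIXTURE_WEIGHTS, rng)] += 1
    print(counts)  # counts come out roughly proportional to the weights
```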
See also: a summary thread on Twitter/X from team member Brandon McKinzie, a Hacker News thread, and a roundup of commentary from Techmeme. The consensus is that this paper is remarkably open with technical details.
★ Monday, 18 March 2024