Vespa Voice: Conversations on AI Search
Vespa Voice brings you real-world insights from the frontlines of AI-powered search and recommendation. Hear from Vespa engineers, customers, and industry analysts as they discuss practical challenges, emerging trends, and best practices in large-scale retrieval, ranking, and relevance—whether you’re building RAG systems or scaling your infrastructure.

-
Episode 3: From Minutes to Milliseconds: Vinted’s Migration to Vespa Explained
In this episode, we sit down with Ernestas Poškus, Engineering Manager at Vinted, Europe's largest platform for secondhand fashion. Ernestas shares how Vinted migrated from Elasticsearch to Vespa to overcome scalability challenges, reduce infrastructure costs, and cut both query and indexing latency, all while serving a catalog of over a billion items.
-
Episode 2: The Future of RAG: Search, Vision Language Models, and Agentic Workflows
In this episode, we sit down with Jo Kristian Bergum to explore the paradigm shift in information retrieval: from keyword matching to hybrid search, and now to vision-language models (VLMs). Watch to learn how VLMs and methods like ColPali are revolutionizing retrieval pipelines by enabling multimodal search across complex formats like PDFs, scanned documents, and visual-heavy manuals. From IKEA assembly guides to enterprise knowledge bases, Jo Kristian explains how "what you see is what you search" is unlocking new levels of context, accuracy, and efficiency.
-
Episode 1: The Rise of Vector Databases – Breaking Down GigaOm’s Sonar Report
In this episode, we chat with Whit Walters, Field CTO at GigaOm, and dive into the firm's latest Sonar Report on vector databases, exploring why Vespa.ai is recognized as both a Leader and Fast Mover in the space. We break down what sets Vespa apart, from its integrated architecture that runs data, indices, metadata, and machine learning inferences on the same physical nodes, to its performance at scale.