Vespa at Work:
Clarm: Agentic AI-powered Sales for Developers with Vespa Cloud
By treating retrieval as a first-class system built on Vespa Cloud, Clarm unified text search, vector similarity, structured filtering, and ranking into a single production-grade platform, eliminating the fragility and guesswork common in vector-only stacks.
At a Glance
- Agentic AI sales agents grounded in real GitHub data
- < 1 day to production on Vespa Cloud
- Millions of developer signals indexed and ranked
- Multi-thousand-query sessions lasting up to 22 hours
Introduction
Clarm accelerates revenue growth for open source software (OSS) companies by transforming signals from GitHub and developer communities into actionable leads, automated content, and AI-driven developer support. Founded to solve a pervasive problem in the OSS ecosystem, Clarm’s platform empowers founders and teams to focus on building products while AI manages growth and engagement workflows.
“Most OSS founders can’t get attention for their software initially. They’re so focused on building the product that marketing, SEO, and content creation get dropped. We built Clarm to automate all the growth work founders drop so they can focus on git commits,” explains Marcus, founder and CEO of Clarm.
The Challenge: Reliable Retrieval at Scale
Open source companies often have thriving communities with thousands of stars, forks, and issues on GitHub, but lack the infrastructure to turn those signals into predictable revenue. Clarm’s product relies on hybrid data sources spanning commits, website events, interactions, and developer sentiment. To power agentic AI workflows, where multi-step agents reason, act, and respond autonomously, the retrieval layer must be accurate, explainable, and real-time.
Key technical requirements included:
- Unified hybrid retrieval: Text, vector embeddings, and structured filters in a single pipeline.
- Grounded AI responses: Zero-hallucination answers with verifiable context.
- Scalability: Handle millions of GitHub data points and enrichment signals at low latency.
- Cost effectiveness: Startup-friendly pricing and resource usage.
Traditional search engines and vector databases fell short of these combined needs: they lacked real-time ranking or hybrid filtering, or could not reach production scale without stitching multiple systems together.
The Solution: Vespa’s Unified Search Architecture
Clarm selected Vespa Cloud as the backbone of its search and retrieval stack thanks to its production-grade, unified approach to text search, vector similarity, and structured filtering. Unlike siloed architectures, Vespa delivers all retrieval and ranking logic within one platform, dramatically simplifying development and operations.
Deployment Highlights
Rapid Time to Production
With Docker-based local development and Vespa Cloud provisioning, Clarm moved from initial prototype to production in less than a day.
Unified Retrieval Pipeline
A single query endpoint handles lexical search, semantic embeddings, and filtered ranking, eliminating the need for multiple separate services.
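As a rough illustration of what such a unified request looks like, a single Vespa query can combine a lexical `userQuery()` clause, a `nearestNeighbor` vector clause, and a structured filter in one YQL statement. The sketch below builds such a request body in Python; the field names (`embedding`, `stars`), the rank profile name, and the filter threshold are illustrative assumptions, not Clarm's actual schema.

```python
# Sketch: build one Vespa /search/ request body that mixes lexical search,
# vector similarity, and a structured filter in a single YQL query.
# Field names, rank profile, and thresholds are illustrative.

def build_hybrid_query(text: str, query_vector: list[float], min_stars: int) -> dict:
    yql = (
        "select * from sources * where "
        "({targetHits:100}nearestNeighbor(embedding, q) or userQuery()) "
        f"and stars >= {min_stars}"
    )
    return {
        "yql": yql,
        "query": text,                   # feeds userQuery() for lexical matching
        "input.query(q)": query_vector,  # query tensor for nearestNeighbor
        "ranking": "hybrid",             # a rank profile blending both signals
        "hits": 10,
    }

payload = build_hybrid_query("vector database", [0.1, 0.2, 0.3], min_stars=1000)
```

Because the lexical clause, vector clause, and filter live in one query tree, Vespa can plan and rank them together, which is what removes the need for separate keyword, vector, and filtering services.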
Native Ranking and Tensor Support
Built-in support for machine-learned ranking and tensor operations enabled sophisticated lead scoring without custom layers.
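Vespa rank profiles express scoring as math over match features and tensors. As a hedged sketch of the kind of expression lead scoring could use (the weights, feature names, and signal fields below are assumptions, not Clarm's actual model), the Python below mimics a first-phase expression that blends a BM25 text score with vector closeness and adds a tensor-style dot product over structured lead signals:

```python
# Sketch of a first-phase-style ranking expression, evaluated in Python to
# show the math a Vespa rank profile might encode, e.g.:
#   0.4 * bm25(title) + 0.6 * closeness(field, embedding)
#   + sum(attribute(signals) * query(weights))
# All weights and feature names here are illustrative assumptions.

def lead_score(bm25: float, closeness: float,
               signals: dict[str, float],
               weights: dict[str, float]) -> float:
    # Linear blend of text and vector relevance.
    relevance = 0.4 * bm25 + 0.6 * closeness
    # Tensor-style dot product over structured lead signals.
    signal_boost = sum(signals[k] * weights.get(k, 0.0) for k in signals)
    return relevance + signal_boost

score = lead_score(
    bm25=12.0,
    closeness=0.8,
    signals={"stars": 0.9, "recent_commits": 0.5},
    weights={"stars": 2.0, "recent_commits": 1.0},
)
# relevance = 0.4*12.0 + 0.6*0.8 = 5.28; signal_boost = 2.3; score = 7.58
```

In Vespa this arithmetic runs inside the engine next to the data, so no custom scoring layer is needed between retrieval and the application.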
Real-Time Indexing
GitHub events and enrichment signals are indexed instantly, keeping lead intelligence up-to-date and actionable.
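Real-time indexing in Vespa goes through its document API: each incoming event becomes a put against a `/document/v1` path and is searchable immediately after being fed. The sketch below shows what mapping a GitHub event to such a request could look like; the namespace (`clarm`), document type (`github_event`), and field names are hypothetical.

```python
# Sketch: turn a GitHub event into a Vespa /document/v1 put request.
# Namespace, document type, and field names are illustrative assumptions.

def to_feed_request(event: dict) -> tuple[str, dict]:
    # Keep the document id URL-safe by replacing "/" in the repo name.
    doc_id = f"{event['repo'].replace('/', '_')}:{event['id']}"
    path = f"/document/v1/clarm/github_event/docid/{doc_id}"
    body = {
        "fields": {
            "repo": event["repo"],
            "type": event["type"],   # e.g. "star", "issue", "fork"
            "actor": event["actor"],
            "timestamp": event["timestamp"],
        }
    }
    return path, body

path, body = to_feed_request(
    {"id": "42", "repo": "vespa-engine/vespa", "type": "star",
     "actor": "octocat", "timestamp": 1700000000}
)
```

Feeding each event as its own document put is what keeps the index continuously current, so lead intelligence reflects community activity as it happens rather than on a batch schedule.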
Scalable Cloud Infrastructure
Automatic scaling and high availability from Vespa Cloud freed Clarm’s engineering team to focus on product innovation instead of infrastructure management.
Results: Better Leads, Faster AI
Clarm’s integration with Vespa Cloud delivered tangible business impact early in its lifecycle:
- <1 Day to Production: Reduced infrastructure ramp-up from weeks to hours.
- Zero-Hallucination AI: Retrieval grounded in actual data, eliminating unreliable AI responses.
- High-Quality Lead Intelligence: Sophisticated ranking of GitHub signals for companies with tens of thousands of stars.
- Exceptional Support and Developer Experience: Easy schema design, local prototyping, and rapid iteration.
Clarm’s customers, from growing OSS projects to established developer tools platforms, are now converting community engagement signals into real business outcomes, with engagement depth and lead accuracy improving markedly.
What’s Next
Clarm is focused on expanding the reach and depth of its AI-powered growth infrastructure.
By proving that reliable retrieval is the foundation for trustworthy AI, Clarm is charting a new category in developer-centric sales infrastructure, helping software teams convert stars into sustainable revenue streams.
More Reading
Autoscaling with Vespa
This eBook explores how Vespa’s advanced autoscaling capabilities help organizations efficiently manage variable workloads by automatically adjusting resources to meet performance, cost, and scalability requirements.
Migrating from Elasticsearch to Vespa
In this webinar, guest speaker Ravindra Harige, long-time search expert and founder of Searchplex, will share how to make the move from Elasticsearch to Vespa and scale with confidence.