A founder we work with had been stuck on the same problem for two months. Their RAG retrieval recall was sitting at 58%. They had tried OpenAI's text-embedding-3-small, then text-embedding-3-large, then BGE-M3, then Voyage. Each swap added a couple of points, then the curve flattened. The team was about to start fine-tuning their own embedding model.
We told them to stop and add a reranker first. The number went from 58% to 81% in a single afternoon. The fine-tuning project was cancelled.
This is the moment most teams discover that the bottleneck was never the embedding model. It was the architecture choice of using a single embedding per chunk to begin with. Late interaction is the family of techniques that fixes it, and it is the one most teams skip because the name sounds intimidating.
What a single embedding per chunk loses
A bi-encoder (which is what every standard embedding model is) takes a chunk of text, compresses it into a single fixed-length vector, and stores it. At query time, the user's question is also compressed into a single vector, and similarity is computed between the two.
The compression is the problem. A 500-token chunk that mentions five different concepts gets averaged into one vector. The vector represents the chunk roughly, but it loses the distinction between "this chunk is mostly about X with a brief mention of Y" and "this chunk is mostly about Y with a brief mention of X." When the user query is about Y, both chunks look equally relevant by cosine distance, even though one is the right answer and the other is noise.
This is why every benchmark of "best embedding model" shows diminishing returns past a certain point. The embedding model is doing the best it can with the information bottleneck of a single vector. The architecture is the limit.
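To make the bottleneck concrete, here is a minimal sketch of the bi-encoder path. It assumes the sentence-transformers library and the all-MiniLM-L6-v2 checkpoint purely as stand-ins; any single-vector embedding model behaves the same way, because whatever a chunk contains, retrieval only ever sees one vector and one score per chunk.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

chunks = [
    "Mostly about GPU memory limits, with a brief mention of pricing.",
    "Mostly about pricing tiers, with a brief mention of GPU memory.",
]
query = "pricing breakdown"

# Every chunk, whatever it contains, is compressed to exactly one vector.
chunk_vecs = model.encode(chunks, normalize_embeddings=True)      # shape (2, 384)
query_vec = model.encode([query], normalize_embeddings=True)[0]   # shape (384,)

# One cosine score per chunk is all the signal retrieval gets to rank with.
for text, score in zip(chunks, chunk_vecs @ query_vec):
    print(f"{score:.3f}  {text}")
```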
How late interaction works
ColBERT (the original, 2020) keeps the per-token embeddings instead of pooling them into one vector. A 500-token chunk becomes 500 vectors. A 10-token query becomes 10 vectors. At scoring time, you compute the maximum similarity between each query token and any chunk token, then sum those max scores into the final relevance score.
The math is the same dot products you would do for any vector search. The difference is that "how well does this query match this chunk" is now a sum of "for each query token, what is the best matching chunk token," which preserves the fine-grained signal that pooling threw away.
In practice this looks like:
- Query "GPT-4o pricing breakdown" tokenizes to roughly 4 tokens.
- Each token finds its best match in the candidate chunk.
- "GPT-4o" matches the "GPT-4o" token in the chunk strongly.
- "pricing" matches "cost" or "price" or "pricing" in the chunk.
- "breakdown" matches "table" or "structure" or "breakdown."
- The summed max scores give a relevance number that reflects all four query terms, not just the average semantic similarity.
This is what breaks through the recall ceiling.
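The scoring itself is a few lines of linear algebra. Here is a minimal numpy sketch, with random vectors standing in for real per-token embeddings (the 128-dimensional size mirrors ColBERT's published configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

query_tokens = normalize(rng.normal(size=(4, 128)))    # 4 query tokens, 128-dim each
chunk_tokens = normalize(rng.normal(size=(500, 128)))  # 500 chunk tokens, 128-dim each

# Cosine similarity between every query token and every chunk token.
sim = query_tokens @ chunk_tokens.T                    # shape (4, 500)

# MaxSim: each query token keeps only its best-matching chunk token,
# and the relevance score is the sum of those per-token maxima.
per_token_best = sim.max(axis=1)                       # shape (4,)
relevance = per_token_best.sum()
print(per_token_best, relevance)
```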
Two patterns we ship
Sapota uses ColBERT in two patterns depending on the corpus size and latency budget.
Pattern 1: ColBERT as a reranker over bi-encoder retrieval. The first stage is a standard bi-encoder vector search returning the top 50 candidates. The second stage is ColBERT reranking those 50 down to the top 5. This is the pattern we use for most production deployments. The first stage is fast (millisecond range, scales to billions of vectors). The second stage is slow but only runs on 50 candidates, not the full index.
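A sketch of the two-stage pipeline, assuming the sentence-transformers library for the first stage and the RAGatouille wrapper around ColBERT for the second. In production the first stage would be a vector DB query rather than an in-memory dot product, but the shape of the pipeline is the same.

```python
import numpy as np
from sentence_transformers import SentenceTransformer
from ragatouille import RAGPretrainedModel

bi_encoder = SentenceTransformer("BAAI/bge-small-en-v1.5")
colbert = RAGPretrainedModel.from_pretrained("colbert-ir/colbertv2.0")

corpus = [
    "Chunk about GPT-4o pricing tiers and per-token costs.",
    "Chunk about GPU memory limits for fine-tuning runs.",
    "Chunk about retrieval evaluation and recall metrics.",
]

# Stage 1: fast single-vector search over the whole corpus, wide net.
corpus_vecs = bi_encoder.encode(corpus, normalize_embeddings=True)

def retrieve(query: str, k_first: int = 50, k_final: int = 5):
    query_vec = bi_encoder.encode([query], normalize_embeddings=True)[0]
    top_idx = np.argsort(corpus_vecs @ query_vec)[::-1][:k_first]
    candidates = [corpus[i] for i in top_idx]
    # Stage 2: ColBERT late-interaction scoring over just those candidates.
    return colbert.rerank(query=query, documents=candidates,
                          k=min(k_final, len(candidates)))
```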
Pattern 2: ColBERT as the only retriever. For corpora under a few million chunks, ColBERT can be the primary retriever using PLAID or similar index structures that make late-interaction search tractable at scale. Latency is higher than a bi-encoder (10x to 50x depending on index size), but recall is the highest of any retrieval method we have benchmarked.
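A sketch of the single-retriever setup, again assuming RAGatouille, which builds the PLAID-style index structures for you:

```python
from ragatouille import RAGPretrainedModel

colbert = RAGPretrainedModel.from_pretrained("colbert-ir/colbertv2.0")

# Build the index once; RAGatouille constructs the PLAID-style structures.
colbert.index(
    collection=corpus,            # same chunk texts as in the Pattern 1 sketch
    index_name="docs_colbert",    # hypothetical index name
    split_documents=False,        # assume chunks are already sized
)

# Late interaction is now the only retrieval stage.
for hit in colbert.search(query="GPT-4o pricing breakdown", k=5):
    print(f'{hit["score"]:.2f}  {hit["content"][:80]}')
```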
We default to Pattern 1 unless the corpus is small enough that Pattern 2 is feasible and the recall lift justifies it.
ColPali: the same trick for documents
ColPali extends the late-interaction idea to entire document pages treated as images. Instead of token-level embeddings of text, it uses patch-level embeddings of an image of the page (each page split into a 32x32 grid of patches). Query tokens match against image patches using the same MaxSim mechanism.
The implications:
- The OCR step disappears. The model sees the page as a vision-language model would.
- Layout, charts, tables, and figures are all preserved as part of the same embedding space.
- Cross-modal queries (text query against visual content) work natively.
The cost is storage (1024 vectors per page vs 1 vector per chunk) and indexing speed (vision encoder inference is GPU-bound). Binary quantization brings the storage cost down by 32x and the latency down by an order of magnitude, which is what makes ColPali production-viable.
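A minimal sketch of what binary quantization does to the per-page patch vectors, with random arrays standing in for real ColPali outputs (the 128-dimensional patch vectors are our assumption based on the published model; the mechanism is the point):

```python
import numpy as np

rng = np.random.default_rng(0)
page_patches = rng.normal(size=(1024, 128)).astype(np.float32)  # one page: 32x32 patches

print(page_patches.nbytes)                 # 524288 bytes per page in float32

# Keep only the sign of each dimension, packed 8 dimensions per byte.
binary_patches = np.packbits(page_patches > 0, axis=1)          # shape (1024, 16)
print(binary_patches.nbytes)               # 16384 bytes per page: a 32x reduction

# At query time, similarity becomes "how many sign bits agree",
# computed with XOR + popcount instead of a float dot product.
query_token = np.packbits(rng.normal(size=(1, 128)) > 0, axis=1)
hamming = np.unpackbits(np.bitwise_xor(binary_patches, query_token), axis=1).sum(axis=1)
agreement = 128 - hamming                  # higher = more similar, feeds the same MaxSim
print(agreement.max())
```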
For document-heavy corpora (research papers, financial filings, slide decks, regulatory submissions), ColPali outperforms both bi-encoder text RAG and CLIP-based multimodal RAG on published benchmarks. We use it when the corpus is genuinely visual and the budget supports the storage and GPU inference cost.
The cost conversation
Late interaction is not free. The honest trade-offs:
- Storage. ColBERT chunks store roughly 100x more vectors than bi-encoder chunks (one per token vs one per chunk). ColPali stores 1024 vectors per page. Plan for this in vector DB sizing; a rough sizing sketch follows this list.
- Index time. Building the index takes longer because there are more vectors to compute. Not catastrophic, usually a few hours for a million-chunk corpus on a single GPU.
- Query time. A reranker pattern adds 50ms to 200ms to the p50 latency, depending on how many candidates the reranker consumes. A pure-ColBERT setup adds more.
- Operational complexity. Vector databases that support multi-vector with MaxSim comparators are a smaller set than the ones that support standard cosine search. Qdrant supports it natively. Most others do not.
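A back-of-the-envelope sizing sketch for the storage line, with every per-chunk number an assumption to swap for your own corpus stats:

```python
chunks = 1_000_000

bi_encoder_bytes = chunks * 1024 * 4        # one 1024-dim float32 vector per chunk
colbert_bytes = chunks * 300 * 128 * 2      # ~300 tokens/chunk, 128-dim float16 per token

print(f"bi-encoder index: {bi_encoder_bytes / 1e9:.1f} GB")  # ~4.1 GB
print(f"ColBERT index:    {colbert_bytes / 1e9:.1f} GB")     # ~76.8 GB
```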
The number to weigh against this is the recall lift. In every audit Sapota has run where the team's recall plateaued in the 50% to 70% range, adding late-interaction reranking pushed it into the 80% to 90% range. That delta is the difference between "the AI is unreliable" and "the AI is the best search interface we have."
When NOT to add late interaction
A reranker is not always the answer. Skip it when:
- Recall is already above 90% (you are past the useful range; further gains come from prompt or generation work).
- The corpus is small enough (under 50,000 chunks) that a cross-encoder reranker (BGE-reranker, Jina-reranker) gets you most of the same lift with less infrastructure.
- The latency budget is a hard sub-100ms limit (a real-time conversational interface where the reranker overhead breaks the UX).
- The team does not have a vector DB that supports multi-vector with MaxSim, and migrating is not on the roadmap.
For most production RAG systems sitting at recall in the 60% to 75% range, late interaction is the next move. Cross-encoder rerankers are the lighter alternative if the team is not ready for the full ColBERT setup.
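A sketch of that lighter alternative, assuming the sentence-transformers CrossEncoder wrapper and the BAAI/bge-reranker-base checkpoint. There is no multi-vector storage to manage; the model scores each query-chunk pair directly at query time.

```python
from sentence_transformers import CrossEncoder

reranker = CrossEncoder("BAAI/bge-reranker-base")

def rerank(query: str, candidates: list[str], k: int = 5) -> list[str]:
    # One forward pass per (query, candidate) pair: fine over ~50 candidates,
    # far too slow to run over an entire corpus.
    scores = reranker.predict([(query, c) for c in candidates])
    ranked = sorted(zip(candidates, scores), key=lambda pair: pair[1], reverse=True)
    return [text for text, _ in ranked[:k]]
```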
What changed for the founder
The fix took half a day. We added a Jina-reranker stage between their existing Qdrant retrieval and the LLM call. Recall jumped from 58% to 81%. Faithfulness (because the LLM was now seeing better context) went from 0.79 to 0.93. The fine-tuning project was cancelled the same week.
The next conversation is whether to upgrade the cross-encoder to a full ColBERT setup, which would push recall another 4 to 6 points based on what we have seen on similar corpora. For their current scale and budget, the cross-encoder is the right floor. Full ColBERT is the v2.
If your retrieval has plateaued
If your team has been swapping embedding models and watching the recall curve flatten, the bottleneck is almost certainly the architecture, not the model. Sapota runs a one-week reranker integration engagement that adds the cross-encoder or ColBERT stage as a working PR plus a side-by-side eval against the current setup.
Reach out via the AI engineering page with the recall numbers you are seeing and the embedding models you have already tried. The diagnosis is usually the same conversation.