Embedding Models
Embedding models convert text, code, and other content into dense numerical vectors that capture semantic meaning, enabling similarity-based search and retrieval across agent memory systems. These vectors power retrieval-augmented generation (RAG) pipelines, semantic code search, and long-term memory retrieval by letting agents find conceptually similar content instead of relying on exact keyword matching.

The wrong embedding model breaks retrieval even when the vector database, chunking strategy, and query logic are all correct: a general-purpose model trained on web text does not represent identifiers like `ctx.WithDeadline` or `BATCH_FLUSH_INTERVAL` as meaningful concepts, so queries phrased in your codebase's own vocabulary return near-random neighbors, and the agent silently retrieves the wrong context on every call.
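The retrieval step described above can be sketched as a nearest-neighbor search over embedding vectors using cosine similarity. The snippet below is a minimal illustration, not a production implementation: the three-dimensional vectors and document labels are made-up stand-ins for real model outputs, which would typically have hundreds or thousands of dimensions and come from an embedding model's API.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, corpus, k=2):
    """Return the k corpus documents most similar to the query vector."""
    scored = sorted(
        ((cosine(query_vec, vec), doc) for doc, vec in corpus.items()),
        reverse=True,
    )
    return [doc for _, doc in scored[:k]]

# Toy 3-d "embeddings" — purely illustrative, not real model output.
corpus = {
    "ctx.WithDeadline usage": [0.9, 0.1, 0.0],
    "HTTP retry policy":      [0.1, 0.8, 0.2],
    "batch flush interval":   [0.0, 0.2, 0.9],
}

# A query embedding close to the first document's vector.
query = [0.85, 0.15, 0.05]
print(top_k(query, corpus, k=1))  # → ['ctx.WithDeadline usage']
```

The vocabulary-mismatch failure mode is exactly this mechanism going wrong: if the model maps `ctx.WithDeadline` and a query about context deadlines to unrelated regions of the vector space, the cosine scores carry no signal and `top_k` returns arbitrary documents.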