🔖 Protocol Highlight #1: The Librarian Method

Intent-Stated Tool Discovery via Embeddings

The Librarian Method solves the tool bloat problem that kills AI agent platforms at scale.


:bullseye: The Problem

You can't fit 25,000+ tool definitions in a context window. Keyword filters guess wrong. Category menus break natural-language interaction.

The insight: Only the LLM knows what the user actually wants.


:light_bulb: The Invention

The agent requests tools after understanding intent, like asking a librarian for books.

User: "Email John about the report"
Agent → request_tools("email sending")
Librarian → returns email_send + email_read
Agent → sends the email

The agent writes the search query. The Librarian finds semantically similar tools using embeddings.
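The lookup step can be sketched in a few lines. This is a hypothetical, self-contained illustration: a toy bag-of-words embedding stands in for a real model like nomic-embed-text, and the tool names/descriptions are invented for the example.

```python
import math

def embed(text: str) -> dict[str, float]:
    # Toy bag-of-words "embedding" so the sketch runs without a model;
    # a real Librarian would call an embedding model here.
    words = text.lower().split()
    return {w: float(words.count(w)) for w in set(words)}

def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    dot = sum(a[w] * b.get(w, 0.0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Tool descriptions are embedded once, up front (names are illustrative).
TOOLS = {
    "email_send": "send an email message to a recipient",
    "email_read": "read and search email messages in the inbox",
    "calendar_create": "create a calendar event or meeting",
}
TOOL_VECS = {name: embed(desc) for name, desc in TOOLS.items()}

def request_tools(query: str, top_k: int = 2) -> list[str]:
    """Return the tool names most similar to the agent's stated intent."""
    q = embed(query)
    ranked = sorted(TOOL_VECS, key=lambda n: cosine(q, TOOL_VECS[n]),
                    reverse=True)
    return ranked[:top_k]
```

With this stub, `request_tools("email sending")` surfaces the two email tools and leaves the calendar tool out of the context window, mirroring the flow above.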


:high_voltage: Key Features

  • Agent-stated intent: the LLM writes the search query, not the user
  • Local embeddings: zero-cost via Ollama (nomic-embed-text)
  • Domain expansion: request "email" → get email siblings automatically
  • Scale-independent: works identically with 50 tools or 50,000
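Domain expansion can be sketched simply. This assumes tool names follow a "domain_action" convention (e.g. email_send, email_read); that convention and the tool list are assumptions for the example, not a documented part of the protocol.

```python
# Invented tool inventory following an assumed domain_action naming scheme.
ALL_TOOLS = ["email_send", "email_read", "email_draft",
             "calendar_create", "calendar_delete"]

def expand_domains(matches: list[str]) -> list[str]:
    """Given directly matched tools, also return their domain siblings."""
    domains = {name.split("_", 1)[0] for name in matches}
    return [t for t in ALL_TOOLS if t.split("_", 1)[0] in domains]
```

So a single hit on email_send pulls in email_read and email_draft as well, which is how one "email" request yields the whole email family.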

:money_bag: Cost Impact

  • Only needed tools enter the context window
  • ~230 KB memory footprint for tool embeddings
  • Zero training required
  • Works with any embedding model (OpenAI, Google, local)
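The "any embedding model" point works because the Librarian only needs one capability from a provider. A hedged sketch of such a pluggable interface (the class and function names here are invented, not the project's actual API):

```python
from typing import Protocol

class Embedder(Protocol):
    # Any provider (OpenAI, Google, local Ollama) fits behind this one method.
    def embed(self, text: str) -> list[float]: ...

class LocalStubEmbedder:
    """Stand-in local embedder; a real deployment would call a model server."""
    def embed(self, text: str) -> list[float]:
        # Toy fixed-size vector so the sketch runs without any model.
        s = sum(ord(c) for c in text)
        return [float(s % p) for p in (97, 89, 83)]

def build_index(embedder: Embedder,
                descriptions: dict[str, str]) -> dict[str, list[float]]:
    # Embed each tool description once; swapping providers changes nothing else.
    return {name: embedder.embed(desc) for name, desc in descriptions.items()}
```

Swapping providers means swapping one object; the index-building and lookup code stays untouched.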

:open_book: Read More

Full Documentation →

MIT Licensed. Use it, fork it, improve it.


Questions? Drop them below!