Privacy & Data Storage
Indexed is designed with privacy as a core principle. This page explains exactly where your data goes and what network calls are made.
Everything Runs Locally
The entire indexing pipeline — parsing, chunking, embedding, and searching — runs on your machine. There is no Indexed server, no cloud service, no API calls to third parties.
The Embedding Model
Indexed uses all-MiniLM-L6-v2, a Sentence Transformers model that runs locally via PyTorch. The model is downloaded once (about 80 MB) and cached on your machine. After that, no network calls are made during embedding or search.
Your documents are never sent to HuggingFace, OpenAI, or any external embedding API.
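If you want to verify this yourself, the Hugging Face libraries that load the model honor standard offline environment variables. A hedged sketch — these variables belong to huggingface_hub and transformers, not to Indexed itself:

```shell
# Force the Hugging Face stack to fail loudly rather than touch the network.
# With these set, any attempt to fetch a model remotely raises an error,
# and only the locally cached copy is used.
export HF_HUB_OFFLINE=1
export TRANSFORMERS_OFFLINE=1
```

Run any Indexed command with these set: if embedding and search still work, you have confirmed the pipeline is fully local.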
Search Is Local
When you run indexed index search or an AI assistant searches via MCP, the query is embedded locally using the same model, and FAISS performs the nearest-neighbor lookup on your local index. Nothing leaves your machine.
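Conceptually, the local search step looks like this — a brute-force NumPy sketch of what FAISS does with an optimized index. The vectors below are random placeholders, not real embeddings; all-MiniLM-L6-v2 produces 384-dimensional vectors, which is the only detail carried over from the real pipeline:

```python
import numpy as np

# Stand-in for the stored chunk vectors (384 dims, like all-MiniLM-L6-v2).
rng = np.random.default_rng(0)
chunk_vectors = rng.normal(size=(100, 384)).astype("float32")
# Normalize so the inner product equals cosine similarity.
chunk_vectors /= np.linalg.norm(chunk_vectors, axis=1, keepdims=True)

# A query embedding that is a slightly perturbed copy of chunk 42.
query = chunk_vectors[42] + 0.01 * rng.normal(size=384).astype("float32")
query /= np.linalg.norm(query)

# FAISS does this lookup with an optimized index; brute force shown for clarity.
scores = chunk_vectors @ query
top5 = np.argsort(scores)[::-1][:5]
print(top5[0])  # chunk 42 is its own nearest neighbor
```

Everything in this loop — embedding the query, scoring, ranking — happens in local memory; there is no request to score or rank remotely.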
What Goes Over the Network
The only network calls Indexed makes are the one-time model download and requests to your own infrastructure:
| When | What | Where |
|---|---|---|
| Indexing Jira | Fetches issues via Jira REST API | Your Jira instance |
| Indexing Confluence | Fetches pages via Confluence REST API | Your Confluence instance |
| First run | Downloads the embedding model | HuggingFace (once, ~80 MB) |
That's it. No telemetry, no analytics, no usage reporting, no phoning home.
Air-gapped environments
After the initial model download, Indexed works completely offline. For air-gapped environments, you can pre-download the model and point to it via configuration.
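For example, one way to pre-fetch the model on a connected machine is the `huggingface-cli` tool that ships with huggingface_hub. The `model_path` key shown is illustrative, not a confirmed Indexed option — check the configuration reference for the actual key name:

```shell
# On a machine with internet access: download the model to a local directory.
huggingface-cli download sentence-transformers/all-MiniLM-L6-v2 \
  --local-dir ./all-MiniLM-L6-v2

# Copy ./all-MiniLM-L6-v2 to the air-gapped host, then point Indexed at it
# in ~/.indexed/config.toml (key name is hypothetical):
#   model_path = "/opt/models/all-MiniLM-L6-v2"
```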
Where Data Is Stored
All Indexed data lives in a single directory:
~/.indexed/
├── config.toml # Your configuration
└── data/
└── collections/
├── my-docs/
│ ├── manifest.json # Collection metadata
│ ├── documents.json # Document metadata
│ ├── chunks.json # Text chunks
│ └── index.faiss # Vector index
└── eng-tickets/
├── manifest.json
├── documents.json
├── chunks.json
└── index.faiss
What each file contains
manifest.json — collection type, source path/URL, creation timestamp, query parameters. No document content.
documents.json — metadata about each document (title, ID, source path). No full content.
chunks.json — the actual text chunks that were embedded. This contains excerpts of your documents.
index.faiss — the vector index. Contains numerical vectors, not readable text, but the vectors could theoretically be used to approximate the original text.
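As a quick illustration of why chunks.json deserves care, here is a sketch that writes and reads a minimal collection layout. The field names are hypothetical examples based on the descriptions above, not the actual Indexed schema:

```python
import json
import tempfile
from pathlib import Path

# Build a throwaway collection directory mirroring the layout above.
root = Path(tempfile.mkdtemp()) / "data" / "collections" / "my-docs"
root.mkdir(parents=True)

# chunks.json holds real document text; the fields here are
# illustrative, not the actual Indexed schema.
(root / "chunks.json").write_text(json.dumps([
    {"doc_id": "readme", "text": "Indexed runs entirely on your machine."},
]))

for chunk in json.loads((root / "chunks.json").read_text()):
    # Anyone with read access to the index directory can do exactly this.
    print(chunk["doc_id"], "->", chunk["text"])
```

The takeaway: the chunk store is plain JSON on disk, so filesystem permissions are the protection boundary.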
Protect your index directory
The chunks.json files contain actual text from your documents. Treat ~/.indexed/ with the same care as the source documents themselves.
Docker Considerations
When running Indexed in Docker, you typically mount ~/.indexed as a volume:
docker run -v ~/.indexed:/root/.indexed indexed mcp run --transport http
This means your data stays on your host machine. The container doesn't store any persistent data itself.
If you're running the Docker container on a shared server, ensure the volume mount has appropriate filesystem permissions.
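For instance, you might restrict the directory to the account that runs the container — standard POSIX permissions; the `indexed` user and group here are hypothetical placeholders for whatever service account you use:

```shell
# Allow only the owning account to read or traverse the index data.
chmod -R u=rwX,go= ~/.indexed

# If the container runs under a dedicated service account:
sudo chown -R indexed:indexed ~/.indexed
```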
No Telemetry
Indexed does not collect any telemetry, usage data, or crash reports. There is no opt-in or opt-out because there is nothing to opt into.