<?xml version="1.0"?>
<Container version="2">
<Name>mem0-aio</Name>
<Repository>jsonbored/mem0-aio:latest</Repository>
<Registry>https://hub.docker.com/r/jsonbored/mem0-aio</Registry>
<Network>bridge</Network>
<MyIP/>
<Shell>sh</Shell>
<Privileged>false</Privileged>
<Support>https://github.com/JSONbored/mem0-aio/issues</Support>
<Project>https://github.com/JSONbored/mem0-aio</Project>
<Overview>Mem0 OpenMemory gives LLM apps a persistent memory layer with a web UI, MCP/API server, and pluggable vector backends.

[b]All-In-One Unraid Edition[/b]
`mem0-aio` packages the OpenMemory UI, API, and embedded Qdrant into one Unraid-first container so beginners can get a working first boot without wiring a separate vector database.

[b]Quick Install (Beginners)[/b]
1. Install the template and leave the default appdata path in place.
2. For the fastest hosted path, set [code]OPENAI_API_KEY[/code].
3. For the normal local-LLM homelab path, set [code]OLLAMA_BASE_URL[/code] to your external Ollama root URL and set the Ollama chat/embed models you actually have pulled.
4. Start the container and open the Web UI. The wrapper will auto-default to Ollama when [code]OLLAMA_BASE_URL[/code] is set and no explicit provider overrides are supplied.
5. Only expose the direct API port if you need external MCP/API clients.
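
As a concrete sketch of step 3, a typical local-Ollama setup might set the following variables (example host and model names shown; substitute the endpoint and models your Ollama server actually serves):
[code]
OLLAMA_BASE_URL=http://192.168.1.10:11434
LLM_MODEL=llama3.1:latest
EMBEDDER_MODEL=nomic-embed-text
[/code]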

[b]Advanced View[/b]
- Leave storage defaults in place for the bundled SQLite + Qdrant path.
- If you use external vector storage, configure exactly one backend. The wrapper rejects competing selectors such as Redis plus PGVector plus external Qdrant because OpenMemory can only initialize one vector store at a time.
- Provider overrides are available for native Ollama, OpenAI-compatible endpoints, hosted model providers, and supported external vector stores, but only set the fields your chosen path needs.
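
For example, selecting a single external Qdrant instance (hypothetical hostname shown) means setting only the Qdrant fields and leaving every other vector-store variable blank:
[code]
QDRANT_URL=http://qdrant.example.lan:6333
QDRANT_API_KEY=your-qdrant-key
[/code]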

[b]Important Notes[/b]
- This wrapper is meant to simplify first boot on Unraid, not remove the real complexity of model/provider credentials and external-service tuning.
- Actual memory generation still requires a valid LLM/embedder configuration. Leaving [code]OPENAI_API_KEY[/code] blank is fine if you are using Ollama, but you still need to provide a reachable Ollama endpoint and valid model names.
- Native Ollama uses the root API URL such as [code]http://host.docker.internal:11434[/code], not an OpenAI-compatible [code]/v1[/code] path. If your reverse proxy adds auth and only exposes an OpenAI-style [code]/v1[/code] endpoint, use the advanced OpenAI-compatible base URL fields instead of the native Ollama provider path.
- The embedded Qdrant service is intentionally bundled for the beginner AIO path; it is skipped only when one valid external vector backend is configured.</Overview>
<Changes>### 2026-05-05
- Generated from CHANGELOG.md during release preparation. Do not edit manually.
- Harden apt package installs
- Use shared AIO build workflow
- Centralize release workflows
- Repin shared workflow ref
- Centralize workflow drift checks
- Repin caller workflows
- Pin catalog asset manifest
- Pin shared validation policy
- Use shared AIO workflows
- Sync workflow path filters
- Sync catalog publication state
- Pin publish helper workflow fix
- Pin next-wave aio-fleet workflows
- Pin Docker Hub primary workflow
- Pin control-plane workflow foundation
- Document central app test dependencies
- Expose manual publish targets
- Align mem0 icon sync target
- Sync shared validation and trunk cleanup
- Sync release shim path fallback
- Preserve inherited apt source scheme
- Sync shared repository boilerplate
- Move shared automation to aio-fleet
- Declare aio-fleet ownership
- Bump mem0 to v2.0.1
- Use shared derived repo validation
- Use shared release helper shim
- Remove legacy shared contract tests
- Repin workflow expectation
- Run shared metadata validation</Changes>
<Category>AI: Productivity: Tools:Utilities</Category>
<WebUI>http://[IP]:[PORT:3000]</WebUI>
<TemplateURL>https://raw.githubusercontent.com/JSONbored/awesome-unraid/main/mem0-aio.xml</TemplateURL>
<Icon>https://raw.githubusercontent.com/JSONbored/awesome-unraid/main/icons/mem0.jpeg</Icon>
<ExtraSearchTerms>ai memory mcp llm openmemory qdrant ollama anthropic groq deepseek embeddings vector database</ExtraSearchTerms>
<Requires>OpenMemory still needs a valid LLM/embedder setup for actual memory generation. For the easiest first boot, keep bundled storage defaults and either set OPENAI_API_KEY or point the app at a reachable native Ollama endpoint with chat and embedding models you already have pulled. External vector storage is advanced: configure exactly one backend.</Requires>
<ExtraParams/>
<PostArgs/>
<CPUset/>
<DateInstalled/>
<DonateText>Support JSONbored on GitHub Sponsors.</DonateText>
<DonateLink>https://github.com/sponsors/JSONbored</DonateLink>
<Description/>
<Networking>
<Mode>bridge</Mode>
<Publish>
<Port>
<HostPort>3000</HostPort>
<ContainerPort>3000</ContainerPort>
<Protocol>tcp</Protocol>
</Port>
<Port>
<HostPort>8765</HostPort>
<ContainerPort>8765</ContainerPort>
<Protocol>tcp</Protocol>
</Port>
</Publish>
</Networking>
<Data>
<Volume>
<HostDir>/mnt/user/appdata/mem0-aio/storage</HostDir>
<ContainerDir>/mem0/storage</ContainerDir>
<Mode>rw</Mode>
</Volume>
</Data>
<Config Name="Web UI Port" Target="3000" Default="3000" Mode="tcp" Description="Main OpenMemory web interface port." Type="Port" Display="always" Required="true" Mask="false">3000</Config>
<Config Name="API / MCP Port" Target="8765" Default="8765" Mode="tcp" Description="Direct API and MCP port. The UI works without exposing this externally, but external MCP clients may need it." Type="Port" Display="advanced" Required="false" Mask="false">8765</Config>
<Config Name="AppData - OpenMemory Storage" Target="/mem0/storage" Default="/mnt/user/appdata/mem0-aio/storage" Mode="rw" Description="Persistent storage for the SQLite database, embedded Qdrant data, and AIO state." Type="Path" Display="always" Required="true" Mask="false">/mnt/user/appdata/mem0-aio/storage</Config>
<Config Name="OpenAI API Key" Target="OPENAI_API_KEY" Default="" Mode="" Description="Optional hosted-provider quick start. Leave blank if you plan to use Ollama instead." Type="Variable" Display="always" Required="false" Mask="true"/>
<Config Name="Default User ID" Target="USER" Default="default_user" Mode="" Description="Default user namespace used by the API and MCP server." Type="Variable" Display="always" Required="false" Mask="false">default_user</Config>
<Config Name="Ollama Base URL" Target="OLLAMA_BASE_URL" Default="" Mode="" Description="Native external Ollama root URL. Example: http://host.docker.internal:11434 or http://192.168.1.10:11434. When set, the wrapper auto-defaults to Ollama if no explicit provider override is supplied." Type="Variable" Display="always" Required="false" Mask="false"/>
<Config Name="Ollama Chat Model" Target="LLM_MODEL" Default="" Mode="" Description="Optional Ollama chat model override. Set this to a model you already have pulled on your Ollama server. Example: llama3.1:latest or mistral:7b" Type="Variable" Display="always" Required="false" Mask="false"/>
<Config Name="Ollama Embed Model" Target="EMBEDDER_MODEL" Default="" Mode="" Description="Optional Ollama embedding model override. Set this to a model you already have pulled on your Ollama server. Example: nomic-embed-text" Type="Variable" Display="always" Required="false" Mask="false"/>
<Config Name="[UI] Browser API URL" Target="NEXT_PUBLIC_API_URL" Default="/openmemory-api" Mode="" Description="Default same-origin proxy path for the web UI. Leave this alone unless you intentionally want the browser to call a different API URL directly." Type="Variable" Display="advanced" Required="false" Mask="false">/openmemory-api</Config>
<Config Name="[UI] Public User ID" Target="NEXT_PUBLIC_USER_ID" Default="default_user" Mode="" Description="Default user ID shown in the browser client. Usually keep this aligned with USER." Type="Variable" Display="advanced" Required="false" Mask="false">default_user</Config>
<Config Name="[LLM] Provider" Target="LLM_PROVIDER" Default="auto|openai|anthropic|azure_openai|ollama|together|groq|litellm|mistralai|google_ai|aws_bedrock|gemini|deepseek|xai|lmstudio|langchain" Mode="" Description="Advanced provider override. Leave this on auto for the normal wrapper behavior: OpenAI when no Ollama URL is set, or Ollama when OLLAMA_BASE_URL is set." Type="Variable" Display="advanced" Required="false" Mask="false">auto</Config>
<Config Name="[LLM] API Key" Target="LLM_API_KEY" Default="" Mode="" Description="Optional provider-specific API key if you are not using OPENAI_API_KEY or if you want the LLM to use a different secret." Type="Variable" Display="advanced" Required="false" Mask="true"/>
<Config Name="[LLM] Base URL" Target="LLM_BASE_URL" Default="" Mode="" Description="Optional custom LLM API base URL for OpenAI-compatible or provider-specific endpoints. Use this for auth-protected OpenAI-style proxies, usually ending in /v1." Type="Variable" Display="advanced" Required="false" Mask="false"/>
<Config Name="[LLM] Ollama Host Override" Target="OLLAMA_HOST" Default="" Mode="" Description="Optional host override used by upstream Docker Ollama auto-detection. Example: host.docker.internal" Type="Variable" Display="advanced" Required="false" Mask="false"/>
<Config Name="[Embedder] Provider" Target="EMBEDDER_PROVIDER" Default="auto|openai|azure_openai|ollama|huggingface|vertexai|gemini|lmstudio|together|langchain|aws_bedrock" Mode="" Description="Advanced embedder provider override. Leave this on auto for the normal wrapper behavior so the embedder follows the matching OpenAI or Ollama path." Type="Variable" Display="advanced" Required="false" Mask="false">auto</Config>
<Config Name="[Embedder] API Key" Target="EMBEDDER_API_KEY" Default="" Mode="" Description="Optional provider-specific embedder API key." Type="Variable" Display="advanced" Required="false" Mask="true"/>
<Config Name="[Embedder] Base URL" Target="EMBEDDER_BASE_URL" Default="" Mode="" Description="Optional custom embedder API base URL. Use this for OpenAI-compatible or separate embedding endpoints, including auth-protected proxies that expose /v1." Type="Variable" Display="advanced" Required="false" Mask="false"/>
<Config Name="[Embedder] Dimensions Override" Target="EMBEDDER_DIMENSIONS" Default="" Mode="" Description="Optional explicit embedding dimension override for vector-store setup. Usually leave blank and let the wrapper auto-detect it, but set this if your embedder is custom or auto-detection cannot determine the correct size." Type="Variable" Display="advanced" Required="false" Mask="false"/>
<Config Name="[Storage] DATABASE_URL" Target="DATABASE_URL" Default="sqlite:////mem0/storage/openmemory.db" Mode="" Description="Advanced override for the API database connection. Leave the default SQLite path for normal AIO usage." Type="Variable" Display="advanced" Required="false" Mask="false">sqlite:////mem0/storage/openmemory.db</Config>
<Config Name="[Legacy] API_KEY Alias" Target="API_KEY" Default="" Mode="" Description="Legacy upstream alias used by older OpenMemory examples that referenced env:API_KEY. Leave blank unless you intentionally depend on that older pattern." Type="Variable" Display="advanced" Required="false" Mask="true"/>
<Config Name="[Vector Store:Qdrant] URL" Target="QDRANT_URL" Default="" Mode="" Description="External Qdrant URL, for example http://qdrant:6333 or https://qdrant.example.com. Use this instead of external QDRANT_HOST when auth or HTTPS is involved. Do not combine with Redis, PGVector, or other vector backends." Type="Variable" Display="advanced" Required="false" Mask="false"/>
<Config Name="[Vector Store:Qdrant] Host" Target="QDRANT_HOST" Default="127.0.0.1" Mode="" Description="Bundled Qdrant host by default. Leave as 127.0.0.1 for the AIO path; set to an external host only when Qdrant is the one external vector backend you want." Type="Variable" Display="advanced" Required="false" Mask="false">127.0.0.1</Config>
<Config Name="[Vector Store:Qdrant] Port" Target="QDRANT_PORT" Default="6333" Mode="" Description="Qdrant port for the bundled or selected external Qdrant backend." Type="Variable" Display="advanced" Required="false" Mask="false">6333</Config>
<Config Name="[Vector Store:Qdrant] API Key" Target="QDRANT_API_KEY" Default="" Mode="" Description="API key for authenticated external Qdrant. Requires QDRANT_URL or an external QDRANT_HOST; the bundled Qdrant service is not started with API-key auth." Type="Variable" Display="advanced" Required="false" Mask="true"/>
<Config Name="[Vector Store:Qdrant] Disable Telemetry" Target="QDRANT__TELEMETRY_DISABLED" Default="true|false" Mode="" Description="Controls Qdrant usage-statistics reporting for the bundled embedded vector store. Default is true so the AIO image stays privacy-first by default." Type="Variable" Display="advanced" Required="false" Mask="false">true</Config>
<Config Name="[Vector Store:Chroma] Host" Target="CHROMA_HOST" Default="" Mode="" Description="External Chroma host. Requires CHROMA_PORT and must be the only external vector backend configured." Type="Variable" Display="advanced" Required="false" Mask="false"/>
<Config Name="[Vector Store:Chroma] Port" Target="CHROMA_PORT" Default="" Mode="" Description="External Chroma port. Requires CHROMA_HOST." Type="Variable" Display="advanced" Required="false" Mask="false"/>
<Config Name="[Vector Store:Weaviate] Cluster URL" Target="WEAVIATE_CLUSTER_URL" Default="" Mode="" Description="External Weaviate cluster URL. Use this or WEAVIATE_HOST/WEAVIATE_PORT, and do not combine with another vector backend." Type="Variable" Display="advanced" Required="false" Mask="false"/>
<Config Name="[Vector Store:Weaviate] Host" Target="WEAVIATE_HOST" Default="" Mode="" Description="External Weaviate host. Requires WEAVIATE_PORT unless WEAVIATE_CLUSTER_URL is set." Type="Variable" Display="advanced" Required="false" Mask="false"/>
<Config Name="[Vector Store:Weaviate] Port" Target="WEAVIATE_PORT" Default="" Mode="" Description="External Weaviate port. Requires WEAVIATE_HOST unless WEAVIATE_CLUSTER_URL is set." Type="Variable" Display="advanced" Required="false" Mask="false"/>
<Config Name="[Vector Store:Redis] URL" Target="REDIS_URL" Default="" Mode="" Description="Redis-backed vector-store URL. Example: redis://:password@host:6379/0. This selects Redis as the vector backend; do not combine with PGVector or Qdrant." Type="Variable" Display="advanced" Required="false" Mask="true"/>
<Config Name="[Vector Store:PGVector] Host" Target="PG_HOST" Default="" Mode="" Description="External PostgreSQL/pgvector host. Requires PG_PORT and must be the only external vector backend configured." Type="Variable" Display="advanced" Required="false" Mask="false"/>
<Config Name="[Vector Store:PGVector] Port" Target="PG_PORT" Default="" Mode="" Description="External PostgreSQL/pgvector port. Required when any PG_* vector-store setting is used." Type="Variable" Display="advanced" Required="false" Mask="false"/>
<Config Name="[Vector Store:PGVector] Database" Target="PG_DB" Default="" Mode="" Description="External PostgreSQL database name for pgvector. Defaults to mem0 when PGVector is selected and this is blank." Type="Variable" Display="advanced" Required="false" Mask="false"/>
<Config Name="[Vector Store:PGVector] User" Target="PG_USER" Default="" Mode="" Description="External PostgreSQL username for pgvector. Defaults to mem0 when PGVector is selected and this is blank." Type="Variable" Display="advanced" Required="false" Mask="false"/>
<Config Name="[Vector Store:PGVector] Password" Target="PG_PASSWORD" Default="" Mode="" Description="External PostgreSQL password for pgvector. Defaults to mem0 when PGVector is selected and this is blank." Type="Variable" Display="advanced" Required="false" Mask="true"/>
<Config Name="[Vector Store:Milvus] Host" Target="MILVUS_HOST" Default="" Mode="" Description="Optional external Milvus host." Type="Variable" Display="advanced" Required="false" Mask="false"/>
<Config Name="[Vector Store:Milvus] Port" Target="MILVUS_PORT" Default="" Mode="" Description="Optional external Milvus port." Type="Variable" Display="advanced" Required="false" Mask="false"/>
<Config Name="[Vector Store:Milvus] Token" Target="MILVUS_TOKEN" Default="" Mode="" Description="Optional Milvus token for authenticated deployments." Type="Variable" Display="advanced" Required="false" Mask="true"/>
<Config Name="[Vector Store:Milvus] Database Name" Target="MILVUS_DB_NAME" Default="" Mode="" Description="Optional Milvus database name." Type="Variable" Display="advanced" Required="false" Mask="false"/>
<Config Name="[Vector Store:Elasticsearch] Host" Target="ELASTICSEARCH_HOST" Default="" Mode="" Description="Optional external Elasticsearch host." Type="Variable" Display="advanced" Required="false" Mask="false"/>
<Config Name="[Vector Store:Elasticsearch] Port" Target="ELASTICSEARCH_PORT" Default="" Mode="" Description="Optional external Elasticsearch port." Type="Variable" Display="advanced" Required="false" Mask="false"/>
<Config Name="[Vector Store:Elasticsearch] User" Target="ELASTICSEARCH_USER" Default="" Mode="" Description="Optional Elasticsearch username." Type="Variable" Display="advanced" Required="false" Mask="false"/>
<Config Name="[Vector Store:Elasticsearch] Password" Target="ELASTICSEARCH_PASSWORD" Default="" Mode="" Description="Optional Elasticsearch password." Type="Variable" Display="advanced" Required="false" Mask="true"/>
<Config Name="[Vector Store:Elasticsearch] Use SSL" Target="ELASTICSEARCH_USE_SSL" Default="true|false" Mode="" Description="Whether the Elasticsearch HTTP endpoint uses HTTPS. Default is true because modern Elasticsearch containers usually expose HTTPS." Type="Variable" Display="advanced" Required="false" Mask="false">true</Config>
<Config Name="[Vector Store:Elasticsearch] Verify Certs" Target="ELASTICSEARCH_VERIFY_CERTS" Default="true|false" Mode="" Description="Whether to verify Elasticsearch TLS certificates. Default is false because many self-hosted homelab deployments use self-signed certs." Type="Variable" Display="advanced" Required="false" Mask="false">false</Config>
<Config Name="[Vector Store:OpenSearch] Host" Target="OPENSEARCH_HOST" Default="" Mode="" Description="Optional external OpenSearch host." Type="Variable" Display="advanced" Required="false" Mask="false"/>
<Config Name="[Vector Store:OpenSearch] Port" Target="OPENSEARCH_PORT" Default="" Mode="" Description="Optional external OpenSearch port." Type="Variable" Display="advanced" Required="false" Mask="false"/>
<Config Name="[Vector Store:OpenSearch] User" Target="OPENSEARCH_USER" Default="" Mode="" Description="Optional OpenSearch username." Type="Variable" Display="advanced" Required="false" Mask="false"/>
<Config Name="[Vector Store:OpenSearch] Password" Target="OPENSEARCH_PASSWORD" Default="" Mode="" Description="Optional OpenSearch password." Type="Variable" Display="advanced" Required="false" Mask="true"/>
<Config Name="[Vector Store:OpenSearch] Use SSL" Target="OPENSEARCH_USE_SSL" Default="true|false" Mode="" Description="Whether the OpenSearch HTTP endpoint uses HTTPS. Default is true because modern OpenSearch containers usually expose HTTPS." Type="Variable" Display="advanced" Required="false" Mask="false">true</Config>
<Config Name="[Vector Store:OpenSearch] Verify Certs" Target="OPENSEARCH_VERIFY_CERTS" Default="true|false" Mode="" Description="Whether to verify OpenSearch TLS certificates. Leave false for self-signed homelab deployments." Type="Variable" Display="advanced" Required="false" Mask="false">false</Config>
<Config Name="[Vector Store:FAISS] Path" Target="FAISS_PATH" Default="" Mode="" Description="Optional FAISS storage path inside the container or a mounted host path." Type="Variable" Display="advanced" Required="false" Mask="false"/>
<Config Name="[Backup Export] User ID Filter" Target="EXPORT_USER_ID" Default="" Mode="" Description="Optional one-shot filter for the bundled upstream export helper script. Only set this if you intentionally run the export tooling inside the container." Type="Variable" Display="advanced" Required="false" Mask="false"/>
<Config Name="[Backup Export] App ID Filter" Target="EXPORT_APP_ID" Default="" Mode="" Description="Optional one-shot filter for the bundled upstream export helper script." Type="Variable" Display="advanced" Required="false" Mask="false"/>
<Config Name="[Backup Export] From Date (epoch)" Target="EXPORT_FROM_DATE" Default="" Mode="" Description="Optional one-shot export filter. Epoch timestamp in seconds." Type="Variable" Display="advanced" Required="false" Mask="false"/>
<Config Name="[Backup Export] To Date (epoch)" Target="EXPORT_TO_DATE" Default="" Mode="" Description="Optional one-shot export filter. Epoch timestamp in seconds." Type="Variable" Display="advanced" Required="false" Mask="false"/>
</Container>