A .NET Aspire application that imports your entire ChatGPT conversation history and makes it available as RAG (Retrieval-Augmented Generation) memory for any LLM.
- Import your ChatGPT conversation history — upload your full export; supports multi-file exports (ChatGPT now splits large exports across several JSON files) and histories of thousands of conversations
- Project support — conversations organised into ChatGPT projects are imported into project folders in MattGPT, with collapsible folder navigation and user-assignable names
- RAG memory — conversations are summarised, embedded, and indexed so any LLM can retrieve relevant context from your history when you chat
- Multi-turn chat — full conversation support with rolling summaries that keep context coherent across long sessions, even with small-context local models
- Persistent chat sessions — conversations in MattGPT are saved to MongoDB and embedded in the vector store, so they become part of your searchable memory over time
- Chat history sidebar — browse and resume past chat sessions, and read any imported conversation in a read-only viewer directly in the app
- Clickable source citations — each LLM response shows which past conversations were used as context; click any source to read the original conversation
- Configurable RAG modes — choose between full automatic injection (`WithPrompt`), hybrid auto-RAG + tool-calling (`Auto`), or tool-only retrieval (`ToolsOnly`)
- Multiple LLM providers — works with Ollama (local, default), Foundry Local, Azure OpenAI, OpenAI, Anthropic Claude, and Google Gemini
- Multiple vector stores — supports Qdrant (default), Azure AI Search, Pinecone, and Weaviate
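As a rough illustration of how these options combine, a configuration might pair a RAG mode with an LLM provider and vector store. The key names below are hypothetical, shown only to convey the shape of the choices — see the Configuration doc for the actual settings schema:

```json
{
  // Hypothetical settings shape, for illustration only
  "Rag": { "Mode": "Auto" },                              // WithPrompt | Auto | ToolsOnly
  "Llm": { "Provider": "Ollama", "Model": "llama3.2" },   // or AzureOpenAI, OpenAI, Anthropic, Gemini, FoundryLocal
  "Embeddings": { "Model": "nomic-embed-text" },
  "VectorStore": { "Provider": "Qdrant" }                 // or AzureAISearch, Pinecone, Weaviate
}
```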
Enable users to import their entire ChatGPT conversation history into a format that can be used as RAG memory for any Large Language Model. This allows users to leverage their past interactions with ChatGPT to enhance responses from other LLMs.
A .NET Aspire application consisting of:
- Blazor web frontend — upload UI and chat UI
- ASP.NET Core API — parsing, background processing, RAG pipeline
- MongoDB — stores full conversation data and metadata
- Vector store — stores embeddings for semantic search (Qdrant, Azure AI Search, Pinecone, or Weaviate)
- LLM — config-driven: Ollama, Foundry Local, Azure OpenAI, OpenAI, Anthropic, or Gemini
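The retrieval path through this stack can be sketched in a few lines of Python. This is purely conceptual — the real pipeline runs in .NET against a proper embedding model (e.g. nomic-embed-text) and a vector store such as Qdrant; the `embed` function here is a toy bag-of-letters stand-in:

```python
from dataclasses import dataclass
import math

@dataclass
class Memory:
    text: str
    embedding: list[float]

def embed(text: str) -> list[float]:
    # Toy stand-in for a real embedding model: a normalised
    # letter-frequency vector, purely for demonstration.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

def index(conversations: list[str]) -> list[Memory]:
    # "Import": embed each past conversation summary into the store.
    return [Memory(c, embed(c)) for c in conversations]

def retrieve(store: list[Memory], query: str, k: int = 2) -> list[str]:
    # Rank stored memories by similarity to the user's prompt and
    # return the top-k as context to inject into the LLM call.
    q = embed(query)
    ranked = sorted(store, key=lambda m: cosine(m.embedding, q), reverse=True)
    return [m.text for m in ranked[:k]]

store = index([
    "Discussed Blazor component lifecycle and rendering.",
    "Planned a hiking trip itinerary.",
    "Debugged MongoDB connection strings in Aspire.",
])
context = retrieve(store, "How do I fix my MongoDB connection?")
```

The same shape applies regardless of which embedding model or vector store is configured: import-time summarise-and-embed, then query-time similarity search to build context.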
```shell
# Prerequisites: .NET 10 SDK, Docker Desktop
git clone https://github.com/matt-goldman/MattGPT.git
cd MattGPT/src/MattGPT.Web && npm install && cd ../..

# Pull default Ollama models
ollama pull llama3.2
ollama pull nomic-embed-text

# Start everything via Aspire
cd src/MattGPT.AppHost
dotnet run
```

The Aspire dashboard URL will be printed to the console. The web UI URL is also shown on startup.
| Document | Description |
|---|---|
| Getting Started | Prerequisites, setup, and first run |
| Configuration | LLM, vector store, and RAG settings |
| Integrations | Setup guides for each LLM and vector store provider |
| Usage | Uploading conversations, using the chat UI, API endpoints |
| Troubleshooting | Common issues, performance notes |
Planning and issue tracking live in the `docs/` folder — `docs/Backlog/index.md` is the system of record. This file-based backlog exists so that AI coding agents (both online and offline) can pick up work autonomously. Completed issues are archived in `docs/Backlog/Done/` with full context of what was built and why.
If you'd like to suggest a feature or report a bug, please open a GitHub Issue. Approved items will be promoted into the docs backlog for implementation.
- Runtime configuration wizard — a guided setup experience so new users can configure the LLM provider and model without editing config files (see issue #14).
- Advanced parsing: sentiment analysis, topic modelling, entity extraction.
- Import of other file types (images, PDFs) shared in conversations.
- Integration with LM Studio, OpenWebUI, and other LLM tools.
- Automatic project reconstruction in other LLMs from imported history.