Planera is a chat-first analytics workspace for structured data. You sign in, upload CSV or JSON files, ask a business question, and review both the answer and the execution trail behind it.
The product is designed to feel closer to an analytics copilot than a generic "chat with files" tool:
- uploads are scoped to the signed-in user
- analysis runs through a bounded plan/query/execute loop
- answers come back with trace, SQL, result previews, and validation context
- conversation history and inspection snapshots are persisted for the main chat flow
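The bounded plan/query/execute loop mentioned above can be sketched as follows. This is an illustrative sketch only: the function names and the step budget are assumptions, not Planera's actual implementation.

```python
# Illustrative sketch of a bounded plan/query/execute loop.
# plan(), execute(), is_sufficient(), and MAX_STEPS are hypothetical
# names, not Planera's real API.
MAX_STEPS = 5

def run_analysis(question, plan, execute, is_sufficient):
    """Iterate plan -> query -> execute until the gathered evidence
    answers the question or the step budget is exhausted."""
    evidence = []
    for _ in range(MAX_STEPS):
        query = plan(question, evidence)   # LLM proposes the next SQL query
        result = execute(query)            # run it against the uploaded data
        evidence.append((query, result))
        if is_sufficient(question, evidence):
            break
    return evidence
```

The hard step cap is what makes the loop "bounded": a confused planner can waste at most `MAX_STEPS` queries before the turn ends.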
Typical flow:
- Sign in from the UI
- Upload one or more CSV or JSON files
- Start a chat or continue an existing conversation
- Ask a question against the attached uploads
- Review the answer, then open the inspection panel for SQL, results, trace, and validation details
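The steps above map onto the HTTP API roughly as follows. This sketch only composes the requests (no network calls are made); the routes come from the API list in this README, while the base URL and payload field names are assumptions.

```python
# Sketch of one product turn as composed HTTP requests.
# Routes are from the API list; BASE and the payload field names are assumptions.
BASE = "http://localhost:8000"  # assumed local dev address

def request(method, path, token=None, **payload):
    """Compose a request description for the Planera API."""
    headers = {"Authorization": f"Bearer {token}"} if token else {}
    return {"method": method, "url": BASE + path, "headers": headers, "json": payload}

login = request("POST", "/auth/login", email="a@example.com", password="secret")
token = "<jwt-from-login-response>"
upload = request("POST", "/uploads", token=token)  # multipart CSV/JSON file in practice
turn = request("POST", "/chat", token=token,
               message="Which region grew fastest last quarter?")
```

After the `/chat` turn returns, the inspection panel corresponds to fetching `GET /inspections/{inspection_id}` with the ID referenced by the answer.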
Backend:
- FastAPI API
- SQLite for users, conversations, messages, and inspection snapshots
- DuckDB for uploaded data and query execution
- OpenAI or Gemini for the planning and answer-generation steps
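As a rough illustration of the SQLite side, a minimal schema for the four stores (users, conversations, messages, inspection snapshots) might look like this. The table and column names are guesses for illustration, not the app's real schema.

```python
import sqlite3

# Hypothetical minimal schema for Planera's metadata store.
# Table and column names are illustrative, not the actual ones.
SCHEMA = """
CREATE TABLE users         (id INTEGER PRIMARY KEY, email TEXT UNIQUE,
                            password_hash TEXT);
CREATE TABLE conversations (id INTEGER PRIMARY KEY,
                            user_id INTEGER REFERENCES users(id), title TEXT);
CREATE TABLE messages      (id INTEGER PRIMARY KEY,
                            conversation_id INTEGER REFERENCES conversations(id),
                            role TEXT, content TEXT);
CREATE TABLE inspections   (id INTEGER PRIMARY KEY,
                            message_id INTEGER REFERENCES messages(id),
                            sql TEXT, result_preview TEXT, trace TEXT,
                            validation TEXT);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)
```

Note the split of concerns: this metadata lives in SQLite, while the uploaded data itself is queried through DuckDB.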
Frontend:
- React + Vite workspace UI
- authenticated chat experience
- uploads management
- inspection drawer for execution details
Primary app flow:
- POST /auth/signup
- POST /auth/login
- GET /auth/me
- GET /uploads
- POST /uploads
- DELETE /uploads/{source_id}
- POST /chat
- GET /conversations
- GET /conversations/{id}
- GET /inspections/{inspection_id}
Debug-only helper:
POST /analyze
Notes:
`POST /chat` is the main product API and is what the React app uses for real analysis turns. `POST /analyze` is a deprecated debug path: it is stateless, still authenticated, and should not be treated as the normal integration path.
planera/
├── app/ # FastAPI backend
├── ui/ # React frontend
├── data/ # sample data, uploads, and DuckDB registry files
├── tests/
├── requirements.txt
├── docker-compose.yml
├── Dockerfile
└── README.md
cd planera
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
cp .env.example .env
uvicorn app.main:app --reload --host 0.0.0.0 --port 8000

In a second terminal:
cd ui
npm install
npm run dev

Open the local dev URL that Vite prints.
Start from .env.example for backend setup, then override additional runtime paths or secrets as needed.
Most important:
- `LLM_PROVIDER`
- `OPENAI_API_KEY` or `GEMINI_API_KEY`
- `OPENAI_MODEL` or `GEMINI_MODEL`
- `DATABASE_PATH`
- `JWT_SECRET_KEY`
- `UPLOAD_STORAGE_DIR`
- `REGISTRY_PATH`
- `CORS_ALLOW_ORIGINS`
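A minimal backend `.env` might look like the following. All values are placeholders; check `.env.example` for the authoritative keys and defaults.

```ini
# Placeholder values only -- copy from .env.example and adjust.
LLM_PROVIDER=openai
OPENAI_API_KEY=sk-your-key-here
OPENAI_MODEL=gpt-4o-mini
DATABASE_PATH=./data/planera.db
JWT_SECRET_KEY=change-me
UPLOAD_STORAGE_DIR=./data/uploads
REGISTRY_PATH=./data/registry.duckdb
CORS_ALLOW_ORIGINS=http://localhost:5173
```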
Frontend settings live in ui/.env.example.
Most important:
- `VITE_API_BASE_URL`
- `VITE_API_FALLBACK_MODE`
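A corresponding `ui/.env` sketch with placeholder values; the valid fallback-mode values should be taken from `ui/.env.example`:

```ini
# Placeholder values only -- copy from ui/.env.example and adjust.
VITE_API_BASE_URL=http://localhost:8000
VITE_API_FALLBACK_MODE=
```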
Backend:
source .venv/bin/activate
python -m pytest

Frontend:
cd ui
npm run check

To run both services together:
docker compose up --build

- The current product flow is upload-first: the UI expects attached CSV or JSON files before submitting an analysis turn.
- The repository still contains sample CRM-style data under data/, but the active app flow is centered on user uploads rather than a built-in warehouse connection.
- Uploaded sources are scoped to the signed-in user, but separate uploads are not automatically joined just because they share similarly named columns.
- A valid API key is required for whichever LLM provider is configured.
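Because uploads are never auto-joined, any cross-upload analysis needs an explicit join. As a self-contained stand-in illustration (using stdlib `sqlite3` rather than DuckDB, with hypothetical upload names):

```python
import sqlite3

# Stand-in for two separately uploaded sources (DuckDB in the real app;
# sqlite3 here so the sketch runs standalone). Table names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders    (customer_id INTEGER, amount REAL);
CREATE TABLE customers (customer_id INTEGER, region TEXT);
INSERT INTO orders    VALUES (1, 100.0), (1, 50.0), (2, 75.0);
INSERT INTO customers VALUES (1, 'EMEA'), (2, 'APAC');
""")

# Sharing a column name (customer_id) is not enough:
# the join condition has to be stated explicitly.
rows = conn.execute("""
    SELECT c.region, SUM(o.amount)
    FROM orders o JOIN customers c ON c.customer_id = o.customer_id
    GROUP BY c.region ORDER BY c.region
""").fetchall()
```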