Quick start guide to get StillMe up and running in minutes.
Prerequisites:

- Python 3.12+ installed
- API key from one of: DeepSeek, OpenAI, Claude, or Gemini
- (Optional) Docker for containerized setup
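To confirm the Python requirement before installing, you can run a small standalone check (this helper is mine, not part of StillMe):

```python
import sys

def meets_requirement(version_info, minimum=(3, 12)):
    """Return True when the interpreter satisfies StillMe's 3.12+ minimum."""
    return tuple(version_info[:2]) >= minimum

print(meets_requirement(sys.version_info))
```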
Option 1 - Local setup:

```bash
# Clone the repository
git clone https://github.com/anhmtk/StillMe-Learning-AI-System-RAG-Foundation.git
cd StillMe-Learning-AI-System-RAG-Foundation

# Create virtual environment
python -m venv venv

# Activate (Windows)
venv\Scripts\activate

# Activate (Linux/Mac)
source venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# Copy example env file
cp env.example .env

# Edit .env and add your API key (at minimum, one of):
# DEEPSEEK_API_KEY=sk-your-key-here
# OPENAI_API_KEY=sk-your-key-here
# ANTHROPIC_API_KEY=sk-ant-your-key-here
# GOOGLE_API_KEY=your-key-here
```

Run the services in two terminals.

**Terminal 1 - Backend:**
```bash
python -m uvicorn backend.api.main:app --host 0.0.0.0 --port 8000 --reload
```

**Terminal 2 - Dashboard:**

```bash
streamlit run dashboard.py --server.port 8501
```

Access:

- Dashboard: http://localhost:8501
- API: http://localhost:8000
- API Docs: http://localhost:8000/docs
That's it! You're ready to use StillMe.
Option 2 - Docker:

```bash
# Copy environment template
cp env.example .env
# Edit .env with your API keys

# Start all services
docker compose up -d

# Check logs
docker compose logs -f

# Stop services
docker compose down
```

Access:
- Dashboard: http://localhost:8501
- API: http://localhost:8000
Verify the installation:

```bash
curl http://localhost:8000/api/status
```

Should return:

```json
{
  "status": "healthy",
  "service": "stillme-backend"
}
```

Once it's healthy, try your first chat.

**Via Dashboard:**

- Open http://localhost:8501
- Type a question in the chat interface
- StillMe will respond with citations
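If the status call fails right after startup, the backend may still be initializing. This sketch polls the status endpoint until it reports healthy; only the endpoint URL and JSON shape from the verification step are taken from the docs, the helper names are mine:

```python
import json
import time
import urllib.request

STATUS_URL = "http://localhost:8000/api/status"  # endpoint from the verification step

def is_healthy(payload: dict) -> bool:
    """True when the backend returns the expected status document."""
    return payload.get("status") == "healthy"

def wait_for_backend(url: str = STATUS_URL, attempts: int = 10, delay: float = 2.0) -> bool:
    """Poll the status endpoint, returning True once the backend is ready."""
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if is_healthy(json.load(resp)):
                    return True
        except OSError:  # connection refused while the server is still starting
            pass
        time.sleep(delay)
    return False

# Usage (with Terminal 1 running): wait_for_backend() returns True once ready
```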
**Via API:**

```bash
curl -X POST http://localhost:8000/api/chat/rag \
  -H "Content-Type: application/json" \
  -d '{
    "message": "What is StillMe?",
    "use_rag": true
  }'
```

Dashboard features:

- RAG Interface: See how StillMe retrieves context
- Validation Panel: Monitor validation chain results
- Learning Metrics: Track StillMe's learning progress
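The curl call above can be reproduced from Python with only the standard library. The endpoint and request body come from the example; the helper names and the shape of the reply are my assumptions:

```python
import json
import urllib.request

CHAT_URL = "http://localhost:8000/api/chat/rag"  # endpoint from the curl example

def build_payload(message: str, use_rag: bool = True) -> bytes:
    """Encode the JSON body used by the chat endpoint."""
    return json.dumps({"message": message, "use_rag": use_rag}).encode("utf-8")

def ask_stillme(message: str, use_rag: bool = True) -> dict:
    """POST a chat message to the RAG endpoint and return the parsed JSON reply."""
    req = urllib.request.Request(
        CHAT_URL,
        data=build_payload(message, use_rag),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.load(resp)

# Usage (requires the backend from Terminal 1):
# reply = ask_stillme("What is StillMe?")
```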
Environment variables:

Required:

- `DEEPSEEK_API_KEY` or `OPENAI_API_KEY` - LLM API key

Optional:

- `ENABLE_VALIDATORS=true` - Enable validation chain (default: true)
- `ENABLE_ARXIV=true` - Enable arXiv fetching (default: true)
- `ENABLE_WIKIPEDIA=true` - Enable Wikipedia fetching (default: true)
- `OLLAMA_URL=http://localhost:11434` - For local Ollama
See `env.example` for the full list.
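As an illustration of how boolean flags like these are typically consumed, here is a sketch of a common pattern (this is generic practice, not StillMe's actual configuration loader):

```python
import os

def env_flag(name: str, default: bool = True) -> bool:
    """Interpret an environment variable like ENABLE_VALIDATORS as a boolean."""
    return os.getenv(name, str(default)).strip().lower() in {"1", "true", "yes"}

# Flags from the list above, with their documented defaults:
enable_validators = env_flag("ENABLE_VALIDATORS")
enable_arxiv = env_flag("ENABLE_ARXIV")
enable_wikipedia = env_flag("ENABLE_WIKIPEDIA")
ollama_url = os.getenv("OLLAMA_URL", "http://localhost:11434")
```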
Next steps:

- Read the User Guide: `docs/USER_GUIDE.md`
- Explore the Dashboard: try different questions
- Check the API docs: http://localhost:8000/docs
- Read the Contributing Guide: `CONTRIBUTING.md`
- Explore the architecture: `docs/ARCHITECTURE.md`
- Check the roadmap: `docs/PLATFORM_ENGINEERING_ROADMAP.md`
Troubleshooting:

**Backend reports a missing or invalid API key**

Solution:

- Check that `.env` exists and contains your API key
- Restart the backend after changing `.env`
- Verify the API key is correct
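The first two checks can be scripted. This helper reports whether `.env` sets a non-empty value for at least one supported key (my helper, not StillMe's own validation):

```python
import pathlib

SUPPORTED_KEYS = (
    "DEEPSEEK_API_KEY",
    "OPENAI_API_KEY",
    "ANTHROPIC_API_KEY",
    "GOOGLE_API_KEY",
)

def env_has_key(path: str = ".env", keys=SUPPORTED_KEYS) -> bool:
    """True when the env file exists and sets a non-empty supported API key."""
    env_file = pathlib.Path(path)
    if not env_file.is_file():
        return False
    for line in env_file.read_text().splitlines():
        name, _, value = line.partition("=")
        if name.strip() in keys and value.strip():
            return True
    return False
```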
**First startup is slow or the vector DB fails to initialize**

Solution:

- The first run may take time to initialize
- Check the logs for initialization progress
- Ensure the `data/vector_db` directory is writable
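To test the writability point directly (the path comes from the bullet above; the helper itself is not part of StillMe):

```python
import os

def dir_writable(path: str = "data/vector_db") -> bool:
    """Create the directory if needed and report whether it is writable."""
    try:
        os.makedirs(path, exist_ok=True)
    except OSError:
        return False
    return os.access(path, os.W_OK)

# Usage: dir_writable() should return True on a healthy setup
```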
**Port already in use**

Solution:

- Change the port in the command: `--port 8001`
- Or stop the existing service on that port
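Before switching to `--port 8001`, you can check whether the default ports are actually taken (a small helper using only the standard library):

```python
import socket

def port_free(port: int, host: str = "127.0.0.1") -> bool:
    """True when nothing is listening on host:port, so a server can bind it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as probe:
        probe.settimeout(1.0)
        return probe.connect_ex((host, port)) != 0

# Usage: check the two ports StillMe uses by default
# port_free(8000), port_free(8501)
```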
**Import errors or missing modules**

Solution:

- Ensure the virtual environment is activated
- Run `pip install -r requirements.txt` again
- Check your Python version with `python --version` (should be 3.12+)
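Whether the virtual environment is actually active can be confirmed from Python itself (this is a standard interpreter check, not StillMe-specific):

```python
import sys

def in_virtualenv() -> bool:
    """True inside a venv: sys.prefix points at the venv while
    sys.base_prefix still points at the base installation."""
    return sys.prefix != sys.base_prefix

print("virtualenv active:", in_virtualenv())
```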
Getting help:

- Documentation: check the `docs/` directory
- GitHub Issues: report bugs or ask questions
- Discussions: share ideas and get help
Welcome to StillMe! 🎉
Start exploring and see what StillMe can do. Remember: StillMe admits when it doesn't know - that's a feature, not a bug!