The system mimics a GIS expert’s reasoning by breaking down queries into logical steps, then executing them using code generated by CodeLLaMA-7B-Instruct, with language understanding powered by LLaMA-3.1-8B-it-Q4_K_M; both models run on mid-tier GPUs (we develop on an NVIDIA RTX 3050 laptop GPU with 4 GB VRAM and 16 GB RAM). With vector databases for context and entity resolution for geographic precision, the assistant makes spatial analysis conversational and accessible. Applications include flood risk assessment, urban planning, and disaster response, shifting spatial analysis from manual tools to intelligent, language-driven workflows.
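The sketch below is a minimal illustration of how such a two-model pipeline could be wired together with llama-cpp-python and ChromaDB. The model file paths, prompts, collection name, and generation parameters are assumptions for illustration only, not the project's actual implementation.

```python
# Illustrative sketch only: model paths, prompts, and parameters are assumptions.
import chromadb
from llama_cpp import Llama

# Language-understanding model: decomposes the user's query into logical GIS steps.
planner = Llama(
    model_path="./models/Meta-Llama-3.1-8B-Instruct/Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=20,  # partial GPU offload so the model fits a 4 GB VRAM laptop GPU
)

# Code model: turns the planned steps into executable GIS code.
coder = Llama(
    model_path="./models/CodeLlama-7B-Instruct/codellama-7b-instruct.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=20,
)

# Vector database holding prior analyses and domain notes used as extra context.
client = chromadb.PersistentClient(path="./chroma")
context_db = client.get_or_create_collection("gis_context")

query = "Which districts along the river face the highest flood risk?"
context = context_db.query(query_texts=[query], n_results=3)["documents"][0]

# Stage 1: decompose the natural-language query into numbered analysis steps.
plan = planner.create_chat_completion(messages=[
    {"role": "system", "content": "Break the GIS query into numbered analysis steps."},
    {"role": "user", "content": f"Context:\n{context}\n\nQuery: {query}"},
])["choices"][0]["message"]["content"]

# Stage 2: turn the plan into executable GIS code.
code = coder.create_chat_completion(messages=[
    {"role": "system", "content": "Write Python implementing the given analysis steps."},
    {"role": "user", "content": plan},
])["choices"][0]["message"]["content"]

print(code)  # generated GIS analysis code, ready for sandboxed execution
```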
Tech stack: Next.js, Express/NestJS, Apache Kafka, llama-cpp-python, Python, ChromaDB, Postgres, gRPC, Docker
Note: This project is currently under active development.
- Combines the power of GIS workflows with the ease of use of NLP in a single platform, making GIS workflows accessible to everyone
- Enhances threat perception and gives the user a clearer picture of what's going on (demonstrated in the workflow diagram above)
- The system remains accessible even when the internet is down, through a helpline number and faster-whisper for speech-to-text
- Easily extendable: code generation is constrained to a specific exposed functional interface, so new workflows can simply be added to the interface and the system will use them (see the sketch after this list)
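The "exposed functional interface" constraint can be pictured with a small registry pattern like the sketch below. The decorator, workflow function names, and execution harness here are hypothetical illustrations, not the project's actual interface.

```python
# Hypothetical illustration of a constrained functional interface for code generation.
from typing import Callable, Dict

EXPOSED_INTERFACE: Dict[str, Callable] = {}

def expose(fn: Callable) -> Callable:
    """Register a GIS workflow so that generated code is allowed to call it."""
    EXPOSED_INTERFACE[fn.__name__] = fn
    return fn

@expose
def buffer_layer(layer: str, distance_m: float) -> str:
    """Return the name of a new layer buffered by distance_m metres (stub)."""
    return f"{layer}_buffer_{int(distance_m)}m"

@expose
def flood_risk_zones(dem_layer: str, river_layer: str) -> str:
    """Derive flood-risk zones from an elevation layer and a river layer (stub)."""
    return f"risk_{dem_layer}_{river_layer}"

def run_generated(code: str) -> None:
    """Execute model-generated code in a namespace limited to the exposed interface."""
    # Only registered workflow functions are visible; anything else fails fast.
    exec(code, {"__builtins__": {}}, dict(EXPOSED_INTERFACE))

# Adding a new workflow is just another @expose-decorated function; the code model's
# prompt is built from the registered signatures, so it picks the new workflow up
# without further changes.
run_generated('flood_risk_zones("dem", buffer_layer("rivers", 500))')
```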
This project uses a monorepo structure. Please ensure you have:
- Python: 3.12.3
- Poetry: For Python dependency management
- NodeJS: JavaScript runtime (v20.18.0; future version support not guaranteed)
- Node Version Manager: To manage multiple versions of NodeJS
- pnpm: Faster, more disk-space-efficient alternative to npm; optional but recommended
- Make: For general project-related workflows
- Docker: For orchestration and deployment
- Clone the repo: `git clone https://github.com/B4S1C-Coder/AnalytIQ-GIS.git && cd AnalytIQ-GIS`
- Install JavaScript dependencies: `pnpm install`
- Install Python dependencies: `poetry install`
- Download the respective `.gguf` files and place them into the corresponding `./models/<model-name>` directory; it is recommended to use only the `.Q4_K_M.gguf` files (links in Model Credits)
This project would not have been possible without the following quantized models:
- Meta-Llama-3.1-8B-Instruct: `bartowski/Meta-Llama-3.1-8B-Instruct-GGUF`
- CodeLlama-7B-Instruct: `TheBloke/CodeLlama-7B-Instruct-GGUF`
- phi-2: `TheBloke/phi-2-GGUF`


