This project is a local Chrome extension plus Python backend that helps you study supported coding-platform problems from inside the browser.
What it does:
- detects when a supported problem page is open
- extracts the visible problem statement and starter code from the current tab
- sends that data to a local backend
- generates a draft solution
- shows the result inside the extension popup
- lets you copy the generated code and notes manually
Important limitations:
- This project is a study assistant. It does not auto-submit solutions or claim that hidden tests have passed.
- It no longer injects an in-page control panel; the coding site's page is left untouched, and the extension popup is the only UI.
- It does not include stealth, anti-detection bypass, or any attempt to evade platform safeguards.
Supported platforms:
- LeetCode
- HackerRank
- CodeChef
- Codeforces
- AtCoder
The extension uses a platform-adapter registry in `extension/platforms.js`, so more sites can be added cleanly.
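The real registry is JavaScript, but the idea is a simple hostname-to-adapter lookup. A conceptual sketch of the same pattern in Python (names and structure here are illustrative, not the actual `extension/platforms.js` API):

```python
# Conceptual sketch only: the real registry is JavaScript, in
# extension/platforms.js, and its actual API may differ.

ADAPTERS = {}

def register(hostname):
    """Decorator that files an adapter class under a hostname."""
    def wrap(cls):
        ADAPTERS[hostname] = cls
        return cls
    return wrap

@register("leetcode.com")
class LeetCodeAdapter:
    def extract_statement(self, page_text):
        # A real adapter would apply site-specific DOM selectors here.
        return page_text.strip()

def adapter_for(hostname):
    """Return the adapter class for a hostname, or None if unsupported."""
    return ADAPTERS.get(hostname)

print(adapter_for("leetcode.com"))
```

Adding a site then means registering one new adapter rather than editing shared scraping logic.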
Project layout:
- `extension/`: Chrome extension files
- `backend/`: local Python backend and tests
- `run_backend.cmd`: starts the backend using a known local Python interpreter
- `run_backend_tests.cmd`: runs backend verification tests
Setup:
- Start the backend: run `run_backend.cmd`
- Load the Chrome extension:
  - Open `chrome://extensions`
  - Enable Developer Mode
  - Click `Load unpacked`
  - Select the `extension` folder
Usage:
- Open a supported problem page.
- Click the extension icon.
- In the popup:
  - configure the backend and AI provider if needed
  - click `Solve Current Page`
  - review the output in the popup
  - use `Copy Code` or `Copy Notes`
Step-by-step first run:
- Double-click `run_backend.cmd`.
- In Chrome, open `chrome://extensions`.
- Turn on `Developer mode`.
- Click `Load unpacked` and choose the `extension` folder inside this project.
- Open a supported problem page.
- Click `Solve Current Page` in the popup.
- Copy the generated code manually into the site editor if you want to test it.
Responsible use:
- Keep submissions manual.
- Use it as a visible drafting tool, not a hidden automation layer.
- Avoid excessive request loops or unattended interaction on third-party platforms.
- If a site changes its DOM or editor integration, update selectors instead of trying to bypass platform behavior.
The backend can run in three modes:
- Local Fallback: free, offline, weaker starter drafts
- OpenAI Compatible: any hosted OpenAI-compatible API
- Ollama Local: free local model runtime on your machine
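As a rough illustration of how mode selection with graceful fallback might work (a sketch under assumptions: the function names, the `"ollama"` provider value, and the exact fallback behavior are illustrative, not the backend's actual code):

```python
import os

def call_remote_model(provider, problem_text):
    """Stub standing in for the real HTTP call (see the endpoint notes below)."""
    raise OSError("provider unreachable in this sketch")

def generate_draft(problem_text):
    # Prefer a configured provider; degrade to the offline fallback on failure.
    provider = os.environ.get("LEETBOT_PROVIDER", "local")
    if provider in ("openai", "ollama"):
        try:
            return call_remote_model(provider, problem_text)
        except OSError:
            pass  # network or model runtime unavailable: fall through
    return "# offline starter draft for: " + problem_text[:60]

print(generate_draft("Two Sum"))
```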
The popup now includes a Quick Setup selector for the common paths.
Ollama setup:
- Install Ollama.
- Pull a local model, for example: `ollama pull llama3.2`
- Keep Ollama running.
- In the popup choose:
  - Quick Setup: `Ollama Local (Free)`
  - Model: `llama3.2` or any model you have pulled
  - API URL: `http://127.0.0.1:11434/v1/chat/completions`
  - API Key: leave blank
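To confirm the endpoint is reachable before wiring it into the popup, you can send a minimal OpenAI-compatible request with nothing but the Python standard library (the URL and model below match the settings above):

```python
import json
import urllib.request

# Settings match the popup values above; local Ollama needs no API key.
URL = "http://127.0.0.1:11434/v1/chat/completions"
payload = {
    "model": "llama3.2",  # or any model you have pulled
    "messages": [{"role": "user", "content": "Reply with the single word: ok"}],
}
req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req, timeout=60) as resp:
    print(json.load(resp)["choices"][0]["message"]["content"])
```

If this prints a reply, the popup configuration above should work as well.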
You can also copy `provider_config.ollama.example.json` to `backend/provider_config.local.json`.
The backend now supports both of these OpenAI-compatible endpoint styles:
- `https://.../v1/responses`
- `https://.../v1/chat/completions`
That means you can use OpenAI-compatible providers beyond the OpenAI API itself as long as you supply:
- a supported model name
- the provider's base URL for `responses` or `chat/completions`
- an API key when the provider requires one
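The two styles differ mainly in the request body: `chat/completions` takes a `messages` list, while `responses` takes an `input` field. A hedged sketch of shaping a request for either style (detecting the style from the URL suffix is an assumption, not necessarily how this backend decides):

```python
import json
import urllib.request

def build_request(api_url, api_key, model, prompt):
    # Chat Completions expects "messages"; the Responses API expects "input".
    if api_url.rstrip("/").endswith("/responses"):
        payload = {"model": model, "input": prompt}
    else:
        payload = {"model": model,
                   "messages": [{"role": "user", "content": prompt}]}
    headers = {"Content-Type": "application/json"}
    if api_key:  # some compatible providers run without a key
        headers["Authorization"] = "Bearer " + api_key
    return urllib.request.Request(
        api_url, data=json.dumps(payload).encode("utf-8"), headers=headers
    )
```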
The backend can also be configured with environment variables set before startup:
- `LEETBOT_PROVIDER=openai`
- `LEETBOT_API_KEY=...`
- `LEETBOT_MODEL=...`
- `LEETBOT_API_URL=...`
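A sketch of how such variables might be read at startup (the variable names are the documented ones; the defaults here are illustrative):

```python
import os

# Variable names are the documented ones; the defaults are illustrative.
PROVIDER = os.environ.get("LEETBOT_PROVIDER", "local")
API_KEY = os.environ.get("LEETBOT_API_KEY", "")
MODEL = os.environ.get("LEETBOT_MODEL", "llama3.2")
API_URL = os.environ.get("LEETBOT_API_URL",
                         "http://127.0.0.1:11434/v1/chat/completions")
```

On Windows, set them in the same terminal before launching, for example `set LEETBOT_PROVIDER=openai` followed by `run_backend.cmd`.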
The backend uses only the Python standard library, so no package install is required.
Run `run_backend_tests.cmd`. This checks the provider fallback logic and the HTTP API round-trip.
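If you want to poke the running backend by hand, a round-trip looks roughly like this (the port and route below are placeholders; take the real address from the backend's startup output or source):

```python
import json
import urllib.request

# Placeholder address: the real port and route are NOT documented here;
# check the backend's startup output or source before using this.
BACKEND_URL = "http://127.0.0.1:8000/solve"

payload = {"statement": "Given nums and a target, return indices of two "
                        "numbers that sum to the target."}
req = urllib.request.Request(
    BACKEND_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req, timeout=120) as resp:
    print(json.load(resp))
```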
Design notes:
- DOM selectors can change over time, so the content script is structured around per-platform adapters for easier updates.
- The extension now uses a popup-only workflow to avoid interfering with coding site editors or navigation.
This project is available under the MIT License.
Before publishing to GitHub:
- do not commit `backend/provider_config.local.json`
- rotate any API key that has already been pasted into chat, screenshots, or local files you may upload
- keep only the example config files in the public repo
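One way to enforce the first point, assuming a standard git setup, is a `.gitignore` entry:

```gitignore
# local provider credentials stay out of version control
backend/provider_config.local.json
```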