This guide explains how to configure the different detection backends for both the light-object-detect API and lightNVR.
The light-object-detect API supports three detection backends:
- ONNX Runtime: Best performance with CUDA, excellent accuracy with YOLOv8
- TFLite: Lightweight, good for embedded systems, moderate accuracy
- OpenCV DNN: Good balance, supports many formats, decent performance
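As an illustration only (this helper is not part of either project), normalizing and validating a requested backend name against this set might look like:

```python
# Supported backends and the default ("onnx"), as described in this guide.
SUPPORTED_BACKENDS = {"onnx", "tflite", "opencv"}

def resolve_backend(name, default="onnx"):
    """Normalize a requested backend name, falling back to the default."""
    backend = (name or default).lower()
    if backend not in SUPPORTED_BACKENDS:
        raise ValueError(f"unsupported backend: {backend}")
    return backend
```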
Download TFLite models using the provided script:

```bash
cd light-object-detect

# Download SSD MobileNet V1 (recommended for embedded systems)
bash scripts/download_tflite_models.sh --model-type ssd_mobilenet_v1

# Or download SSD MobileNet V2 (more accurate)
bash scripts/download_tflite_models.sh --model-type ssd_mobilenet_v2

# Or download EfficientDet Lite0 (best accuracy for TFLite)
bash scripts/download_tflite_models.sh --model-type efficientdet_lite0
```

This will download the model to backends/tflite/models/ and create the labels file.
Download YOLOv8 ONNX models:

```bash
# Using the download script (requires wget)
bash scripts/download_models.sh --model-size n --output-dir models

# Or using Python (requires ultralytics package)
pip install ultralytics
python3 scripts/download_models.py --model-size n --output-dir models
```

Available model sizes:
- n (nano): ~6MB, fastest
- s (small): ~22MB, balanced
- m (medium): ~52MB, better accuracy
- l (large): ~87MB, high accuracy
- x (xlarge): ~136MB, best accuracy
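The downloaded file is named with the size letter (e.g., models/yolov8n.onnx, as used later in this guide). Assuming that naming pattern holds for every size, a small helper mapping a size letter to its expected filename might be:

```python
# Approximate on-disk sizes (MB) from the list above, keyed by size letter.
YOLOV8_SIZES = {
    "n": ("nano", 6), "s": ("small", 22), "m": ("medium", 52),
    "l": ("large", 87), "x": ("xlarge", 136),
}

def yolov8_filename(size):
    """Return the expected ONNX filename for a YOLOv8 size letter."""
    if size not in YOLOV8_SIZES:
        raise ValueError(f"unknown model size: {size}")
    return f"yolov8{size}.onnx"
```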
Download YOLO models for OpenCV:

```bash
cd models

# YOLOv4-tiny (smaller, faster)
wget https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v4_pre/yolov4-tiny.weights
wget https://raw.githubusercontent.com/AlexeyAB/darknet/master/cfg/yolov4-tiny.cfg

# Or YOLOv4 (larger, more accurate)
wget https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v3_optimal/yolov4.weights
wget https://raw.githubusercontent.com/AlexeyAB/darknet/master/cfg/yolov4.cfg
```

The API can be configured via environment variables or query parameters.
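As a sketch of the environment-variable side (illustrative only, not the API's actual startup code), the variables listed below could be read with defaults like this:

```python
import os

def read_backend_config(env=os.environ):
    """Read the backend selection and its confidence threshold, with defaults."""
    backend = env.get("BACKEND", "onnx")
    # Each backend has its own prefixed threshold variable, e.g.
    # TFLITE_CONFIDENCE_THRESHOLD; 0.5 is assumed here as the fallback.
    threshold = float(env.get(f"{backend.upper()}_CONFIDENCE_THRESHOLD", "0.5"))
    return backend, threshold
```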
For TFLite:

```bash
BACKEND=tflite
TFLITE_MODEL_PATH=backends/tflite/models/ssd_mobilenet_v1.tflite
TFLITE_LABELS_PATH=backends/tflite/models/coco_labels.txt
TFLITE_CONFIDENCE_THRESHOLD=0.5
```

For ONNX:

```bash
BACKEND=onnx
ONNX_MODEL_PATH=models/yolov8n.onnx
ONNX_LABELS_PATH=models/coco_labels.txt
ONNX_CONFIDENCE_THRESHOLD=0.5
ONNX_IOU_THRESHOLD=0.45
ONNX_MODEL_TYPE=yolov8
```

For OpenCV:

```bash
BACKEND=opencv
OPENCV_MODEL_PATH=models/yolov4-tiny.weights
OPENCV_CONFIG_PATH=models/yolov4-tiny.cfg
OPENCV_LABELS_PATH=models/coco_labels.txt
OPENCV_CONFIDENCE_THRESHOLD=0.5
OPENCV_NMS_THRESHOLD=0.4
```

You can override the backend per request:
```bash
# Use TFLite backend
curl -X POST 'http://localhost:9001/api/v1/detect?backend=tflite' \
  -F 'file=@image.jpg'

# Use ONNX backend
curl -X POST 'http://localhost:9001/api/v1/detect?backend=onnx' \
  -F 'file=@image.jpg'

# Use OpenCV backend
curl -X POST 'http://localhost:9001/api/v1/detect?backend=opencv' \
  -F 'file=@image.jpg'
```

lightNVR can be configured to use a specific backend for all API detection calls.
```ini
[api_detection]
url = http://localhost:9001/detect
backend = tflite ; Options: onnx, tflite, opencv (default: onnx)
```

You can also override the detection endpoint per stream using the web UI:
- Go to the Streams page
- Click Edit on a stream
- Enable AI Detection Recording
- Select API Detection (light-object-detect) from the dropdown
- Click Override with Custom Endpoint
- Enter your custom URL (e.g., http://192.168.1.100:9001/detect)
- Save the stream
The custom URL will be stored in the database and used for that specific stream.
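The `[api_detection]` section shown earlier can be read with Python's standard `configparser`; a minimal sketch (note the `;` inline-comment prefix, which must be configured explicitly):

```python
import configparser

# Parse a snippet matching the [api_detection] section above.
# inline_comment_prefixes is needed so "; Options: ..." is stripped
# from the backend value.
parser = configparser.ConfigParser(inline_comment_prefixes=(";",))
parser.read_string("""
[api_detection]
url = http://localhost:9001/detect
backend = tflite ; Options: onnx, tflite, opencv (default: onnx)
""")

url = parser.get("api_detection", "url")
backend = parser.get("api_detection", "backend", fallback="onnx")
```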
```bash
cd light-object-detect
pipenv run uvicorn main:app --host 0.0.0.0 --port 9001
```

```bash
cd /home/matteius/lightNVR
./bin/lightnvr
```

```bash
# Download a test image
wget http://images.cocodataset.org/val2017/000000397133.jpg -O test.jpg

# Test with TFLite backend
curl -X POST 'http://localhost:9001/api/v1/detect?backend=tflite' \
  -F 'file=@test.jpg' | jq

# Test with ONNX backend
curl -X POST 'http://localhost:9001/api/v1/detect?backend=onnx' \
  -F 'file=@test.jpg' | jq
```

```bash
# Check if lightNVR is calling the API
tail -f /var/lib/lightnvr/data/logs/lightnvr.log | grep "API Detection"
```

You should see log messages like:

```
API Detection: Using URL with parameters: http://localhost:9001/detect?backend=tflite&confidence_threshold=0.5&return_image=false (backend: tflite)
```
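The query string in that logged URL can be inspected with Python's standard `urllib.parse`, for example:

```python
from urllib.parse import urlsplit, parse_qs

# The URL as it appears in the log message above.
logged = ("http://localhost:9001/detect"
          "?backend=tflite&confidence_threshold=0.5&return_image=false")

# parse_qs returns a dict of lists, one entry per query parameter.
params = parse_qs(urlsplit(logged).query)
backend = params["backend"][0]
threshold = float(params["confidence_threshold"][0])
```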
| Backend | Speed | Accuracy | Memory | Best For |
|---|---|---|---|---|
| TFLite | Fast | Good | Low | Embedded systems, Raspberry Pi |
| ONNX | Very Fast* | Excellent | Medium | Desktop, servers with GPU |
| OpenCV | Medium | Good | Medium | General purpose, CPU-only |
*With CUDA GPU acceleration
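The table can be distilled into a simple heuristic; the function below is only an illustration of that decision, not part of either project:

```python
def recommend_backend(embedded, has_cuda_gpu):
    """Pick a backend following the comparison table above."""
    if embedded:
        return "tflite"   # low memory, good fit for Raspberry Pi-class devices
    if has_cuda_gpu:
        return "onnx"     # very fast with CUDA, excellent accuracy
    return "opencv"       # solid general-purpose CPU-only option
```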
```bash
# Download the model
cd light-object-detect
bash scripts/download_tflite_models.sh --model-type ssd_mobilenet_v1

# Verify the file exists
ls -lh backends/tflite/models/ssd_mobilenet_v1.tflite
```

```bash
# Download the model
cd light-object-detect
bash scripts/download_models.sh --model-size n --output-dir models

# Verify the file exists
ls -lh models/yolov8n.onnx
```
- Check if light-object-detect is running: `curl http://localhost:9001/health`
- Check the lightNVR config: `grep -A 2 "\[api_detection\]" config/lightnvr.ini`
- Check the lightNVR logs: `tail -f /var/lib/lightnvr/data/logs/lightnvr.log | grep "API Detection"`
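File-existence checks like the `ls` commands above can also be scripted; a minimal, hypothetical helper that verifies a model file exists and is non-empty:

```python
from pathlib import Path

def model_ready(path):
    """Return True if the model file exists and is non-empty."""
    p = Path(path)
    return p.is_file() and p.stat().st_size > 0
```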
Check the lightNVR logs to see which backend is being used:

```bash
tail -f /var/lib/lightnvr/data/logs/lightnvr.log | grep "backend:"
```

The log should show:

```
API Detection: Using URL with parameters: ... (backend: tflite)
```

If it's using the wrong backend, update config/lightnvr.ini:

```ini
[api_detection]
backend = tflite
```

Then restart lightNVR.
Install ONNX Runtime with CUDA support:

```bash
pip uninstall onnxruntime
pip install onnxruntime-gpu
```

Check if CUDA is available:

```bash
python3 -c "import onnxruntime as ort; print(ort.get_available_providers())"
```

You can set different confidence thresholds per backend in the .env file:
```bash
TFLITE_CONFIDENCE_THRESHOLD=0.6
ONNX_CONFIDENCE_THRESHOLD=0.5
OPENCV_CONFIDENCE_THRESHOLD=0.55
```

Or override via query parameter:

```bash
curl -X POST 'http://localhost:9001/api/v1/detect?backend=tflite&confidence_threshold=0.7' \
  -F 'file=@image.jpg'
```

- TFLite: Best for embedded systems (Raspberry Pi, low-power devices)
- ONNX: Best for servers with GPU acceleration
- OpenCV: Good general-purpose option for CPU-only systems
Choose the backend that best fits your hardware and accuracy requirements!