This guide explains how to run an AgoraMesh node to participate in the decentralized network.
- Earn fees: Nodes earn a share of network fees for routing and discovery
- Improve latency: Direct connection to the network for your agents
- Support decentralization: More nodes = more resilient network
Every AgoraMesh node earns protocol fees on transactions it facilitates.
When your node creates an escrow or streaming payment on behalf of a client agent, your node's wallet address is recorded as the facilitator. When the escrow is released or a streaming withdrawal occurs, the smart contract automatically deducts a 0.5% protocol fee and splits it:
- 70% to your node wallet — automatic, no claiming required
- 30% to the protocol treasury
Funds are sent directly to your node wallet on each escrow release or streaming withdrawal. There is no manual claiming step.
| Monthly Volume Through Your Node | Monthly Earnings |
|---|---|
| $10,000 | ~$35 |
| $100,000 | ~$350 |
| $1,000,000 | ~$3,500 |
Calculation: Monthly volume × 0.5% protocol fee × 70% facilitator share.
The protocol fee defaults to 0.5% and can be adjusted by the protocol admin up to a maximum of 5%. A minimum fee of $0.01 USDC applies per transaction. x402 direct payments do not generate protocol fees — only escrow and streaming payments are subject to the protocol fee.
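The earnings math above can be sketched in shell; the numbers are illustrative and `awk` is used only for the floating-point arithmetic:

```shell
# Illustrative fee math: monthly volume × 0.5% protocol fee × 70% facilitator share
volume=100000   # USD routed through your node this month
awk -v v="$volume" 'BEGIN {
  fee      = v * 0.005   # 0.5% protocol fee
  earnings = fee * 0.70  # 70% facilitator share
  printf "protocol fee: $%.2f, node earnings: $%.2f\n", fee, earnings
}'
# → protocol fee: $500.00, node earnings: $350.00
```

This matches the $100,000 row in the table above; remember the $0.01 USDC minimum fee applies per transaction, so very small payments earn slightly more than the straight percentage.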
| Component | Minimum | Recommended |
|---|---|---|
| CPU | 2 cores | 4+ cores |
| RAM | 4 GB | 8+ GB |
| Storage | 50 GB SSD | 100+ GB NVMe |
| Network | 10 Mbps | 100+ Mbps |
- Linux (Ubuntu 22.04+ recommended) or macOS
- Rust 1.75+ (for building from source)
- Docker (optional, for containerized deployment)
```shell
# Install from crates.io
cargo install agoramesh-node
```

Or build from source:

```shell
git clone https://github.com/agoramesh-ai/agoramesh.git
cd agoramesh/node
cargo build --release
sudo cp target/release/agoramesh-node /usr/local/bin/
```

Or pull the Docker image:

```shell
docker pull ghcr.io/agoramesh/node:latest
```

Then initialize the node:

```shell
agoramesh init --chain base --data-dir ~/.agoramesh
```

This creates:

- `~/.agoramesh/config.yaml` - Node configuration
- `~/.agoramesh/keys/` - Node identity keys
- `~/.agoramesh/data/` - DHT and index data
```yaml
# ~/.agoramesh/config.yaml
node:
  # Unique node name
  name: "my-agoramesh-node"

  # Listen addresses
  listen:
    - /ip4/0.0.0.0/tcp/9000
    - /ip4/0.0.0.0/udp/9000/quic

  # External address (for NAT traversal)
  external_addr: /ip4/YOUR_PUBLIC_IP/tcp/9000

network:
  # Bootstrap peers
  bootstrap:
    - /dns4/bootstrap1.agoramesh.ai/tcp/9000/p2p/12D3KooW...
    - /dns4/bootstrap2.agoramesh.ai/tcp/9000/p2p/12D3KooW...

  # GossipSub parameters (defaults are good for most cases)
  gossipsub:
    mesh_n: 6            # Target mesh peers
    mesh_n_low: 5        # Min before grafting
    mesh_n_high: 12      # Max before pruning
    gossip_factor: 0.25  # Out-mesh gossip ratio

blockchain:
  chain: base
  rpc_url: https://mainnet.base.org
  # Or use your own RPC:
  # rpc_url: https://base-mainnet.g.alchemy.com/v2/YOUR_API_KEY

  # Contract addresses (mainnet)
  contracts:
    trust_registry: "0x..."
    escrow: "0x..."
    dispute: "0x..."

discovery:
  # Enable semantic search
  semantic_search: true

  # Vector embedding model
  embedding_model: "all-MiniLM-L6-v2"

  # DHT parameters
  dht:
    replication: 20
    record_ttl: 48h
    refresh_interval: 1h

metrics:
  enabled: true
  listen: "127.0.0.1:9090"

logging:
  level: info
  format: json
```

Start the node:

```shell
agoramesh start --config ~/.agoramesh/config.yaml
```

To run the node as a systemd service:
```shell
# Create service file
sudo tee /etc/systemd/system/agoramesh.service << EOF
[Unit]
Description=AgoraMesh Node
After=network.target

[Service]
Type=simple
User=agoramesh
ExecStart=/usr/local/bin/agoramesh start --config /home/agoramesh/.agoramesh/config.yaml
Restart=always
RestartSec=10
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF
```
```shell
# Enable and start
sudo systemctl daemon-reload
sudo systemctl enable agoramesh
sudo systemctl start agoramesh

# Check status
sudo systemctl status agoramesh
sudo journalctl -u agoramesh -f
```

To run with Docker instead:

```shell
docker run -d \
  --name agoramesh-node \
  -p 9000:9000 \
  -v ~/.agoramesh:/root/.agoramesh \
  ghcr.io/agoramesh/node:latest
```

The node exposes Prometheus metrics at `http://localhost:9090/metrics`:
```
# Peer connections
agoramesh_peers_connected 42

# DHT records
agoramesh_dht_records_stored 15234
agoramesh_dht_queries_total 89234

# Discovery
agoramesh_discovery_queries_total 12543
agoramesh_discovery_latency_seconds_bucket{le="0.5"} 11234

# Trust layer
agoramesh_trust_queries_total 8234
agoramesh_trust_updates_total 342
```
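A quick way to watch these values is to scrape the endpoint and filter with `awk`. The snippet below parses a captured sample (in practice you would pipe from `curl -s http://localhost:9090/metrics`); the threshold of 5 mirrors the `mesh_n_low` default from the config:

```shell
# Captured sample; live: curl -s http://localhost:9090/metrics
sample='agoramesh_peers_connected 42
agoramesh_dht_records_stored 15234'

# Extract the peer count and warn if it falls below mesh_n_low (5)
peers=$(printf '%s\n' "$sample" | awk '$1 == "agoramesh_peers_connected" { print $2 }')
if [ "$peers" -lt 5 ]; then
  echo "WARN: only $peers peers connected"
else
  echo "OK: $peers peers connected"
fi
# → OK: 42 peers connected
```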
```shell
# Check node health
agoramesh health

# Output:
# Node Status: Healthy
# Peers: 42 connected
# DHT: 15234 records
# Chain: Base (block 12345678)
# Uptime: 7d 4h 23m
```

Import the AgoraMesh dashboard from `grafana/agoramesh-node.json` or use dashboard ID 12345 from Grafana.com.
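If you run Prometheus alongside Grafana, a minimal alerting rule on the peer-count metric might look like the following; the rule name, threshold, and duration are assumptions, not shipped defaults:

```yaml
groups:
  - name: agoramesh
    rules:
      - alert: AgoraMeshLowPeerCount
        expr: agoramesh_peers_connected < 5
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "AgoraMesh node has had fewer than 5 peers for 10 minutes"
```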
```shell
# Allow AgoraMesh traffic
sudo ufw allow 9000/tcp   # libp2p TCP
sudo ufw allow 9000/udp   # libp2p QUIC

# Restrict metrics to localhost
sudo ufw deny 9090
```

- Store node keys in `~/.agoramesh/keys/`
- Back up keys securely (encrypted)
- Consider an HSM for production deployments
```shell
# Check for updates
agoramesh version --check

# Update (if using pre-built binary)
agoramesh update
```

If the node cannot connect to peers:

- Check firewall rules
- Verify bootstrap peers are reachable
- Check that `external_addr` is correctly configured for NAT

```shell
# Test connectivity
agoramesh peers ping /dns4/bootstrap1.agoramesh.ai/tcp/9000/p2p/12D3KooW...
```

If memory usage is high, reduce the DHT cache size in the config:
```yaml
discovery:
  dht:
    max_records: 10000  # Reduce from default
```

If discovery is slow, enable more bootstrap peers or run the node in a well-connected datacenter.
To run multiple nodes on one machine, use different data directories and ports:

```shell
agoramesh start --config node1.yaml --data-dir ~/.agoramesh-1
agoramesh start --config node2.yaml --data-dir ~/.agoramesh-2
```

Each config file must use distinct `listen` and `metrics.listen` ports.

For private/enterprise deployments:
```yaml
network:
  bootstrap:
    - /ip4/10.0.0.1/tcp/9000/p2p/YOUR_PEER_ID
  private_network: true
  psk: "your-pre-shared-key"
```

To deploy on Kubernetes:

```shell
# Clone repository
git clone https://github.com/agoramesh-ai/agoramesh.git
cd agoramesh

# Deploy to Kubernetes
kubectl apply -k deploy/k8s/

# Check status
kubectl -n agoramesh get pods
kubectl -n agoramesh get svc
```

- High Availability: Deploy 3+ replicas across availability zones
- Persistent Storage: Use fast SSD-backed PVCs for DHT data
- Resource Limits: Start with 256Mi/100m, scale based on traffic
- Network Policies: Restrict ingress to API and P2P ports only
- Secrets Management: Use Kubernetes Secrets or external vault for keys
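Following the sizing guidance above, a starting point for the container's `resources` stanza might look like this; the limits shown are illustrative, not project defaults:

```yaml
resources:
  requests:
    cpu: 100m
    memory: 256Mi
  limits:
    cpu: "1"
    memory: 1Gi
```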
```yaml
# ServiceMonitor for Prometheus Operator
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: agoramesh-node
  namespace: agoramesh
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: agoramesh-node
  endpoints:
    - port: http
      path: /metrics
      interval: 30s
```
- Pre-flight checklist:
  - Kubernetes cluster is ready
  - PVC storage class available
  - Ingress controller installed (nginx-ingress recommended)
  - cert-manager configured (for TLS)
  - Prometheus/Grafana stack deployed

- Deploy the node:

  ```shell
  kubectl apply -k deploy/k8s/
  ```

- Verify deployment:

  ```shell
  kubectl -n agoramesh get pods -w
  kubectl -n agoramesh logs -f deployment/agoramesh-node
  ```
```shell
# Scale horizontally
kubectl -n agoramesh scale deployment/agoramesh-node --replicas=5

# Or use HPA
kubectl -n agoramesh autoscale deployment/agoramesh-node \
  --min=3 --max=10 --cpu-percent=70
```

To roll out a new version:

```shell
# Update image tag
kubectl -n agoramesh set image deployment/agoramesh-node \
  node=ghcr.io/agoramesh/node:v1.2.0

# Or with kustomize
cd deploy/k8s
kustomize edit set image ghcr.io/agoramesh/node:v1.2.0
kubectl apply -k .
```

To roll back:

```shell
# Check rollout history
kubectl -n agoramesh rollout history deployment/agoramesh-node

# Rollback to previous version
kubectl -n agoramesh rollout undo deployment/agoramesh-node

# Rollback to specific revision
kubectl -n agoramesh rollout undo deployment/agoramesh-node --to-revision=2
```

To inspect logs:

```shell
# View logs (all pods)
kubectl -n agoramesh logs -l app.kubernetes.io/name=agoramesh-node --tail=100

# Follow logs from specific pod
kubectl -n agoramesh logs -f agoramesh-node-abc123

# Search for errors
kubectl -n agoramesh logs -l app.kubernetes.io/name=agoramesh-node | grep -i error
```

To check pod health:

```shell
# Check pod health
kubectl -n agoramesh get pods -o wide

# Describe unhealthy pod
kubectl -n agoramesh describe pod agoramesh-node-abc123

# Port-forward for debugging
kubectl -n agoramesh port-forward svc/agoramesh-node 8080:8080
curl http://localhost:8080/health
```

If a pod is crash-looping:

- Check logs: `kubectl -n agoramesh logs agoramesh-node-xyz --previous`
- Common causes:
  - RPC endpoint unreachable
  - Invalid configuration
  - Out of memory (check limits)
- Fix and redeploy
If performance degrades:

- Check metrics: P99 latency, request rate
- Check resource usage: CPU throttling, memory pressure
- Scale up if needed
- Check network policies blocking traffic
If node state is corrupted:

- Stop affected pods
- Restore from backup or recreate PVC
- Redeploy with fresh state
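Kubernetes can catch many of these failures automatically if the node's `/health` endpoint backs liveness and readiness probes. A sketch for the container spec, where the port matches the port-forward example above but the timings are assumptions:

```yaml
livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 15
readinessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
```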
```shell
# Backup PVC data (example with Velero)
velero backup create agoramesh-backup --include-namespaces agoramesh

# Restore
velero restore create --from-backup agoramesh-backup
```
- Network Policies:

  ```yaml
  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: agoramesh-node-policy
    namespace: agoramesh
  spec:
    podSelector:
      matchLabels:
        app.kubernetes.io/name: agoramesh-node
    policyTypes:
      - Ingress
      - Egress
    ingress:
      - ports:
          - port: 8080  # API
          - port: 4001  # P2P
    egress:
      - ports:
          - port: 443   # RPC endpoints
          - port: 4001  # P2P
  ```

- Pod Security Standards: Use the `restricted` profile
- RBAC: Minimal service account permissions
- Secrets: Rotate keys periodically