Documentation
Everything you need to install, configure, and operate Nexor across all three products.
What is Nexor?
Nexor is a full-stack network observability platform built for security and operations teams. It consists of three products that work independently or together: a native Desktop App for live packet capture, a headless Capture Agent for server and cloud deployments, and a SaaS Hub that aggregates flows from any number of agents into a searchable, alerting-capable analytics platform.
The entire pipeline — from raw packets to searchable flows — is designed for high throughput with minimal overhead. The capture core is written in Rust; the Hub backend uses Go with ClickHouse for storage, giving you sub-second query times even on hundreds of millions of flows.
How the products connect
Nexor Desktop
A native macOS and Windows application that turns your laptop or workstation into a real-time packet analyser. No tcpdump, no Wireshark filters — just open the app, pick an interface, and see enriched, decoded flows in milliseconds.
Rust Capture Core
Zero-copy packet processing with libpcap bindings. Sub-millisecond first-packet latency at line rate on a MacBook.
Deep Protocol Decode
Full reassembly and decode for DNS, TLS 1.2/1.3, HTTP/1.1, HTTP/2, gRPC, ICMP, and ARP out of the box.
GeoIP Enrichment
Every external IP is tagged with country, ASN, and org automatically using bundled MaxMind GeoLite2 databases.
Threat Intel Badges
IPs are cross-referenced against bundled threat feeds. Malicious or suspicious hosts surface a coloured threat badge instantly.
Process Attribution
On macOS, each flow is annotated with the originating process name and PID — no guesswork about which app is making that connection.
Hub Connect Mode
One toggle streams all captured flows to your Nexor Hub, letting you centralise data from every machine in your team.
Installation
Nexor Desktop ships as a signed .dmg (macOS) and .exe installer (Windows). Download from the download page and run the installer — no additional runtime dependencies required.
| Platform | Requirement | Notes |
|---|---|---|
| macOS 13+ | Apple Silicon or Intel, 4 GB RAM | Notarization pending — use right-click Open |
| Windows 10/11 | x64, Npcap or WinPcap installed | Download Npcap free edition if not already installed |
Live Capture
Select any available network interface (en0, Wi-Fi, Ethernet, VPN tunnel) from the interface picker at the top of the window. Capture starts immediately. You can switch interfaces without restarting the application.
The flow table refreshes in real time. Each row represents a completed or active TCP/UDP flow with protocol, source/dest IPs and ports, packet/byte counts, duration, and any decoded application data. Scroll back through the session history — flows are retained in memory until the session is cleared or the app restarts.
Protocol Support
Nexor decodes the following application-layer protocols with full field extraction:
| Protocol | Decoded Fields | Notes |
|---|---|---|
| DNS | Query name, type (A/AAAA/MX/TXT/…), response IPs, TTL, RCODE | UDP & TCP DNS, DoT detection |
| TLS | SNI hostname, cipher suite, TLS version, cert SANs, issuer, expiry | TLS 1.0–1.3; certificate sidebar on click |
| HTTP/1.1 | Method, URI, Host header, status code, response size, user-agent | Full request/response in detail pane |
| HTTP/2 | Method, :authority, :path, status, HPACK headers, stream IDs | ALPN negotiation-based detection |
| gRPC | Service, method, status code, content-type, frame metadata | Detected on HTTP/2 + application/grpc content-type |
| ICMP | Type, code, echo id/seq for ping traffic | IPv4 and IPv6 |
| ARP | Operation (request/reply), sender/target IP and MAC | Layer-2 visibility for LAN monitoring |
Threat Intelligence
Every external IP is checked against bundled threat intelligence feeds on first observation. Matching IPs receive a coloured badge in the flow table:
| Badge | Meaning |
|---|---|
| HIGH | Known malware C2, active botnet node, or confirmed threat actor IP |
| MED | Suspicious — Tor exit node, scanning host, or grey-listed IP |
| LOW | Watchlisted — VPN endpoint, proxy, or data centre IP with no active threat |
Feeds are embedded in the application binary and updated with each release. When Hub Connect Mode is active, the Hub's live threat feed is used instead, giving you real-time intelligence updates without waiting for a new app version.
UI Panes
The Desktop UI uses a three-pane layout. Each pane is resizable and can be collapsed to focus on what matters.
| Pane | Contents |
|---|---|
| Flow List (left) | Scrollable, real-time table of all captured flows. Click any row to select it and populate the detail panes. |
| Flow Detail (top right) | Full decoded fields for the selected flow — protocol headers, decoded payload, GeoIP, threat badge, and process name. |
| Analytics / TLS Sidebar (bottom right) | Context-sensitive: shows the TLS certificate chain for TLS flows, HTTP analytics (top hosts, methods, status codes) for HTTP flows, and a service dependency graph for all traffic. |
Hub Connect Mode
Enable Hub Connect Mode in Settings → Hub Connection. Enter your Hub URL and an API token generated in Hub Settings → Tokens. Once connected, all flows captured on this machine are streamed to the Hub in real time.
# Settings → Hub Connection
Hub URL : https://your-hub.example.com
API Token : nst_xxxxxxxxxxxxxxxxxxxx
Auto-start : ✓ (reconnect on app launch)
Label : Alice-MacBook # appears in Hub Fleet view
The connection is a persistent WebSocket over TLS. If the Hub is temporarily unreachable, the Desktop buffers up to 50,000 flows in memory and replays them on reconnect.
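The buffer-and-replay behaviour can be sketched in Python (an illustrative model only — the class name HubConnection and its methods are hypothetical, not the Desktop's actual internals):

```python
from collections import deque

# Sketch of Hub Connect buffering: while the Hub is unreachable, up to
# 50,000 flows are held in memory (oldest evicted first) and replayed
# in capture order on reconnect.
MAX_BUFFERED_FLOWS = 50_000

class HubConnection:
    def __init__(self):
        self.connected = False
        self.sent = []                      # stands in for the WebSocket
        self.buffer = deque(maxlen=MAX_BUFFERED_FLOWS)

    def send_flow(self, flow):
        if self.connected:
            self.sent.append(flow)
        else:
            self.buffer.append(flow)        # oldest flow dropped when full

    def on_reconnect(self):
        self.connected = True
        while self.buffer:                  # replay in capture order
            self.sent.append(self.buffer.popleft())

conn = HubConnection()
conn.send_flow({"id": 1})                   # buffered: not yet connected
conn.send_flow({"id": 2})
conn.on_reconnect()
conn.send_flow({"id": 3})                   # sent live
print([f["id"] for f in conn.sent])         # → [1, 2, 3]
```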
Nexor Capture Agent
A lightweight, headless capture daemon designed for servers, cloud VMs, Kubernetes nodes, and network appliances. Deploy it once and it reports flows to the Hub continuously — no GUI, no manual intervention, zero local storage.
Dual Capture Modes
Choose libpcap for maximum compatibility or eBPF for kernel-level capture with near-zero CPU overhead on Linux 5.8+.
One-Line Enrollment
A single curl command downloads, installs, and enrolls the agent. The Hub generates a unique token per agent for secure identification.
Multi-Interface
Capture on any number of interfaces simultaneously. Useful for tap ports, SPAN sessions, and bonded NICs.
Cloud VPC Support
Ingest VPC Flow Logs from AWS, GCP, and Azure directly in the Hub, complementing agent-based capture for cloud workloads.
Remote Config Push
Update capture filters, interface selection, or sampling rate from the Hub Fleet page — no SSH required.
Cluster Labels
Tag agents with environment, region, or role labels. Filter flows in the Hub by any label combination.
Installation
The quickest path is the Hub-hosted install script. Navigate to Hub → Fleet → Add Agent and copy the generated command:
# Generated by Hub — includes your token automatically
curl -sSL "https://your-hub.example.com/api/v1/agents/install?token=nst_xxx" | sudo bash
This script detects your OS and architecture, downloads the correct binary, installs it to /usr/local/bin/nexor-agent, writes a systemd unit (Linux) or launchd plist (macOS), and starts the service automatically.
Capture Modes
pcap mode (default)
Uses libpcap to capture packets from network interfaces. Works on all platforms. Suitable for moderate traffic volumes. Root or CAP_NET_RAW capability required.
nexor-agent \
--hub https://hub.example.com \
--token nst_xxxxxxxxxxxxxxxx \
--iface eth0 \
--mode pcap \
--filter "not port 22" # exclude SSH
eBPF mode (Linux 5.8+)
Attaches an XDP / TC eBPF program directly in the kernel's network stack. Dramatically lower CPU usage at high packet rates. No buffer drops under sustained 10 Gbps. Requires Linux kernel 5.8+ and root.
nexor-agent \
--hub https://hub.example.com \
--token nst_xxxxxxxxxxxxxxxx \
--iface eth0 \
--mode ebpf
Enrollment & Tokens
Each agent authenticates to the Hub using a unique enrollment token. Tokens are generated in the Hub under Fleet → Add Agent or via the API. Tokens can be revoked at any time from the Fleet page without restarting the agent — it will receive a 401 and stop forwarding flows.
| Token Scope | Description |
|---|---|
| agent-enroll | One-time token used during initial enrollment to exchange for a long-lived agent credential |
| agent-write | Long-lived credential stored on the agent. Used for ongoing flow submission |
| api-read | For Hub API read access from external tools or scripts |
| api-admin | Full Hub API access including agent management and configuration |
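The handshake implied by the first two scopes — a one-time agent-enroll token exchanged for a long-lived agent-write credential — can be modelled as follows (a hypothetical sketch; TokenStore and its methods are illustrative, not the Hub's real code):

```python
import secrets

# Illustrative model of the two-step token exchange: the enroll token is
# invalidated after a single use, and the agent keeps the returned
# long-lived credential for ongoing flow submission.
class TokenStore:
    def __init__(self):
        self.enroll_tokens = set()
        self.agent_creds = {}

    def issue_enroll_token(self):
        tok = "nst_" + secrets.token_hex(10)
        self.enroll_tokens.add(tok)
        return tok

    def exchange(self, enroll_token, agent_id):
        if enroll_token not in self.enroll_tokens:
            raise PermissionError("invalid or already-used enroll token")
        self.enroll_tokens.discard(enroll_token)     # one-time use
        cred = "nst_" + secrets.token_hex(10)
        self.agent_creds[agent_id] = cred            # long-lived agent-write
        return cred

store = TokenStore()
tok = store.issue_enroll_token()
cred = store.exchange(tok, "agent-01")
print(cred.startswith("nst_"))   # → True
```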
Configuration
The agent can be configured via command-line flags, environment variables, or a YAML config file at /etc/nexor/agent.yaml.
hub: https://hub.example.com
token: nst_xxxxxxxxxxxxxxxx
mode: ebpf              # pcap | ebpf
ifaces:
  - eth0
  - eth1
filter: "not port 22"
labels:
  env: production
  region: us-east-1
  role: web
batch_size: 500         # flows per HTTP push
flush_interval: 2s      # max wait before push
log_level: info         # debug | info | warn | error
Remote Config Push
From Hub → Fleet → [Agent] → Configure, you can remotely update the filter expression, interface list, or labels without restarting the agent. Configuration changes are pushed over the existing WebSocket connection and applied within seconds.
Nexor Hub
The Hub is the central brain of a Nexor deployment. It receives flows from any number of agents and Desktop instances, stores them in ClickHouse, and exposes a rich web UI and REST API for search, alerting, anomaly detection, compliance, and more.
ClickHouse Storage
MergeTree-based columnar storage. Sustains millions of flows per second with sub-second query latency on billions of rows.
Flow Search
Full-text and structured search over every flow field. Filter by protocol, IP, port, country, ASN, process, label, and more.
Alerting
Rule-based alerts with conditions on any flow field. Deliver to email, Slack, PagerDuty, or any webhook.
AI Copilot
Natural-language interface over your flow data, powered by Claude. Ask questions, get SQL, get answers — in plain English.
Custom Dashboards
Drag-and-drop widget builder. Stat tiles, time-series charts, pie charts, top-talker tables, alert and anomaly feeds.
Compliance Reports
One-click exports for SOC 2, PCI-DSS, ISO 27001, and HIPAA. Covers TLS hygiene, unencrypted traffic, and anomalies.
Deployment
The Hub is distributed as a single Go binary plus a ClickHouse database. The quickstart script handles both:
curl -sSL https://nexor.io/hub-quickstart.sh | sudo bash
# → installs ClickHouse, downloads hub binary, creates systemd service
# → hub available at http://localhost:8080 within ~60 seconds
Environment Variables
CH_ADDR=localhost:9000
CH_DB=nexor
APP_URL=https://hub.example.com
FRONTEND_URL=https://hub.example.com
SESSION_SECRET=change-me-64-random-chars
LICENSE_KEY=ENT-xxxx-xxxx-xxxx # omit for Community tier
ANTHROPIC_API_KEY=sk-ant-… # required for AI Copilot
PRODUCTION=true # sets Secure cookies
Flows & Analytics
All flows from all agents land in the Hub's flows ClickHouse table. The Flows page provides a real-time view with powerful search and filter controls.
Searching Flows
| Filter | Example | Notes |
|---|---|---|
| src_ip / dst_ip | 10.0.0.1 | Exact or CIDR match |
| protocol | TLS, DNS, HTTP | Case-insensitive |
| port | 443, 8080 | Matches src or dst port |
| country | CN, RU | ISO 3166-1 alpha-2 |
| label | env=production | Agent label key=value |
| threat | high, medium | Threat badge filter |
| bytes | >1MB | Filter large transfers |
| time range | Last 1h / 24h / 7d / custom | Applied to all queries |
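A size filter such as >1MB has to be turned into a numeric threshold somewhere; here is one way that parsing could look (a hypothetical helper, not the Hub's actual filter grammar):

```python
import re

# Parse a size filter like ">1MB" into (operator, byte threshold).
UNITS = {"B": 1, "KB": 1024, "MB": 1024**2, "GB": 1024**3}

def parse_bytes_filter(expr):
    m = re.fullmatch(r"\s*([<>]=?)\s*(\d+(?:\.\d+)?)\s*(B|KB|MB|GB)\s*",
                     expr, re.I)
    if not m:
        raise ValueError(f"bad size filter: {expr!r}")
    op, num, unit = m.group(1), float(m.group(2)), m.group(3).upper()
    return op, int(num * UNITS[unit])

print(parse_bytes_filter(">1MB"))    # → ('>', 1048576)
```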
Services View
The Services page automatically groups flows by (src_service, dst_service, protocol) and shows a dependency graph. Useful for discovering undocumented service relationships or lateral movement patterns.
Cloud VPC Ingestion
Import VPC Flow Logs from AWS (S3 or CloudWatch Logs), GCP (Cloud Logging), or Azure (Network Watcher) under Cloud Flows. The Hub normalises all formats into the same flow schema for unified analysis.
Security Features
Alerting
Create alert rules under Alerts → Rules. Each rule specifies a condition (e.g. any flow to a HIGH-threat IP in the last 5 minutes), a severity, and one or more notification channels.
| Condition Type | Example |
|---|---|
| Threshold | Bytes out > 100 MB in 10 min from single host |
| Geo-based | Any outbound connection to country CN, RU, KP, IR |
| Threat badge | Any flow touching a HIGH threat IP |
| Protocol anomaly | DNS query count > 1000/min from single source |
| New destination | First-ever flow to this IP (zero-day baseline) |
| Port scan | >20 distinct dst ports from single src in 60 seconds |
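As an illustration, the port-scan condition in the last row could be evaluated over a batch of flows like this (a simplified sliding-window sketch, not the Hub's streaming implementation):

```python
from collections import defaultdict

# Flag any source that touches more than 20 distinct destination ports
# within a 60-second window. Flows are (src_ip, dst_port, ts_seconds).
def port_scan_sources(flows, window_s=60, port_threshold=20):
    by_src = defaultdict(list)
    for src, port, ts in flows:
        by_src[src].append((ts, port))
    flagged = set()
    for src, events in by_src.items():
        events.sort()                       # order by timestamp
        for i, (t0, _) in enumerate(events):
            ports = {p for t, p in events[i:] if t - t0 <= window_s}
            if len(ports) > port_threshold:
                flagged.add(src)
                break
    return flagged

flows = [("10.0.0.9", 1000 + p, p) for p in range(25)]   # 25 ports in 25 s
flows += [("10.0.0.7", 443, 0), ("10.0.0.7", 80, 30)]    # normal host
print(port_scan_sources(flows))   # → {'10.0.0.9'}
```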
Sigma Detection Rules
Nexor Hub implements a Sigma-compatible rule engine (Detection page). Import standard Sigma YAML rules or write custom ones. Rules are evaluated continuously against the incoming flow stream, with matches creating incidents automatically.
title: Suspicious DNS Tunnelling
id: nexor-dns-tunnel-001
status: stable
level: high
detection:
  selection:
    protocol: DNS
    query_len|gte: 60
    bytes_out|gte: 4096
  condition: selection
falsepositives:
  - AAAA record lookups for long CDN hostnames
Anomaly Detection
The anomaly engine builds a statistical baseline for each (src_ip, dst_ip, protocol) tuple over a rolling 7-day window. Deviations from the baseline — in bytes, flow count, or inter-arrival time — generate anomaly events visible on the Anomalies page. Each anomaly has a severity score and a natural-language description generated by the AI Copilot.
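A minimal version of such a baseline check, assuming a simple mean/standard-deviation model (the production engine is more sophisticated than this sketch):

```python
import statistics

# Score an observation by its deviation from the historical baseline
# (bytes per flow over the rolling window).
def anomaly_score(history, observed):
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history) or 1.0   # avoid divide-by-zero
    return abs(observed - mean) / stdev

baseline = [1200, 1100, 1300, 1250, 1150]       # bytes per flow samples
print(round(anomaly_score(baseline, 1225), 2))  # → 0.35 (near baseline)
print(anomaly_score(baseline, 50_000) > 3)      # → True (clear deviation)
```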
Threats
The Threats page aggregates all flows touching IPs from the threat intelligence feeds, grouped by threat actor and severity. Drill down into any threat entry to see the full timeline of connections, source agents, and associated alert history.
Incidents
Incidents are auto-created from high-severity alert or Sigma rule matches. Each incident gets a unique ID, a severity (P1–P4), and a status (open / in-progress / resolved). Assign incidents to team members, add investigation notes, and close with a resolution summary. Full audit trail is maintained.
Certificates
The Certs page shows every TLS certificate seen in flows, with expiry date, issuer, SANs, and the set of hosts that presented it. Expired or soon-to-expire certificates are highlighted. Certificate changes (e.g. unexpected issuer rotation) can trigger alerts.
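Flagging expired or soon-to-expire certificates reduces to a horizon comparison; a sketch (the 30-day horizon and the (common_name, expiry) data shape are assumptions for illustration):

```python
from datetime import datetime, timedelta, timezone

# Return certificates that are expired or expire within the horizon.
def expiring_certs(certs, now=None, horizon_days=30):
    now = now or datetime.now(timezone.utc)
    horizon = now + timedelta(days=horizon_days)
    return [cn for cn, expiry in certs if expiry <= horizon]

now = datetime(2026, 5, 5, tzinfo=timezone.utc)
certs = [
    ("api.example.com", datetime(2026, 5, 20, tzinfo=timezone.utc)),  # soon
    ("web.example.com", datetime(2027, 1, 1, tzinfo=timezone.utc)),   # fine
    ("old.example.com", datetime(2026, 4, 1, tzinfo=timezone.utc)),   # expired
]
print(expiring_certs(certs, now=now))  # → ['api.example.com', 'old.example.com']
```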
Policies
Define network policies — permitted and denied communication paths — under the Policies page. Nexor automatically flags policy violations as they occur in the flow stream.
AI Copilot
The AI Copilot is a natural-language interface to your flow data, powered by Anthropic's Claude. Click AI Copilot in the sidebar to open the chat panel.
| You can ask… | What happens |
|---|---|
| "Show me all DNS queries to .ru domains in the last hour" | Copilot writes a ClickHouse SQL query, executes it, and returns a formatted table |
| "Which host sent the most bytes yesterday?" | Aggregation query over the flows table, result returned as a ranked list |
| "Explain this anomaly" | Copilot pulls the anomaly context and provides a plain-English explanation with recommended next steps |
| "Is this IP suspicious?" | Looks up threat intel, GeoIP, and recent flow history for the IP |
| "Write a Sigma rule for port scan detection" | Generates a valid Sigma YAML rule you can paste into the Detection page |
The Copilot enforces a row-limit cap on all generated SQL (default 1,000 rows) to prevent accidental full-table scans. Administrators can raise this limit per-user from Settings.
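A row-limit guard of this kind might look like the following (illustrative only — the Copilot's real implementation is not shown here, and robust SQL handling needs a parser rather than a regex):

```python
import re

# Append a LIMIT clause if the generated SQL has none; clamp an existing
# LIMIT that exceeds the cap.
def enforce_row_limit(sql, cap=1000):
    sql = sql.rstrip().rstrip(";")
    m = re.search(r"\bLIMIT\s+(\d+)\s*$", sql, re.I)
    if m:
        if int(m.group(1)) > cap:
            sql = sql[:m.start()] + f"LIMIT {cap}"
        return sql
    return f"{sql} LIMIT {cap}"

print(enforce_row_limit("SELECT * FROM flows"))
# → SELECT * FROM flows LIMIT 1000
print(enforce_row_limit("SELECT * FROM flows LIMIT 5000"))
# → SELECT * FROM flows LIMIT 1000
```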
Custom Dashboards
Build any view you need from the My Dashboards page. Each dashboard is a collection of widgets arranged on a 12-column grid.
Available Widgets
| Widget | Description | Config |
|---|---|---|
| Stat | Single big-number KPI with trend indicator | Metric: total flows, active agents, bytes (24h), anomaly count, alert count |
| Time Series | Line chart of flow volume or bytes over time | Window: 1h / 6h / 24h |
| Protocol Pie | Donut chart of top protocols by flow count | — |
| Top Talkers | Ranked list of busiest source IPs | Window, rank by flows or bytes, top N (5/10/20) |
| Alert Feed | Live feed of recent alert events | Max rows (5/8/15) |
| Anomaly Feed | Live feed of recent anomaly events | Max rows, severity filter (all/high/medium+/low+) |
Widget Sizes
Each widget can be resized independently to Small (⅓ width), Medium (½ width), or Full Width using the resize controls that appear on hover in edit mode. Use the arrow buttons to reorder widgets within the grid.
Enterprise Features
The following features require an Enterprise license (LICENSE_KEY environment variable):
| Feature | Description |
|---|---|
| SSO / SAML ENT | SAML 2.0 and OIDC identity provider integration. Supports Okta, Azure AD, Google Workspace, and any standards-compliant IdP. |
| Teams & RBAC ENT | Organise users into teams. Assign dashboard, alert, and data access permissions per team. |
| Compliance Reports ENT | Automated PDF/CSV reports for SOC 2, PCI-DSS, ISO 27001, HIPAA. Schedule weekly delivery to stakeholders. |
| Integrations ENT | Pre-built integrations for Splunk, Elastic SIEM, Datadog, PagerDuty, and custom webhook targets for alert and anomaly events. |
| External Storage ENT | Route ClickHouse cold data to S3-compatible object storage for long-term retention beyond 90 days. |
| Audit Log ENT | Immutable audit trail of all user actions — login events, configuration changes, data exports, and API key usage. |
API Reference
The Hub exposes a REST API at /api/v1/. All endpoints require a bearer token via the Authorization header.
curl -H "Authorization: Bearer nst_xxx" \
https://hub.example.com/api/v1/flows?limit=50&protocol=TLS
| Endpoint | Method | Description |
|---|---|---|
| /api/v1/flows | GET | Query flows with filter params |
| /api/v1/flows/ingest | POST | Bulk ingest flows (agent use) |
| /api/v1/stats | GET | Aggregate stats (counts, bytes) |
| /api/v1/timeseries | GET | Time-bucketed flow/byte counts |
| /api/v1/alerts | GET | List alert events |
| /api/v1/alerts/rules | GET / POST | Manage alert rules |
| /api/v1/anomalies | GET | List anomaly events |
| /api/v1/agents | GET | List registered agents |
| /api/v1/agents/install | GET | Generates install script for agent |
| /api/v1/dashboards | GET / POST | List / create dashboards |
| /api/v1/dashboards/:id | GET / PUT / DELETE | Get / update / delete dashboard |
| /api/v1/sigma | GET / POST | Manage Sigma detection rules |
| /api/v1/incidents | GET / POST | List / create incidents |
| /api/v1/copilot | POST | AI Copilot chat (SSE streaming) |
nexor-sdk
The official Python SDK gives you programmatic access to every Hub capability — query flows, manage alert rules, stream real-time events, build dashboards, triage incidents, run Sigma rules, and chat with the AI Copilot — all from a clean, idiomatic Python API.
Zero runtime dependencies. The SDK uses Python's stdlib urllib exclusively. No requests, no httpx required. Works out of the box on Python 3.9+.
8 Resources
flows, alerts, anomalies, agents, dashboards, incidents, sigma, copilot — one accessor per capability.
Zero Dependencies
Pure stdlib urllib. No third-party packages needed. Works anywhere Python runs.
Real-time Streaming
SSE flow stream and token-by-token AI Copilot streaming built in with iter_all() auto-pagination.
Full Type Coverage
All API responses are typed dataclass models. IDE autocomplete and mypy strict mode supported.
Installation
pip install nexor-sdk
from nexor_sdk import Nexor
ns = Nexor(
    url="https://hub.example.com",
    token="nst_xxxxxxxxxxxxxxxxxxxx",
)
# Verify connection and token
assert ns.ping(), "Cannot reach hub"
# Hub-wide stats
stats = ns.stats()
print(f"{stats.total_flows:,} flows · {stats.active_agents} agents online")
Flows
# Query with filters
flows = ns.flows.list(
    protocol="TLS", country="CN", hours=24, limit=500
)
for f in flows:
    print(f.src_ip, "→", f.dst_ip, f.tls.sni if f.tls else "")

# Auto-paginating iterator — fetches all pages automatically
for flow in ns.flows.iter_all(threat_level="high", hours=72):
    process(flow)

# Real-time SSE stream (blocks until disconnected)
for flow in ns.flows.stream():
    if flow.is_threat:
        alert(flow)

# Top talkers by bytes in the last hour
talkers = ns.flows.top_talkers(window="1h", by="bytes", limit=10)
| Field | Type | Description |
|---|---|---|
| flow.src_ip / dst_ip | str | Source and destination IP addresses |
| flow.protocol | str | "TLS", "DNS", "HTTP", "HTTP2", "GRPC", "ICMP", "ARP" |
| flow.total_bytes | int | Computed property: bytes_in + bytes_out |
| flow.threat_level | str | "high", "medium", "low", or "" |
| flow.is_threat | bool | True if threat_level is "high" or "medium" |
| flow.process_name / pid | str / int | eBPF mode only — originating process |
| flow.tls | TlsFlow | None | TLS fields: sni, version, cert_cn, cert_expiry… |
| flow.dns | DnsFlow | None | DNS fields: query_name, query_type, answers… |
| flow.http | HttpFlow | None | HTTP fields: method, path, status, latency_ms |
Alerts & Incidents
# Create an alert rule
rule = ns.alerts.create_rule(
    name="Exfil to known-bad country",
    condition="country_code = 'CN' AND bytes_out > 1000000",
    severity="high",
    integration="webhook",
    webhook_url="https://hooks.slack.com/services/xxx",
)
# Query fired events
events = ns.alerts.list_events(severity="critical", hours=24)
# Full incident lifecycle
inc = ns.incidents.create(
    title="Possible C2 beaconing from 10.0.0.5",
    severity="P2",
)
ns.incidents.acknowledge(inc.id)
ns.incidents.add_note(inc.id, "Confirmed — isolating host")
ns.incidents.resolve(inc.id, "Host isolated, threat remediated")
AI Copilot & Dashboards
# One-shot question (buffered reply)
answer = ns.copilot.ask("Which host had the most outbound bytes today?")
print(answer)
# Streaming — token by token
for token in ns.copilot.stream("Show DNS queries to .ru in the last hour"):
    if token.sql:
        print(f"\n[SQL] {token.sql}")
    else:
        print(token.text, end="", flush=True)
# Multi-turn conversation (remembers context)
chat = ns.copilot.chat()
print(chat.send("What's our top talker today?"))
print(chat.send("What ports does it use?"))
from nexor_sdk.resources.dashboards import DashboardsResource as D
dash = ns.dashboards.create(
    name="Security Overview",
    widgets=[
        D.stat_widget("Alerts (24h)", metric="alert_count", size="sm"),
        D.stat_widget("Anomalies (24h)", metric="anomaly_count", size="sm"),
        D.stat_widget("Active Agents", metric="active_agents", size="sm"),
        D.timeseries_widget("Flow Volume", window="24h", size="lg"),
        D.top_talkers_widget(by="bytes", limit=10, size="md"),
        D.alert_feed_widget(limit=8, size="md"),
    ],
)
print(f"Dashboard URL: https://hub.example.com/dashboards/{dash.id}")
Runnable Examples
Four ready-to-run scripts ship in sdk/python/examples/:
| Script | What it does |
|---|---|
| hunt_threats.py | Query high-threat flows from the last hour, group by source host, auto-create a P2 incident for every host with ≥ 3 hits |
| export_flows_csv.py | Export all TLS flows (with full cert metadata) to a CSV file. Supports --protocol, --hours, --limit flags |
| copilot_chat.py | Interactive terminal AI Copilot session with streaming output and multi-turn conversation history |
| build_dashboard.py | Idempotently create (or update) a "Security Overview" dashboard with 7 widgets from a single script |
Error Handling
from nexor_sdk import NexorError, AuthError, RateLimitError
try:
    flows = ns.flows.list(protocol="TLS")
except AuthError:
    print("Invalid or expired token")
except RateLimitError:
    print("Rate limited — back off and retry")
except NexorError as e:
    print(f"HTTP {e.status_code}: {e}")
Privacy & PII Masking
Nexor's PII masking engine runs inside the Rust capture agent, redacting sensitive data before any flow is batched or transmitted to the Hub. Authorization tokens, session cookies, payment card numbers, passwords, and identity fields are replaced with the literal string [REDACTED] at parse time — they never touch the network, never reach ClickHouse, and never appear in logs.
This is a source-side guarantee, not a post-hoc filter. The masking runs in the same CPU cycle as the HTTP parser — there is no window during which sensitive data is stored unredacted anywhere on the agent host.
With PII masking enabled, Nexor does not transmit personal data (Art. 4(1) GDPR) to the Hub. Your ClickHouse instance stores only network-level metadata and redacted application-layer payloads.
Sensitive header redaction
The following HTTP/1.1 and HTTP/2 header names are redacted automatically. Matching is
case-insensitive (Authorization, AUTHORIZATION,
and authorization are all caught):
| Header | Why it's sensitive |
|---|---|
| Authorization | Bearer tokens, Basic credentials, API keys |
| Proxy-Authorization | Proxy credentials |
| Cookie | Session tokens, auth cookies |
| Set-Cookie | New session values set by server |
| X-Api-Key | Service API keys |
| X-Auth-Token | Service-specific auth tokens |
| X-Access-Token | OAuth access tokens |
| X-Session-Token | Session identifiers |
| X-CSRF-Token | Cross-site request forgery tokens |
| X-Forwarded-Authorization | Forwarded auth in proxy chains |
| Api-Key | Generic API key header |
| X-Amz-Security-Token | AWS temporary credentials |
| X-Goog-Api-Key | Google Cloud API keys |
JSON body field redaction
When a request or response body preview is valid JSON, the engine walks the object recursively and redacts the value of any field whose name matches one of the patterns below. Arrays are also walked. Non-JSON bodies (URL-encoded forms, plain text, binary previews) are stored unchanged.
// Input (captured by agent)
{
  "username": "alice",
  "password": "hunter2",          ← REDACTED
  "card_number": "4111111111...", ← REDACTED
  "cvv": "123",                   ← REDACTED
  "amount": 9900                  ← preserved
}

// Stored in ClickHouse Hub
{
  "username": "alice",
  "password": "[REDACTED]",
  "card_number": "[REDACTED]",
  "cvv": "[REDACTED]",
  "amount": 9900
}
Redacted field categories:
password · passwd · token · secret · api_key · access_token · refresh_token · client_secret · auth_token · session_token · private_key · signature
card_number · cvv · cvc · pan · expiry · credit_card · account_number · routing_number · iban · bic · swift
ssn · social_security · passport · dob · date_of_birth · national_id · driving_license
aws_secret_access_key · aws_session_token · x-goog-api-key · client_id
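The recursive walk described above can be mirrored in a few lines of Python (the real engine is the Rust masking module; this SENSITIVE_FIELDS subset is abbreviated for illustration):

```python
import json

# Recursively redact values whose field name matches a sensitive pattern;
# matching is case-insensitive and arrays are walked too.
SENSITIVE_FIELDS = {"password", "token", "card_number", "cvv", "secret"}

def redact(value):
    if isinstance(value, dict):
        return {
            k: "[REDACTED]" if k.lower() in SENSITIVE_FIELDS else redact(v)
            for k, v in value.items()
        }
    if isinstance(value, list):
        return [redact(v) for v in value]
    return value               # non-container values pass through unchanged

body = {"username": "alice", "password": "hunter2",
        "cards": [{"card_number": "4111...", "amount": 9900}]}
print(json.dumps(redact(body)))
```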
Extending the pattern list
Add patterns to agent/crates/parser/src/masking.rs — the lowercase name goes
into either SENSITIVE_HEADERS or SENSITIVE_FIELDS.
No other changes are required; the masking is applied automatically wherever HTTP flows are assembled.
// Add your own field name here (lowercase):
const SENSITIVE_FIELDS: &[&str] = &[
"password",
"token",
"my_custom_secret", // ← add here
// ...
];
Run cargo test -p parser after any change — 31 unit tests cover every built-in
pattern and verify the recursive walk, case-insensitivity, and non-JSON passthrough behaviour.
Performance Telemetry
Every Nexor agent (v0.7+) reports CPU usage, resident memory (RSS), and a cumulative packet-drop counter on each heartbeat (every 30 seconds). The Hub stores the samples in ClickHouse with a 30-day TTL and exposes them via a REST endpoint. The Fleet dashboard renders a live sparkline per agent card so you can verify agent overhead at a glance.
Agents report identical telemetry whether they run in pcap or eBPF mode. Older agents that predate v0.7 simply omit the telemetry fields — the Hub treats missing fields as zero and never writes an agent_perf row for them.
In lab testing at sustained 1 Gbps (mixed HTTP/1.1, HTTP/2, DNS, TLS), the agent uses <1% CPU on a single core and under 200 MB RSS. Run GET /api/v1/agents/:id/perf?limit=60 against your own fleet to pull the last 60 samples and verify independently.
Perf history API
Retrieve historical performance samples for a specific agent:
GET /api/v1/agents/{agent_id}/perf?limit=60
# Response
{
  "agent_id": "4f8a2c1e-...",
  "samples": [
    {
      "ts": "2026-05-04T12:00:00Z",
      "cpu_pct": 0.42,
      "mem_mb": 124,
      "packets_dropped": 0
    },
    ...
  ]
}
| Parameter | Type | Description |
|---|---|---|
| agent_id | path param | UUID of the agent (from the fleet list) |
| limit | query, int | Number of samples to return, newest first. Default 60, max 1440 (12 h at one heartbeat every 30 s) |
| Field | Type | Description |
|---|---|---|
| ts | DateTime64(3) | UTC timestamp of the heartbeat |
| cpu_pct | Float32 | CPU usage of the agent process, normalised to a single core (0–100) |
| mem_mb | UInt64 | Resident set size (RSS) of the agent process in megabytes |
| packets_dropped | UInt64 | Cumulative number of packets dropped due to back-pressure since agent start |
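Because packets_dropped is cumulative, a per-interval drop rate requires diffing consecutive samples; a sketch that also tolerates counter resets after an agent restart:

```python
# Convert a cumulative drop counter into per-interval deltas. A sample
# smaller than its predecessor indicates an agent restart (counter reset).
def drop_deltas(samples):
    deltas = []
    prev = None
    for s in samples:                     # samples ordered oldest → newest
        cur = s["packets_dropped"]
        if prev is None or cur < prev:    # first sample or counter reset
            deltas.append(0)
        else:
            deltas.append(cur - prev)
        prev = cur
    return deltas

samples = [{"packets_dropped": n} for n in (0, 0, 120, 120, 5)]
print(drop_deltas(samples))   # → [0, 0, 120, 0, 0]
```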
ClickHouse storage
Performance samples are stored in the agent_perf table, which uses a
standard MergeTree engine ordered by (agent_id, ts)
for efficient per-agent range scans. A TTL clause automatically drops rows older than 30 days.
CREATE TABLE IF NOT EXISTS agent_perf (
    agent_id         String,
    ts               DateTime64(3, 'UTC') DEFAULT now64(),
    cpu_pct          Float32 DEFAULT 0,
    mem_mb           UInt64  DEFAULT 0,
    packets_dropped  UInt64  DEFAULT 0
) ENGINE = MergeTree()
ORDER BY (agent_id, ts)
TTL ts + INTERVAL 30 DAY;
To query the rolling average CPU across your whole fleet over the last hour:
SELECT
    agent_id,
    avg(cpu_pct)         AS avg_cpu,
    max(mem_mb)          AS peak_mem_mb,
    max(packets_dropped) AS total_drops
FROM agent_perf
WHERE ts >= now() - INTERVAL 1 HOUR
GROUP BY agent_id
ORDER BY avg_cpu DESC;
The Fleet UI fetches the last 10 samples per agent and renders a CPU sparkline inside each agent
card. The sparkline colour shifts from indigo (healthy) to amber (>50%) to red (>80%) to
surface high-load agents at a glance. A non-zero packets_dropped count
surfaces as a warning badge on the card.
Adaptive Sampling
Nexor agents default to metadata-only mode — they capture every connection's headers, timing, DNS queries, TLS handshakes, and protocol attribution, but discard HTTP request/response body content before it is batched or transmitted. This keeps CPU and memory overhead near zero even at multi-gigabit traffic volumes.
Two mechanisms upgrade a session to full capture automatically or on demand:
Automatic error capture — when an HTTP response carries a 4xx or 5xx status code the agent retains the body preview for that session, giving engineers the error payload they need without enabling full capture globally.
Fleet UI toggle — each agent card in the Fleet dashboard has a sampling toggle. Flipping it to Full pushes a config to the Hub; the agent picks it up within 30 seconds without restarting.
Metadata mode is the recommended default for production environments — it is compatible with the PII masking guarantees described in the Privacy & PII Masking section. Full capture mode may expose body content; ensure your data handling policies permit this before enabling it fleet-wide.
Metadata vs Full mode
| Feature | Metadata mode (default) | Full mode |
|---|---|---|
| HTTP method, path, status | ✅ | ✅ |
| Request / response headers | ✅ (PII-masked) | ✅ (PII-masked) |
| Timing & latency | ✅ | ✅ |
| DNS queries & answers | ✅ | ✅ |
| TLS handshake metadata | ✅ | ✅ |
| eBPF process attribution | ✅ | ✅ |
| Request body preview | ❌ (stripped) | ✅ |
| Response body preview | ❌ unless 4xx/5xx | ✅ |
| Typical agent CPU overhead | <1% at 1 Gbps | ~2–5% at 1 Gbps |
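The body-retention rules in the table reduce to a small predicate; a sketch of the assumed logic:

```python
# Whether the agent keeps a response body preview: always in full mode,
# and only for 4xx/5xx statuses in metadata mode.
def keep_response_body(mode, status):
    if mode == "full":
        return True
    return mode == "metadata" and 400 <= status < 600

print(keep_response_body("metadata", 200))   # → False
print(keep_response_body("metadata", 503))   # → True
print(keep_response_body("full", 200))       # → True
```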
Sampling API
Read or update the sampling mode for a specific agent via the REST API. Changes are picked up by the agent on its next config poll (within 30 seconds).
GET /api/v1/agents/{agent_id}/sampling
# Response
{"agent_id": "4f8a2c1e-...", "mode": "metadata"}
POST /api/v1/agents/{agent_id}/sampling
Content-Type: application/json
X-Api-Key: <admin-key>
{"mode": "full"}
# Response
{
  "ok": true,
  "agent_id": "4f8a2c1e-...",
  "mode": "full",
  "pushed_at": "2026-05-05T12:00:00Z"
}
Valid values for mode are metadata and
full. Any other value returns HTTP 400.
The config is delivered via the existing agent_configs ClickHouse table
and acknowledged by the agent using the same poll/ack mechanism as other remote config pushes.
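Building the sampling push described above with stdlib urllib might look like this (a sketch — only the endpoint path, the X-Api-Key header, and the two valid modes come from this section; everything else is illustrative):

```python
import json
import urllib.request

# Only "metadata" and "full" are accepted; the Hub returns HTTP 400
# for any other value.
VALID_MODES = {"metadata", "full"}

def build_sampling_request(hub_url, agent_id, mode, api_key):
    if mode not in VALID_MODES:
        raise ValueError(f"invalid mode {mode!r}")   # Hub would return 400
    return urllib.request.Request(
        f"{hub_url}/api/v1/agents/{agent_id}/sampling",
        data=json.dumps({"mode": mode}).encode(),
        headers={"Content-Type": "application/json", "X-Api-Key": api_key},
        method="POST",
    )

req = build_sampling_request("https://hub.example.com", "4f8a2c1e", "full", "nst_xxx")
print(req.get_method(), req.full_url)
# → POST https://hub.example.com/api/v1/agents/4f8a2c1e/sampling
```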
Incident Replay Timeline
When an anomaly fires, click Replay to pull every flow captured in a ±5-minute window around the event and render them as a parallel protocol lane timeline. This gives you a full picture of what was happening on the wire at the moment of the incident — without having to write a ClickHouse query.
Trigger
Click Replay on any row in the Anomalies page. The timeline opens pre-loaded, centred on the anomaly timestamp.
Protocol lanes
Flows are sorted into five parallel lanes: HTTP (blue), HTTP/2 & gRPC (indigo), DNS (magenta), TLS (cyan), TCP/UDP (slate).
Red trigger line
A red vertical line marks the exact anomaly timestamp across all lanes, so you can spot activity that coincided with the event.
Flow detail panel
Click any flow block to expand a detail panel showing src/dst, bytes, duration, and flow info without leaving the timeline.
Replay API
The Hub exposes a single endpoint that the timeline UI calls. You can also query it directly for scripting or integration with external incident management tools.
GET /api/v1/replay
?agent_id=4f8a2c1e-...
&around=2026-05-05T14:32:00Z # RFC3339 incident timestamp
&window_mins=5 # ±minutes — default 5, max 30
# Response
{
"agent_id": "4f8a2c1e-...",
"hostname": "prod-web-01",
"around": "2026-05-05T14:32:00Z",
"window_mins": 5,
"from": "2026-05-05T14:27:00Z",
"to": "2026-05-05T14:37:00Z",
"total": 247,
"flows": [
{
"id": "d1e2f3a4-...",
"agent_id": "4f8a2c1e-...",
"hostname": "prod-web-01",
"timestamp": "2026-05-05T14:27:04.182Z",
"protocol": "HTTP",
"src_ip": "10.0.0.5",
"src_port": 58432,
"dst_ip": "10.0.0.1",
"dst_port": 80,
"bytes_in": 1248,
"bytes_out": 8192,
"duration_ms": 12,
"info": "GET /api/health → 200"
}
// … up to 500 flows, sorted ASC by timestamp
]
}
Timeline UI
The replay page is served at /replay and accepts the same query params as the API endpoint. You can link directly to a replay from your own alerting tools:
https://your-hub/replay?agent_id=4f8a2c1e-...&around=2026-05-05T14%3A32%3A00Z&hostname=prod-web-01&window_mins=10
The hostname parameter is display-only and optional.
The window_mins parameter accepts integers from 1 to 30.
Flows are fetched client-side from the Hub API via the Next.js proxy, so no additional CORS configuration is required.
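A deep link like the one above can be assembled with standard URL encoding. A Python sketch, where the Hub host and agent ID are placeholders:

```python
from urllib.parse import urlencode

def replay_link(hub_base: str, agent_id: str, around_rfc3339: str,
                hostname: str = "", window_mins: int = 5) -> str:
    """Build a /replay deep link. urlencode percent-escapes the
    colons in the RFC3339 timestamp (":" becomes "%3A")."""
    if not 1 <= window_mins <= 30:
        raise ValueError("window_mins must be between 1 and 30")
    params = {"agent_id": agent_id, "around": around_rfc3339,
              "window_mins": window_mins}
    if hostname:
        params["hostname"] = hostname  # display-only, optional
    return f"{hub_base}/replay?{urlencode(params)}"

link = replay_link("https://your-hub", "example-agent-id",
                   "2026-05-05T14:32:00Z",
                   hostname="prod-web-01", window_mins=10)
```

Wiring this into an alerting webhook gives responders a one-click jump straight into the incident window.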
Natural Language Flow Search
A plain-English search bar lives at the top of the Flows page. Type a question — the AI translates it into protocol, IP, and time filters and applies them directly to the flow table. No modal, no query language, no copy-paste.
- show me DNS failures in the last 30 minutes
- outbound connections to port 443 from 10.0.0.5
- TLS flows to external IPs today
- HTTP errors on prod-web-01 in the last hour
Relative time phrases ("last hour", "today", "yesterday") are resolved against the current UTC clock at query time. The AI's interpretation is shown as an explanation chip below the search bar — hit × to reset all filters.
Search API
The search bar calls POST /api/v1/copilot/search directly.
You can call it from scripts to build automated filter pipelines.
POST /api/v1/copilot/search
Content-Type: application/json
X-Api-Key: <key>
{"query": "DNS failures in the last 30 minutes"}
# Response
{
"filters": {
"protocol": "DNS",
"from": "2026-05-05T13:30:00Z",
"to": "2026-05-05T14:00:00Z"
},
"explanation": "DNS flows from the last 30 minutes"
}
Requires ANTHROPIC_API_KEY to be set on the Hub.
Returns HTTP 503 when the key is absent.
Passive API Inventory
Nexor auto-discovers every HTTP, HTTP/2, and gRPC endpoint called across your fleet by analysing observed traffic — no instrumentation, no code changes, no service registry required. The API Inventory page (sidebar) gives you a live registry of your internal APIs with latency, error rate, and call volume.
Endpoints marked New today were first seen within the last 24 hours — useful for detecting shadow APIs or unexpected new service integrations.
Inventory API
GET /api/v1/inventory/endpoints?window=24h&hostname=prod-web-01
# Response
{
"window": "24h",
"total": 42,
"endpoints": [
{
"hostname": "prod-web-01",
"protocol": "HTTP",
"method": "POST",
"path": "/api/v1/payments",
"call_count": 1248,
"error_count": 12,
"error_rate": 0.96,
"p95_ms": 84.3,
"avg_ms": 31.2,
"agent_count": 2,
"first_seen": "2026-05-04T08:12:00Z",
"last_seen": "2026-05-05T13:58:00Z",
"is_new": false
}
]
}
Query parameters: window (1h/6h/24h/7d), hostname (exact match), search (path substring).
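Two fields in the example response deserve a note: error_rate appears to be expressed as a percentage (12 errors over 1248 calls gives 0.96, not 0.0096), and is_new reflects a first_seen within the last 24 hours. A sketch of both interpretations; the function names are ours, not the API's:

```python
from datetime import datetime, timedelta, timezone

def error_rate_pct(error_count: int, call_count: int) -> float:
    """Error rate as a percentage, matching the example payload."""
    return round(100 * error_count / call_count, 2) if call_count else 0.0

def is_new(first_seen_rfc3339: str, now: datetime) -> bool:
    """'New today' badge: endpoint first seen within the last 24 hours."""
    first_seen = datetime.strptime(
        first_seen_rfc3339, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)
    return now - first_seen < timedelta(hours=24)

now = datetime(2026, 5, 5, 14, 0, tzinfo=timezone.utc)
# 12 / 1248 resolves to 0.96 (%), and a first_seen roughly 30 hours
# ago is no longer flagged "New today" -- matching the example above.
```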
Slack & Teams Rich Alert Threads
Alert notifications now carry enough context to start triaging without opening the Hub. Slack uses Block Kit; Teams uses Adaptive Cards 1.4.
Every alert includes
- Rule name, metric, value & threshold
- Timestamp (UTC)
- Up to 5 recent flows inline
- "View in Nexor" link
- "▶ Replay Incident" deep link
Resolve endpoint
Record a resolution with an optional note for audit purposes:
POST /api/v1/alerts/{id}/resolve
{"note": "Restarted the payment service"}
One-Command Docker Compose
The entire Nexor stack — ClickHouse, Hub API, and the web dashboard — starts with a single command. No Kubernetes, no manual service wiring.
git clone https://github.com/Libinm264/nexor.git
cd nexor
# Fill in secrets — at minimum CLICKHOUSE_PASSWORD and API_KEY
cp .env.example .env
$EDITOR .env
docker compose up -d
# Dashboard → http://localhost:3000
# Hub API → http://localhost:8080
Services start in dependency order: ClickHouse must pass /ping before the Hub API starts, and the Hub API must pass /health before the web UI starts. A clean first boot typically takes under 60 seconds on a machine with Docker installed.
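That ordering can be expressed with Compose healthchecks. The fragment below is a sketch under assumptions: the service names (clickhouse, hub, web) and the availability of wget inside the images are ours, and the repository's actual compose file may differ. ClickHouse serves /ping on its HTTP port 8123.

```yaml
services:
  clickhouse:
    healthcheck:
      # ClickHouse answers "Ok." on /ping when it is ready
      test: ["CMD", "wget", "-qO-", "http://localhost:8123/ping"]
      interval: 5s
      retries: 12
  hub:
    depends_on:
      clickhouse:
        condition: service_healthy   # wait for /ping
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:8080/health"]
      interval: 5s
      retries: 12
  web:
    depends_on:
      hub:
        condition: service_healthy   # wait for /health
```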
After the stack is up, open the Fleet page in the dashboard to get a one-line agent install command with a short-lived enrollment token — no admin API key is exposed to the enrolled machine.
See .env.example at the repository root for the full list of environment variables, including optional SMTP, Anthropic API key, and port overrides.