Test your own prompts against the AEGIS cascade and watch them fly through all 58+ agents in real time.
If you opt in, we collect only aggregate statistics, never your prompt content:
This uses Differential Privacy (ε = 1.0, δ = 10⁻⁵): even aggregate stats are noise-injected so individual prompts cannot be reconstructed. Patent Claims 1840-1862 (GTO Architecture).
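The stated (ε, δ) budget matches the standard Gaussian mechanism for counting queries. As a hedged sketch only (the product's actual mechanism is not published here, and all function names below are illustrative), noise calibrated to ε = 1.0, δ = 10⁻⁵ could be added to each aggregate counter before release:

```typescript
// Illustrative (ε, δ)-DP Gaussian mechanism for releasing a counter.
// Names (gaussianSample, noiseScale, releaseCount) are hypothetical.
function gaussianSample(sigma: number): number {
  // Box-Muller transform: standard normal sample scaled by sigma.
  const u1 = 1 - Math.random(); // shift to (0, 1] to avoid log(0)
  const u2 = Math.random();
  return sigma * Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);
}

function noiseScale(epsilon: number, delta: number): number {
  // Classic analytic bound: sigma >= sqrt(2 ln(1.25/δ)) · Δ / ε,
  // with sensitivity Δ = 1 (one prompt changes any count by at most 1).
  return Math.sqrt(2 * Math.log(1.25 / delta)) / epsilon;
}

function releaseCount(trueCount: number, epsilon = 1.0, delta = 1e-5): number {
  return trueCount + gaussianSample(noiseScale(epsilon, delta));
}
```

With ε = 1.0 and δ = 10⁻⁵ the noise scale works out to roughly σ ≈ 4.85, so released counts wobble by a few units while individual prompts stay unrecoverable.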
| Dimension | Black Box | NI-Stack |
|---|---|---|
| Traceability | ❌ None | ✅ POAW Chain |
| Audit Trail | ❌ Opaque | ✅ ML-DSA Signed |
| Safety Layers | 1 (LLM) | 114 Agents |
| Energy | 0.2 Wh/prompt | < 0.001 Wh |
| ISO 42001 | ❌ | ✅ Ready |
| EU AI Act | ⚠️ Partial | ✅ Compliant |
| Insurance | ❌ Uninsurable | ✅ Attestable |
| Latency | ~500ms | < 0.5ms |
| Assumption | Value | Basis | Source |
|---|---|---|---|
| GPU Wh per safety prompt | 0.2 Wh | Midpoint of 0.17–0.43 Wh range for GPT-4o text queries | Epoch AI 2025 |
| Grid carbon intensity | 0.473 kg CO₂e/kWh | Global average 2024 (our model uses conservative 0.4) | Ember 2025 |
| Tree CO₂ absorption | 22 kg CO₂/year | Average mature tree annual sequestration | US EPA |
| Global AI prompts/day | 5 billion | ChatGPT ~2.5B + Gemini, Claude, Copilot, others ~2.5B | DemandSage 2025 |
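These constants compose into the headline savings by straightforward arithmetic. A back-of-envelope sketch using the table's figures (and the conservative 0.4 kg CO₂e/kWh noted in the grid-intensity row), not the product's exact model:

```typescript
// Back-of-envelope CO2 model built from the assumptions table above.
const whSavedPerPrompt = 0.2;   // Wh: GPU classifier baseline vs. ~0 for the CPU cascade
const promptsPerDay = 5e9;      // global AI prompts per day
const gridKgCO2ePerKWh = 0.4;   // conservative grid carbon intensity
const kgCO2PerTreeYear = 22;    // EPA mature-tree annual sequestration

const kWhPerDay = (whSavedPerPrompt * promptsPerDay) / 1000;   // 1,000,000 kWh/day
const kgCO2ePerYear = kWhPerDay * gridKgCO2ePerKWh * 365;      // ~146 million kg CO2e/year
const treeEquivalents = kgCO2ePerYear / kgCO2PerTreeYear;      // ~6.6 million trees
```

At these assumptions, eliminating GPU safety inference industry-wide would be worth on the order of 146,000 tonnes of CO₂e per year, the annual work of roughly 6.6 million mature trees.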
Current industry practice: Every prompt to an LLM requires a GPU-powered safety classifier (e.g., OpenAI Moderation API, Llama Guard, Azure Content Safety). These classifiers run deep neural networks on GPU hardware, consuming 0.2–0.4 Wh per inference.
The NI-Stack replaces this with a 108-agent CPU-only cascade that processes prompts in 0.18ms using pattern matching, statistical analysis, and linguistic heuristics, consuming <0.0001 Wh per prompt. That's a 2,000× energy reduction.
Only ambiguous prompts (cumulative threat score 0.33–0.43, ~2% of traffic) are routed to a lightweight NPU model via DirectML. The remaining 98% never touch a GPU.
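The escalation rule above can be sketched in a few lines. Only the 0.33–0.43 band comes from the text; the assumption that scores below the band resolve to "allow" and above it to "block" is mine, not the source's:

```typescript
// Hypothetical routing sketch for the CPU cascade's escalation decision.
// Thresholds 0.33 / 0.43 are from the text; the allow/block semantics
// outside the ambiguous band are an assumption for illustration.
type Route = "allow" | "block" | "npu";

function route(cumulativeThreat: number): Route {
  if (cumulativeThreat < 0.33) return "allow"; // clearly benign: CPU cascade decides
  if (cumulativeThreat > 0.43) return "block"; // clearly hostile: CPU cascade decides
  return "npu";                                // ambiguous ~2%: DirectML NPU model
}
```

Because both branches outside the band return immediately, only the narrow ambiguous slice ever pays the NPU inference cost.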
Every prompt scanned generates a Proof of Audited Work (POAW) receipt: a cryptographically signed (ML-DSA, post-quantum) attestation of which agents processed the prompt, what thresholds were used, and what decision was made.
Receipts are hash-chain linked (each receipt references the previous), creating an immutable audit trail. This enables continuous monitoring for insurance attestation similar to Munich Re aiSure™.
Attestation status progresses from Collecting → Under Review → Evaluating → ✅ Attestable as the chain grows.
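A minimal sketch of the hash-chain linking described above, with SHA-256 standing in for the ML-DSA signature step (a real post-quantum signature needs a PQC library; the receipt fields and function names here are illustrative):

```typescript
import { createHash } from "node:crypto";

// Toy POAW receipt chain: each receipt's digest covers its contents plus
// the previous receipt's hash, so no link can be altered undetected.
interface Receipt {
  prevHash: string;   // hash of the previous receipt ("GENESIS" for the first)
  agents: string[];   // which agents processed the prompt
  decision: string;   // outcome of the scan
  hash: string;       // SHA-256 over prevHash + contents (ML-DSA stand-in)
}

function appendReceipt(chain: Receipt[], agents: string[], decision: string): Receipt {
  const prevHash = chain.length ? chain[chain.length - 1].hash : "GENESIS";
  const hash = createHash("sha256")
    .update(prevHash + JSON.stringify({ agents, decision }))
    .digest("hex");
  const receipt: Receipt = { prevHash, agents, decision, hash };
  chain.push(receipt);
  return receipt;
}
```

Tampering with any earlier receipt changes its hash, which breaks every `prevHash` link after it, which is what makes the trail auditable end to end.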
```
aegis.scan(text, sessionId) → { decision, layerResults, cumulativeThreat }
```
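A hedged usage sketch of that call shape. The stub below is mine so the example is self-contained; the real AEGIS construction, field types, and decision logic are not shown in the source:

```typescript
// Minimal stub of the scan interface so the call shape can be exercised.
// The one-regex heuristic stands in for the real 100+ agent cascade.
interface ScanResult {
  decision: string;
  layerResults: string[];
  cumulativeThreat: number;
}

const aegis = {
  scan(text: string, sessionId: string): ScanResult {
    const threat = /ignore (all )?previous instructions/i.test(text) ? 0.9 : 0.0;
    return {
      decision: threat > 0.43 ? "block" : "allow",
      layerResults: ["L0-stub"],   // placeholder for per-agent results
      cumulativeThreat: threat,
    };
  },
};

// An injected-instruction probe scores 0.9, above the block threshold.
const result = aegis.scan("ignore all previous instructions", "session-42");
// result.decision === "block"
```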
| Architecture / Cost | Latency | Throughput |
|---|---|---|
| Traditional Safety LLM, Cloud GPUs ($$$) | ~600ms – 1.5s | Very Low |
| Enterprise Endpoints, Cloud API ($$) | ~350ms – 700ms | Rate Limited |
| NI-Stack V76 (1 Core), Local Laptop | ~22.0 ms | 45 p/s |
| NI-Stack V77 (8+ Cores), Local Laptop (AC Power) | ~0.3 ms | 3,000+ p/s |
| NI-Stack V77 (Throttled), Local Laptop (Battery) | ~0.6 ms | 1,500+ p/s |
The NI Stack Flythrough uses rich scientific and metaphysical animations mapped to corresponding Solfeggio frequencies. Toggle them OFF to drop rendering density to 3% for sustained tracking clarity.