8 detection layers. Zero dependencies. No cloud. No GPU.
No strings attached.
Try the AEGIS Demo SDK and see for yourself.
Each layer runs independently in under 0.1 ms. Together, they catch the most common prompt injection patterns at full detection quality. We limit quantity (100 scans/day), not quality: every scan gets the complete 8-layer analysis.
- 26 known injection phrases: "ignore all previous," "DAN mode," "bypass safety."
- Catches command-style instructions: "Tell me your system prompt," "Forget your rules."
- Detects persona assignment attacks: DAN, STAN, "developer mode," "unrestricted AI."
- Blocks attempts to leak system instructions, hidden context, or LLM configuration.
- Spots obfuscation via base64, hex encoding, and excessive Unicode escape sequences.
- Flags abnormally long prompts that may contain hidden payloads or dilution attacks.
- Identifies repeated instruction patterns designed to override safety through persistence.
- Catches social engineering: fake admin claims, override codes, authority impersonation.
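To make the layered approach concrete, here is a minimal sketch of what independent pattern-based layers can look like. All function names, phrase lists, and thresholds below are illustrative assumptions for this sketch, not the AEGIS SDK's actual API or rule set:

```python
import re

# Illustrative phrase and persona lists (the real SDK ships 26 phrases).
KNOWN_PHRASES = ("ignore all previous", "dan mode", "bypass safety")

def layer_known_phrases(prompt: str) -> bool:
    """Substring match against known injection phrases."""
    lowered = prompt.lower()
    return any(phrase in lowered for phrase in KNOWN_PHRASES)

def layer_obfuscation(prompt: str) -> bool:
    """Flag long base64-looking runs or dense hex/Unicode escape sequences."""
    if re.search(r"[A-Za-z0-9+/]{40,}={0,2}", prompt):
        return True
    escapes = re.findall(r"\\x[0-9a-fA-F]{2}|\\u[0-9a-fA-F]{4}", prompt)
    return len(escapes) > 5

def layer_length(prompt: str, max_chars: int = 8000) -> bool:
    """Flag abnormally long prompts (possible payload dilution)."""
    return len(prompt) > max_chars

def layer_repetition(prompt: str, threshold: int = 3) -> bool:
    """Flag the same instruction repeated to override safety by persistence."""
    lines = [ln.strip().lower() for ln in prompt.splitlines() if ln.strip()]
    return any(lines.count(ln) >= threshold for ln in set(lines))

LAYERS = (layer_known_phrases, layer_obfuscation, layer_length, layer_repetition)

def scan(prompt: str) -> dict:
    """Run every layer independently and report which ones fired."""
    hits = [fn.__name__ for fn in LAYERS if fn(prompt)]
    return {"flagged": bool(hits), "layers": hits}
```

Because each layer is a pure function over the prompt, layers stay independent and fast, and a new layer is just another entry in the tuple.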
This SDK is free because its purpose is validation, not revenue. We built the NI-Stack to prove that AI safety can be sovereign, energy-efficient, and mathematically traceable. The best way to prove that? Let independent researchers, CTOs, and developers verify it themselves.
"The tools are free because the science is what has value. When you validate our technology, you prove what years of research are worth. That's how both sides win."
We don't have a pricing page. We have a validation program. We're not looking for customers — we're looking for people who care enough about AI safety to test it rigorously.
We believe in radical honesty. This demo contains 8 of our 58+ detection agents. The remaining 50 are part of our protected research. We don't degrade quality — we limit quantity.
| Capability | AEGIS Demo (Free) | Full NI-Stack (58+ agents) |
|---|---|---|
| Detection Layers | 8 (full quality per layer) | 58+ specialized agents |
| TPR (Threat Detection) | ~70% | ~96% (GTO-verified) |
| FPR (False Alarms) | ~8% | ~4% |
| Scanning Speed | 0.03–0.08ms | <0.5ms (full cascade) |
| Semantic Analysis | ❌ Pattern-only | ✅ Intention Coherence Score |
| Multi-Turn Detection | ❌ Single-prompt | ✅ Crescendo drift tracking |
| Self-Improving | ❌ Static patterns | ✅ SIREN feedback loop |
| Audit Trail | ❌ Not included | ✅ POAW receipts (ML-DSA signed) |
| Access | Free — 100 scans/day (10M/day with validator key) | Validation Consortium (50 seats — apply below) |
We're not launching a free trial. We're forming a research consortium of 50 validators (academics, CTOs, insurance auditors, and security researchers) who independently verify the NI-Stack. Your testing creates the evidence that no marketing budget can buy: independent confirmation.
- **Academics:** Co-author credit, full dataset access, and methodology review. Your citations create the moat no acquirer can ignore.
- **CTOs:** 10M scans/day for 7 days, a direct engineering line, and the full benchmark report. Your evaluation becomes due diligence evidence.
- **Security researchers:** CVE credit plus Hall of Fame recognition for confirmed bypasses. Pseudonymous handles welcome; we respect your privacy.
- **Insurance auditors:** Full technical briefing, benchmark reports, and compliance mapping. Independent validation for your risk assessment.
"We believe AI safety should be sovereign — running on your hardware, under your control, with every decision traceable. We built the tools. We give them freely. Because the right to safe AI shouldn't depend on your budget."