Strategic Direction

The Threaxis Platform

Continuous adversarial exposure validation — where human attack intelligence compounds into AI capability through an economic model that incentivises the world's best operators to contribute their discoveries to a global intelligence engine.

Launch Live Mockup →
01 — The Organising Principle

Crown Jewels as Centre of Gravity

Traditional vulnerability management treats all assets equally. Threaxis inverts this entirely: everything is organised around the assets that matter most to the business — the crown jewels. Every attack path, every finding, every remediation decision flows from the question: "Can an attacker reach what we cannot afford to lose?"

🛡 Protection Status Heatmap

The Dashboard centrepiece is the Crown Jewel Protection Status — a heatmap showing each critical asset across five attack dimensions: external reachability, lateral path existence, privilege escalation, data exfiltration, and active validated paths. At a glance, a CISO sees which jewels are exposed.

Dashboard
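As a sketch of the logic behind this view, a jewel counts as protected only when every one of the five dimensions is clear. The dimension names and schema below are illustrative assumptions, not the Threaxis data model:

```python
# Illustrative model of the Crown Jewel protection status: a jewel is
# "protected" only when none of the five attack dimensions shows exposure.
DIMENSIONS = [
    "external_reachability",
    "lateral_path_existence",
    "privilege_escalation",
    "data_exfiltration",
    "active_validated_paths",
]

def is_protected(jewel: dict) -> bool:
    """A jewel is protected when every dimension is clear."""
    return not any(jewel["exposure"].get(d, False) for d in DIMENSIONS)

def headline_metric(jewels: list[dict]) -> str:
    protected = sum(is_protected(j) for j in jewels)
    return f"{protected} of {len(jewels)} crown jewels protected"

jewels = [
    {"name": "Treasury DB", "exposure": {"external_reachability": True}},
    {"name": "HR Vault", "exposure": {}},
    {"name": "Payment Gateway", "exposure": {"active_validated_paths": True}},
]
print(headline_metric(jewels))  # → 1 of 3 crown jewels protected
```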

🎯 Business-Anchored Metrics

The headline metric isn't "vulnerabilities found" — it's 14 of 17 crown jewels protected. This reframes security posture in language the board understands: protection of business value, not compliance checkboxes. The 3 unprotected jewels drive every operational priority.

Strategic Outcome
Why this matters: When a CISO presents to the board, they can now say "We have validated protection of 14 of our 17 most critical assets, and here are the 3 we're working on" — rather than "We found 2,847 vulnerabilities last quarter."

→ View Dashboard


02 — Validated Attack Intelligence

Attack Path Validation

Threaxis doesn't just scan for vulnerabilities — it validates complete attack paths from external entry point to crown jewel. Each path is a proven chain of exploitation, mapped to MITRE ATT&CK, with a concrete business impact assessment attached.

Consider the critical path: External → Jenkins → Treasury DB. This isn't a theoretical risk — it's a 7-step validated kill chain, autonomously discovered in 4.2 hours, that proves an attacker can pivot from public internet reconnaissance through an exposed Jenkins instance, harvest credentials, move laterally through the corporate VLAN, abuse Kerberos delegation, and ultimately reach the Treasury Database containing 10.2 million customer PII records. Estimated financial impact: $45–120M.

This is the difference between a vulnerability report and an evidence-grade threat narrative.

⚠️ Business Impact Assessment

Every attack path carries a dollar-denominated impact estimate and regulatory context. This isn't security talking to security — it's security talking the language of enterprise risk.

Strategic Concept

⚡ Leverage Fix

The platform identifies that hardening the Jenkins instance (a single fix) collapses this path and 2 additional paths — eliminating 13% of all critical paths. This is leverage remediation: fix once, collapse many.

Leverage Remediation
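The selection logic can be sketched as a simple graph computation: count how many validated paths each internal node appears on, and the node with the highest count is the highest-leverage fix. The paths below are illustrative:

```python
from collections import Counter

# Illustrative leverage-fix selection: each validated attack path is a chain
# of nodes; the node appearing on the most paths is the highest-leverage fix.
paths = [
    ["External", "Jenkins", "Corp VLAN", "Treasury DB"],
    ["External", "Jenkins", "Build Agent", "Artifact Store"],
    ["External", "Jenkins", "CI Secrets", "Cloud Console"],
    ["External", "VPN", "File Server"],
]

def best_leverage_fix(paths):
    # Count each node once per path; "External" is the entry point, not a fix.
    counts = Counter(node for path in paths for node in set(path) - {"External"})
    node, collapsed = counts.most_common(1)[0]
    return node, collapsed, collapsed / len(paths)

node, collapsed, share = best_leverage_fix(paths)
print(f"Hardening {node} collapses {collapsed} of {len(paths)} paths ({share:.0%})")
# → Hardening Jenkins collapses 3 of 4 paths (75%)
```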

→ View Attack Paths


03 — The Core Innovation

The Closed Loop

The strategic centrepiece of Threaxis is the closed loop — a continuous cycle where every finding is not just discovered but evidenced, remediated, re-validated, and then converted into a permanent regression test. Nothing falls through the cracks. Nothing stays "fixed" without proof.

Context & Scope
Validation
Evidence
Remediation
Re-validation
Regression Test

The Finding Lifecycle stepper makes this loop visible. A finding is tracked through every stage — from discovery through evidence capture, remediation assignment, fix deployment, re-validation (did the fix actually work?), and finally to an active regression test that continuously ensures the vulnerability doesn't return.

Each step links to the relevant page: View Path goes to Attack Paths, View Evidence goes to the Evidence store, View Status goes to Remediation, View Proof shows the re-validation result, and View Test links to the regression playbook. The loop is not just conceptual — it's navigable.

The strategic implication: Over time, the platform accumulates a growing library of regression tests. Each resolved vulnerability becomes a permanent sensor. The attack surface doesn't just shrink — it becomes instrumented.
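The loop above can be sketched as a linear state machine. The stage names come from the stepper; the back-transition on a failed re-validation is an assumption consistent with the loop's intent:

```python
# Minimal sketch of the finding lifecycle as a linear state machine.
STAGES = [
    "Context & Scope", "Validation", "Evidence",
    "Remediation", "Re-validation", "Regression Test",
]

class Finding:
    def __init__(self, title: str):
        self.title = title
        self.stage = STAGES[0]

    def advance(self):
        self.stage = STAGES[STAGES.index(self.stage) + 1]

    def revalidate(self, fix_works: bool):
        """A working fix becomes a permanent regression test;
        a failed fix drops back to Remediation (assumed behaviour)."""
        assert self.stage == "Re-validation"
        self.stage = "Regression Test" if fix_works else "Remediation"

f = Finding("Jenkins credential exposure")
for _ in range(4):
    f.advance()              # walk forward to Re-validation
f.revalidate(fix_works=True)
print(f.stage)               # → Regression Test
```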

04 — Human-Governed Autonomy

Three Modes of Orchestration

Autonomy without governance is reckless. Pure manual control doesn't scale. Threaxis offers three orchestration modes, each calibrated to the trust level and risk tolerance of the engagement.

Fully Autonomous

External scanning and continuous validation run without human intervention. Agents execute within guardrails, blocked from out-of-scope networks, rate-limited, and logged to the audit trail. Ideal for non-production and external attack surface coverage.

Approval-Gated

The default for production environments. Agents discover and plan, but critical exploitation attempts pause and await CISO approval. The Approval Queue surfaces these decisions with full context and wait times.

Human-Led

Certified operators drive the engagement directly — used for red team operations against the most sensitive crown jewels. The platform provides infrastructure and AI co-piloting; humans provide judgment and creative exploitation.

The Live Agent Activity Feed shows this governance in action: an agent in the PCI Zone is awaiting approval for a privilege escalation attempt, whilst another test completed successfully and the system blocked an out-of-scope access attempt. This is what trust looks like in autonomous security — visible, auditable, and under the CISO's control.

→ View Orchestration


05 — The Capability Ecosystem

Arsenal: The Capability Marketplace

The Arsenal is where Threaxis transforms from a product into a platform. It's a governed marketplace of attack modules — contributed by partner firms, customer teams, independent operators, and AI agents — vetted through a rigorous quality pipeline before entering the shared capability set.

892 Total Modules
247 Operator-Contributed
89 Partner-Contributed
7 Pending Review

The Contribution Pipeline is the critical innovation: every new module passes through six stages — Submit, Auto Test, Peer Review, Safety Vet, Approved, Published. A partner firm's SAP exploitation module and an independent operator's Kerberos delegation abuse chain go through the same quality gate as internal tooling.

The flywheel effect is visible in the data: Sentinel Cyber (a partner firm) contributed a module that's already Published. An independent operator submitted a novel OAuth technique currently in Peer Review. Meanwhile, an autonomous agent submitted a DNS zone transfer module that's in Auto Test. Four sources — partner firms, customer teams, independent operators, and AI agents — all contributing through the same quality gate.

→ View Arsenal


06 — The Three-Ring Operator Model

Operators: The Human Layer

Three types of human operators work alongside AI agents. Each ring has its own economic model and activates in sequence — solving the cold-start problem that kills most marketplace platforms.

💼
Ring 1 — Partner Operators

Boutique Pentesting Firms

Bring certified staff onto the platform. Dual role: channel (recommend to clients) and supplier (contribute techniques through normal service delivery). Professional services revenue + channel margins + optional royalties.

👥
Ring 2 — Customer Teams

Internal Red Teams & Pentesters

Enterprise customers bring their own operators via BYOB (Bring Your Own Bench). No royalties — they benefit from a unified platform. Their work feeds the intelligence flywheel automatically.

🌐
Ring 3 — Independent Operators

Freelance Security Experts

Contribute Attack Modules and earn Attack Royalties — severity-weighted payouts every time a module validates an exposure across any customer. The third ring to ignite, not the first.

Attack Royalties: Operators contribute once, earn continuously. 15–20% of platform subscription revenue is allocated to the Operator Royalty Pool. A critical finding pays 3–5x more than a medium finding. The customer never pays a bounty — Threaxis pays operators from SaaS revenue. This resolves the fundamental IP tension that makes bug bounty economics untenable in an AI-augmented world.
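A minimal sketch of how such a pool split might work, assuming illustrative severity weights (the source states only that a critical finding pays 3–5x a medium one; the 4x weight below is an assumption):

```python
# Hypothetical severity weights; only the critical-to-medium ratio (3-5x)
# is stated in the model above, so 4x is an assumed midpoint.
SEVERITY_WEIGHT = {"low": 1.0, "medium": 2.0, "high": 4.0, "critical": 8.0}

def payouts(pool: float, validated_findings: list[dict]) -> dict[str, float]:
    """Split the royalty pool across operators in proportion to
    severity-weighted validated findings."""
    weights: dict[str, float] = {}
    for f in validated_findings:
        weights[f["operator"]] = weights.get(f["operator"], 0.0) + SEVERITY_WEIGHT[f["severity"]]
    total = sum(weights.values())
    return {op: round(pool * w / total, 2) for op, w in weights.items()}

findings = [
    {"operator": "op_alice", "severity": "critical"},
    {"operator": "op_alice", "severity": "medium"},
    {"operator": "op_bob", "severity": "medium"},
]
print(payouts(100_000.0, findings))  # alice takes 10/12 of the pool, bob 2/12
```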

The Operator Bench in the platform shows all three rings in action: partner firms with certified staff and active engagements, customer internal teams with their own operators running assessments, and independent operators contributing modules and earning royalties. The Operator Console gives each type a unified experience with AI co-piloting, automated reconnaissance, evidence generation, and reporting.

→ View Operator Console


07 — Fix Once, Collapse Many

Leverage Remediation

Not all fixes are equal. Threaxis surfaces the fixes that deliver disproportionate impact — the single patch that collapses multiple attack paths, the one configuration change that protects three crown jewels simultaneously.

The Remediation page operates as a Kanban board with four columns: Awaiting Fix, Fix In Progress, Awaiting Re-validation, and Validated Closed. Each card carries its SLA countdown (with overdue items highlighted in red), engineering ticket reference, assigned team, and direct links to Evidence and Regression Test.

The re-validation column is the strategic differentiator. A finding isn't "closed" because someone says they deployed a fix — it's closed because the platform re-tested the attack path and confirmed the fix works. The "Re-validation: Pass" badge on validated cards is proof, not promise.

87% Re-validation Pass Rate
3.2d Mean Time to Remediate
1 SLA Overdue

→ View Remediation


08 — Protected While You Fix

SOC Detection Bridge

The gap between “exposure found” and “fix deployed” is days to weeks. Threaxis closes it instantly — auto-generating detection rules from validated attack paths and pushing them straight into the customer’s SIEM.

🛡 Interim Detections

The moment Threaxis validates an attack path, it generates Sigma-format detection rules covering every step of the kill chain. These are transpiled to the customer’s SIEM format — Splunk SPL, Microsoft Sentinel KQL, Chronicle YARA-L, or Elastic EQL — and pushed automatically. The customer is protected from the moment of discovery, not the moment of fix.

Immediate Risk Reduction
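A deliberately naive sketch of the transpile step, assuming a Sigma-style rule expressed as a dict. A real deployment would use a full Sigma backend; the rule content and field names here are hypothetical:

```python
# Hypothetical Sigma-style rule for one step of a validated kill chain.
rule = {
    "title": "Jenkins Script Console Access",
    "detection": {
        "selection": {"http_uri": "/script", "http_method": "POST"},
        "condition": "selection",
    },
}

def to_spl(rule: dict, index: str = "web") -> str:
    """Naively render the rule's selection as a Splunk SPL search string."""
    selection = rule["detection"]["selection"]
    clauses = " ".join(f'{field}="{value}"' for field, value in selection.items())
    return f"index={index} {clauses}"

print(to_spl(rule))
# → index=web http_uri="/script" http_method="POST"
```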

🔄 Permanent Regression Monitors

When a fix is deployed and re-validation confirms it works, interim detections retire automatically. But they don’t disappear — they convert into permanent regression monitors that run continuously in the customer’s SOC. Every closed finding becomes a detection. The defence posture compounds alongside the offensive intelligence.

Compounding Defence
This opens the defence budget. A CISO hearing “we instrument your SOC with detections derived from proven kill chains” reaches for the SOC operations budget, not the pentest budget. Detection content compounds across customers — a technique validated at one customer generates reusable detection patterns for everyone. Offensive testing and defensive detection engineering, from a single platform.
4 SIEM Formats Supported
<5m Finding to Detection
2 Detection Lifecycles

09 — System of Record

Evidence-Grade Outputs

Every finding in the platform produces forensic-quality evidence with chain of custody, cryptographic integrity verification, and full provenance tracking. This isn't a vulnerability scanner — it's an evidence system.

🔒 Chain of Custody

Each evidence artefact tracks its complete provenance: an autonomous agent captured it, the Evidence Vault sealed and hashed it, and a reviewer signed off. Three steps. Full accountability. Court-grade integrity.

Forensic Integrity

✅ Integrity Verification

2,847 artefacts. 100% integrity verified. Every hash is valid, every provenance chain is complete. "Verify All Hashes" lets auditors confirm this at any time. This is evidence that stands up to scrutiny.

Audit-Ready
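The verification step can be sketched with stdlib hashing: each artefact stores the SHA-256 computed when the vault sealed it, and "Verify All Hashes" recomputes and compares. Field names are illustrative:

```python
import hashlib

# Illustrative artefact schema: payload plus the SHA-256 recorded at sealing.
def seal(payload: bytes) -> dict:
    return {"payload": payload.hex(), "sha256": hashlib.sha256(payload).hexdigest()}

def verify_all(artefacts: list[dict]) -> bool:
    """Recompute every hash and compare against the sealed value."""
    return all(
        hashlib.sha256(bytes.fromhex(a["payload"])).hexdigest() == a["sha256"]
        for a in artefacts
    )

vault = [seal(b"kerberos ticket capture"), seal(b"jenkins session log")]
print(verify_all(vault))   # → True

vault[0]["payload"] = b"tampered".hex()  # any modification breaks the hash
print(verify_all(vault))   # → False
```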
The "Replay Attack" button is a powerful concept: at any point, you can re-execute the original exploitation proof to verify it still works — or confirm that remediation has closed the path. Evidence isn't static; it's replayable.

→ View Evidence


10 — Trust Through Transparency

Governance & the Kill Switch

Autonomous offensive security requires exceptional governance. The Governance page provides the controls that make the entire model trustworthy: an emergency kill switch, configurable policies, scope controls, and a comprehensive audit log.

🛑 Emergency Kill Switch

One button halts all agent activity across all engagements. Armed but not activated — the ultimate safety net for when something unexpected happens. This is the governance primitive that makes everything else possible.

Governance

📜 Audit Log

Every action is logged: a CISO approved a privilege escalation, an agent was automatically paused outside the testing window, a scope boundary was updated. Nothing happens in the dark.

Transparency

The Scope Controls with Include/Exclude toggles define the boundaries of what agents can touch. HR Network and Legal Network are excluded globally. The PCI Zone requires approval-gated mode. These aren't suggestions — the system enforces them, as the audit log proves when agents are blocked from accessing out-of-scope subnets.
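Scope enforcement of this kind can be sketched with the stdlib ipaddress module. The CIDR ranges for the HR and Legal networks and the PCI Zone are invented for illustration:

```python
import ipaddress

# Invented CIDR ranges standing in for the scope controls described above.
EXCLUDED = [ipaddress.ip_network("10.20.0.0/16"),       # HR Network
            ipaddress.ip_network("10.30.0.0/16")]       # Legal Network
APPROVAL_GATED = [ipaddress.ip_network("10.40.0.0/24")] # PCI Zone

def scope_decision(target: str) -> str:
    addr = ipaddress.ip_address(target)
    if any(addr in net for net in EXCLUDED):
        return "BLOCK"            # recorded in the audit log
    if any(addr in net for net in APPROVAL_GATED):
        return "AWAIT_APPROVAL"   # pauses in the Approval Queue
    return "ALLOW"

print(scope_decision("10.20.5.9"))   # → BLOCK
print(scope_decision("10.40.0.17"))  # → AWAIT_APPROVAL
print(scope_decision("10.99.1.1"))   # → ALLOW
```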

→ View Governance


11 — The Endgame

The Compounding Intelligence Flywheel

Threaxis isn't just a product — it's a platform with network effects. Four participant groups contribute to an ever-expanding capability set, where every engagement makes the next one more effective. The moat is not the model layer — it is compounding attack intelligence.

💼

Partner Operators

Extend capability, contribute techniques through service delivery

👥

Customer Teams

Run internal assessments, feed intelligence automatically

🌐

Independent Operators

Discover novel attacks, earn Attack Royalties from the platform

🤖

Autonomous Agents

Continuous scanning, module creation, regression testing

The flywheel is visible across the platform: a partner firm (Sentinel Cyber) discovers a novel attack technique through a customer engagement. It enters the Arsenal through the Contribution Pipeline. Once published, it becomes available to operators and autonomous agents across all customer engagements. When a customer remediates the finding, it becomes a regression test that runs continuously. The next customer with a similar configuration gets tested automatically.

Every engagement feeds the next. Every fix creates a sensor. Every contribution expands the platform's reach. With each cycle, the gap between Threaxis and competitors widens.

The four customer outcomes this enables:

1. Launch & Release Assurance — Validate security posture before critical business events
2. Continuous Exposure Management — Ongoing crown jewel protection with evidence
3. Controls Analysis & Validation — Prove that security controls actually work under real attack conditions
4. Automated Detection Engineering — Interim and permanent SOC detections from validated attack paths, bridging offence and defence