Frequently Asked Questions

What teams ask before deploying the platform.

These answers cover positioning, supported workflows, technical integration, security and deployment, and commercial terms - all in one place.


What problem does the platform solve?
It helps security teams move earlier when a new vulnerability appears and the information is still thin. You can start from a short advisory, a write-up, or other research material, and the platform turns that into synthetic telemetry, candidate detections, and validation-ready outputs that your team can review and push downstream.

Does the platform replace our SIEM?
No. The product is intentionally positioned before the SIEM. It helps teams generate and validate better detection content faster, but the final review, tuning, and production deployment still happen in the customer environment.

Who is the platform for?
The best fit is SOC teams, MDR providers, internal detection engineering teams, and security consultants who need to create or validate detections quickly when new vulnerability information lands. It is especially useful for teams that cannot wait for perfect telemetry before starting coverage work.

Which workflows does the platform support?
The current product supports four core workflows: article to detections, source to synthetic logs, logs to detections, and query validation. That means the platform can start from sparse research, from telemetry you already have, or from an existing detection that still needs proof.

Why is synthetic telemetry the starting point?
Because the biggest early-stage problem is usually missing evidence, not missing ideas. Synthetic telemetry gives teams something concrete to test against while the patch is not public, the exploit chain is still being understood, or the environment has not yet produced enough real-world signal to validate coverage.

Does it produce production-ready detections automatically?
No, and that is not the product promise. The value is faster detection engineering with evidence, review, and validation context built in. It should shorten the path to a good detection, not remove analyst judgment from the process.

Why is pricing credit-based?
Because the costly part is the full hosted workflow: ingesting research, generating synthetic telemetry, and compiling useful detections. Credit-based pricing keeps the commercial model aligned to actual workload instead of pretending all requests cost the same.

Does credit pricing price out small teams?
That is exactly why the plans ladder upward. Small teams can start with a light entry plan, while heavier users can move to larger tiers or enterprise terms. That keeps the entry point accessible without forcing light users to subsidize heavy automation.

What is the difference between the hosted platform and an enterprise deployment?
The hosted platform gives teams a managed workspace for generation, review, billing, collaboration, and day-to-day detection operations. Enterprise deployments are for organizations that need dedicated infrastructure, custom data-handling controls, procurement support, or private environment requirements beyond the standard hosted service.

How does the trial work?
Trials are reviewed manually. If approved, your team receives an invitation to create an account and activate a hosted trial workspace. That keeps the trial aligned with real buyer intent and protects the product from becoming an anonymous abuse surface.

How is this different from just prompting an LLM directly?
A direct AI prompt gives you a rule draft with no grounding, no validation, and no evidence. The platform runs a structured pipeline: it extracts the attack narrative, plans what telemetry should exist, generates synthetic logs as evidence, derives detection logic anchored to that evidence, compiles to your target backend, and validates the output against the generated telemetry. The result is a traceable artifact package - not a one-shot guess - with quality scores, portability notes, and a clear audit trail from source material to deployed rule.

Does the platform map output to MITRE ATT&CK?
Yes. The attack modeling stage extracts inferred technique and sub-technique IDs from the source material and attaches them to the run context. Tactics, techniques, and CVE or CWE references appear in the normalized attack context alongside each run. This makes it easier to reason about coverage posture and connect generated detections to your existing ATT&CK-aligned detection program.

Can the platform be used during incident response or red team exercises?
Yes. The detection build workflow is built exactly for post-incident or red team scenarios: supply observed or simulated logs, and the platform derives detection logic grounded in what you actually saw. Query validation is similarly useful during an IR engagement to quickly check whether your existing rules would have caught the observed behavior. Proactive coverage from advisories is the most common use case, but the platform supports evidence-driven detection work at any stage.

What log formats does the platform accept?
The platform accepts JSONL event bundles, raw syslog lines, CEF-formatted lines, and similar newline-delimited text. You paste or upload the content as plain text. The pipeline parses the format, normalizes field names, and uses the result as the evidence base for detection derivation or query validation. Mixed-format inputs are handled; the pipeline accepts partial schemas and works with whatever field coverage is present.
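As a rough illustration of per-line format handling like the above, the sketch below classifies each line as JSON, CEF, or raw text and normalizes it into a dict. It is not the platform's actual parser; the function names, the `_format` marker, and the simplified CEF extension handling are all assumptions for illustration.

```python
import json
import re

# Hypothetical sketch of mixed-format log parsing, not the product's pipeline.
def parse_line(line: str) -> dict:
    line = line.strip()
    # JSONL: each line is a standalone JSON object.
    if line.startswith("{"):
        try:
            return json.loads(line)
        except json.JSONDecodeError:
            pass
    # CEF: seven pipe-delimited header fields, then key=value extensions.
    if line.startswith("CEF:"):
        parts = line.split("|", 7)
        extension = parts[7] if len(parts) == 8 else ""
        # Crude key=value extraction; real CEF values may contain spaces.
        fields = dict(re.findall(r"(\w+)=(\S+)", extension))
        fields["_format"] = "cef"
        return fields
    # Fallback: treat anything else as a raw syslog/text line.
    return {"_format": "raw", "message": line}

def parse_bundle(text: str) -> list[dict]:
    # Mixed formats are fine: each line is classified independently.
    return [parse_line(l) for l in text.splitlines() if l.strip()]
```

Because each line is classified on its own, a pasted bundle can freely mix JSONL events with syslog or CEF lines.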

Does query validation run queries against a live SIEM?
No. The validation is semantic and field-presence based, not live execution. The pipeline checks whether the fields your query references actually appear in the telemetry, whether the logical conditions match the observed events, and whether the backend syntax is valid for the declared target. It produces a verdict of full pass, partial pass, fail, or inconclusive - each backed by specific evidence. This gives you a meaningful signal without needing a live SIEM environment, though the recommendation is always to test in a controlled environment before production promotion.
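The field-presence portion of that check can be sketched in a few lines. This is a minimal illustration only, assuming events are already normalized to dicts; the field-extraction regex and the verdict labels mirror the terms above but are not the platform's actual logic.

```python
import re

# Illustrative field-presence check, not the product's validation pipeline.
def validate_fields(query: str, events: list[dict]) -> dict:
    # Naive extraction: any `name=` token in the query counts as a field reference.
    referenced = set(re.findall(r"\b(\w+)\s*=", query))
    observed: set[str] = set()
    for event in events:
        observed.update(event.keys())
    missing = referenced - observed
    if not referenced:
        verdict = "inconclusive"   # nothing to check against
    elif not missing:
        verdict = "full_pass"      # every referenced field appears in telemetry
    elif missing != referenced:
        verdict = "partial_pass"   # some referenced fields present, some absent
    else:
        verdict = "fail"           # no referenced field exists in the events
    return {"verdict": verdict, "missing_fields": sorted(missing)}
```

A query referencing a field the telemetry never emits would surface here as a partial pass with that field named as evidence, rather than silently passing.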

What happens when a rule cannot be translated faithfully to a target backend?
The platform makes translation losses explicit. Every compiled backend query includes a portability assessment that classifies the translation as direct, approximate, or degraded - and states exactly which constructs could not be represented faithfully. If an operator, join, or sequence construct has no equivalent in the target backend, the compiled query notes this rather than silently substituting something incorrect. The goal is never to produce a query that looks correct but behaves differently than intended.

How are credits consumed?
Credits measure workload, not just time. Article to detections costs 1.0 credit and covers the full pipeline: source ingestion, attack modeling, synthetic telemetry, Sigma generation, compiled queries, and validation. Source to synthetic logs costs 0.6 credit. Logs to detections costs 0.5 credit. Query validation, the lightest workflow, costs 0.3 credit. Failed runs and cancelled runs do not consume credits. The credit model reflects the actual compute and pipeline cost of each workflow rather than charging a flat rate regardless of what the platform did.
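The per-workflow costs above make monthly budgeting a simple sum. The sketch below just encodes the published prices; the dictionary keys are descriptive labels, not API identifiers.

```python
# Credit costs per workflow, as listed above.
CREDIT_COST = {
    "article_to_detections": 1.0,
    "source_to_synthetic_logs": 0.6,
    "logs_to_detections": 0.5,
    "query_validation": 0.3,
}

def estimate_credits(planned_runs: dict[str, int]) -> float:
    # Failed and cancelled runs consume no credits, so count completed runs only.
    return sum(CREDIT_COST[workflow] * count
               for workflow, count in planned_runs.items())

# Example: 10 full pipelines plus 20 validations
# -> 10 * 1.0 + 20 * 0.3 = 16.0 credits
monthly = estimate_credits({"article_to_detections": 10, "query_validation": 20})
```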

How long does a run take?
Most runs complete in two to five minutes depending on the workflow mode and the length of the source material. The full pipeline on a long threat report takes longer than a query validation run on a short log sample. Progress is streamed in real time so you can watch the pipeline move through stages rather than waiting on an opaque spinner. If a run fails due to a transient provider issue, automatic retries are built in.

Can I re-run or refine a completed run?
Yes, and that is the intended workflow for iterative detection engineering. You can clone any completed run to pre-fill the New Run form with the same source URL, text, or logs, then change the analyst note, backend targets, or workflow mode before resubmitting. Refinement is also available directly from a completed run - it re-runs the same source with an updated prompt focus without starting from scratch.

What is the analyst note for?
The analyst note is a free-text scope instruction that directly influences the pipeline. Use it to narrow the attack phase - for example, "focus on persistence and lateral movement only" - to constrain platform assumptions like "Windows endpoints, Sysmon logs", or to direct the backend compiler like "prefer Elastic EQL sequence syntax". A precise analyst note consistently produces tighter, more actionable output than the same source run without guidance. It is optional but meaningful for complex sources.

Which backends does the platform compile to?
The platform compiles Sigma rules into backend-specific queries for Splunk SPL, Microsoft Sentinel KQL, Elastic EQL, Elastic KQL, Elastic ES|QL, Google SecOps YARA-L 2.0, Palo Alto Cortex XDR XQL, CrowdStrike Falcon LogScale CQL, Sumo Logic Search Query, and IBM QRadar AQL. You can select up to two backends per run. Sigma rules are always produced as the portable canonical layer regardless of which backends you choose, so switching platforms later does not require regenerating from scratch.

Why is Sigma the canonical format?
Sigma is an open, vendor-neutral detection rule format supported by most major SIEMs and detection platforms. Writing in Sigma first means the detection intent is captured once in a portable, auditable format and then compiled to any target backend - rather than writing the same rule four times in four different query languages. It also makes rules easier to share, version-control, and contribute to community repositories. The platform treats Sigma as the canonical internal representation and derives all backend-specific queries from it.
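For readers unfamiliar with the format, a minimal Sigma rule looks roughly like this. The rule below is an illustrative example of the format, not platform output:

```yaml
title: Suspicious Encoded PowerShell Command
status: experimental
logsource:
  product: windows
  category: process_creation
detection:
  selection:
    Image|endswith: '\powershell.exe'
    CommandLine|contains: '-enc'
  condition: selection
level: medium
```

Backend compilers then map the logsource declaration and field modifiers such as |endswith and |contains onto each target's query language, which is exactly the translation step the portability assessment reports on.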

Can we bring our own model provider?
Yes, on enterprise plans. The platform supports configuring a custom model provider endpoint, base URL, auth token, and proxy settings so the pipeline runs against your own model infrastructure instead of the default hosted provider. This is the right path for organizations with data residency requirements, existing model procurement agreements, or the need to run against an air-gapped or on-premises LLM deployment.

Is there an API?
Yes. The external API uses HMAC-SHA256 signed requests, which means each API key carries a deterministic, auditable signature - no bearer tokens that can be replayed without the secret. All four workflow modes, the full run lifecycle, saved detections, and PDF exports are accessible through the API. Pro and Team plans include API access. Keys are scoped to the issuing user's role: analyst keys are read-only, manager and admin keys can submit runs. Full documentation with Python, Node.js, and cURL examples is available in the product.

Can I export rules and reports?
Both. Sigma rules can be downloaded as a YAML file directly from the run review screen. Compiled backend queries can be downloaded as backend-specific files - SPL, KQL, EQL, and so on. A structured PDF report covering the full artifact package is generated on demand for any completed run or saved detection. The external API also exposes these artifacts programmatically if you need to pull them into a CI/CD pipeline or detection library.

How is submitted content handled?
Submitted content - URLs, pasted text, and log samples - is used to execute the requested pipeline workflow and is stored attached to the resulting run record in your workspace. It is not used to train models, shared across organizations, or retained beyond your workspace retention window. The full data handling policy is documented in the Privacy Policy. For organizations with strict data classification requirements, the enterprise deployment option keeps all submitted content inside your own infrastructure.

Can the platform be deployed privately?
Yes. Enterprise plans can support private deployment models, including customer-managed cloud or dedicated infrastructure arrangements, with custom quotas, governance controls, and rollout support. This path is appropriate for organizations in regulated industries, those with strict data residency requirements, or those that cannot route threat research through a shared hosted service.

Where do URL fetches originate?
URL ingestion is performed by the platform's hosted infrastructure, not from the submitting user's IP address or network. This is intentional: it decouples your internal network from outbound requests to external research sources and keeps the fetch behavior consistent across users. If your environment requires that all outbound requests originate from a known IP range - for example, to access an internal threat intel portal - use the text paste input instead of the URL input.

Does the platform support review workflows and external sharing?
Yes on both. Saved detections support inline comments, review status assignment (pending review, approved, deferred), a full revision history with per-change diffs, and rollback to any previous version. Every change records who made it and when. Share links with controlled expiry and access restrictions let analysts collaborate with external reviewers without granting full workspace access. This review and governance layer is what separates a generated detection candidate from a promoted, owned detection artifact.

Does the platform support SSO and role-based access control?
Yes on enterprise deployments. Authentication can be integrated with your identity provider so user lifecycle, MFA requirements, and sign-in policies stay under your control. Workspace roles limit who can submit runs, review detections, manage billing, or administer deployment settings. That reduces the operational risk of sharing a detection engineering platform across multiple analysts and teams.

Research foundation

Curious how the pipeline actually works under the hood?

The detection engine is grounded in seven peer-reviewed papers. From TTP extraction precision of 97% to Sigma self-correction loops that cut error cycles from 12 to 3, every algorithmic choice has a published basis.

Read the research foundation