Behaviour & Risk Evidence

Validation that focuses on behaviour, not just returns.

Dovest’s validation stack is built to show how the engine behaves under stress, not just how it performed in one favourable sample. We track what the engine was allowed to do, what it actually did, and how that behaviour lines up with its stated envelope.

This page outlines how track-records are constructed, what a “six-month audit” really proves, and which tests we consider non-negotiable before capital is scaled.

Validation Stack Overview

Layer 1: Data & hygiene

Feed integrity, survivorship-bias controls, and corporate-action handling audited before any strategy-level test is trusted.

Question: “Are we measuring reality?”

Layer 2: Backtest discipline

Out-of-sample protocols, parameter freezes, and regime-aware testing that prevent the engine from being fit to noise.

Question: “Did we over-fit the sample?”

Layer 3: Live & shadow runs

Paper and low-risk capital runs that compare live behaviour against the validated envelope in real microstructure.

Question: “Does it behave the same live?”

Layer 4: Independent audit

Third-party-style review of track-records, policies, and monitoring to confirm that the story matches the evidence.

Question: “Would a sceptical allocator agree?”

Track-record construction
Showing what was allowed, not just what was traded.

A Dovest track-record is not just a line of P&L. It specifies the engine configuration, the rules that governed it, and the set of opportunities it was allowed to take but chose not to.

Audit trail

Policy-anchored performance

Each period of performance is tied to a frozen ruleset, risk envelope, and deployment scope, so behaviour is explained by design, not by ad-hoc overrides.
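
Policy anchoring is easy to make concrete: each performance record can carry a fingerprint of the frozen ruleset it ran under. A minimal Python sketch, with hypothetical envelope fields that are illustrative only, not Dovest's actual schema:

```python
import hashlib
import json

def freeze_ruleset(ruleset: dict) -> str:
    """Derive a stable identifier for a frozen ruleset and risk envelope.

    Canonical JSON (sorted keys, fixed separators) makes the hash
    independent of the order fields were declared in.
    """
    canonical = json.dumps(ruleset, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]

# Hypothetical envelope fields for illustration only.
rules = {"max_drawdown": 0.08, "gross_leverage": 2.0, "universe": "liquid_eq"}
tag = freeze_ruleset(rules)
# Any later performance record carries `tag`, so behaviour maps back to design.
```

Because the serialisation is canonical, the same envelope always yields the same tag, and any configuration change produces a new one.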

Cohorts

Engine cohorts instead of products

Track-records are grouped by engine identity and environment, so allocators see a stable behaviour profile instead of a marketing share class.

Opportunity set

Hit-rate vs allowed signals

We record how many safe opportunities were presented by the signal stack vs how many were actually traded, separating engine design from capital constraints.
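
The split between allowed and traded signals reduces to a simple participation rate. A minimal sketch, with illustrative counts:

```python
def participation(allowed_signals: int, traded_signals: int) -> float:
    """Share of safe, allowed opportunities that were actually traded."""
    if allowed_signals == 0:
        return 0.0
    return traded_signals / allowed_signals

# Example: the signal stack surfaced 240 safe opportunities; 180 were traded.
rate = participation(240, 180)  # 0.75
```

A low rate with healthy per-trade results points at capital constraints rather than engine design, which is exactly the separation the record is meant to expose.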

Behaviour under stress
Proving how the engine behaves when assumptions break.

Stress validation focuses on the engine’s decisions during volatility spikes, gaps, and liquidity shocks. The goal is not to avoid all pain, but to show that losses remain inside a pre-agreed envelope.

Event windows

Shock-day notebooks

For major stress days, we produce a narrative: what the engine saw, what it was allowed to do, and which safeguards activated.

Regimes

Regime-aware stats

Drawdowns, win-rates, and risk metrics are broken down by volatility and liquidity regimes instead of averaged into a single number.
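
Breaking statistics out by regime is mechanically simple; a minimal Python sketch with made-up regime labels and returns:

```python
from statistics import mean

def regime_stats(returns, regimes):
    """Win-rate and mean return per regime label, instead of one blended number."""
    out = {}
    for label in set(regimes):
        rs = [r for r, g in zip(returns, regimes) if g == label]
        out[label] = {
            "win_rate": sum(r > 0 for r in rs) / len(rs),
            "mean_return": mean(rs),
        }
    return out

# Illustrative daily returns (%) and regime labels.
rets = [0.4, -0.2, 0.1, -1.5, 0.3, -0.8]
regs = ["calm", "calm", "calm", "stressed", "calm", "stressed"]
stats = regime_stats(rets, regs)
```

In this toy sample the blended win-rate looks respectable, but the stressed-regime slice is uniformly negative, which is the kind of asymmetry a single averaged number hides.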

Drift

Behaviour drift flags

Quantitative thresholds highlight when live behaviour diverges from the validated pattern, even if headline returns still look acceptable.
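
One common way to implement such a flag is a z-score test of a live behaviour feature against its validated distribution. A hedged sketch; the 3-sigma limit and the trade-count feature are illustrative, not Dovest parameters:

```python
from statistics import mean, stdev

def drift_flag(validated: list[float], live: list[float],
               z_limit: float = 3.0) -> bool:
    """Flag when the live mean of a behaviour feature leaves the validated band.

    Compares the live mean against the validated mean using the
    standard error of the live sample.
    """
    mu, sigma = mean(validated), stdev(validated)
    z = abs(mean(live) - mu) / (sigma / len(live) ** 0.5)
    return z > z_limit

# Illustrative feature: daily trade counts from validation vs live runs.
validated_counts = [98, 101, 99, 102, 100, 97, 103, 100]
flagged = drift_flag(validated_counts, [140, 138, 142])  # True: behaviour moved
```

The point of keying the flag to behaviour features rather than P&L is that trade frequency, sizing, or turnover can drift long before headline returns do.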

Risk envelope & limits
Validation as a contract around maximum pain.

Validation defines the conditions under which the engine is expected to stay inside its risk envelope — and what happens if it does not. This gives allocators a concrete contract around maximum pain, not just an expected return.

Loss profile

Max-pain scenarios

Scenario analysis showing how fast losses can accumulate under clustered gaps, liquidity withdrawal, or extended hostile regimes.
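
A toy version of a clustered-gap scenario shows the arithmetic; the 3% daily gap size is illustrative:

```python
def max_pain(gap_losses: list[float]) -> float:
    """Cumulative loss when daily gap losses land back-to-back (compounded)."""
    equity = 1.0
    for loss in gap_losses:
        equity *= 1.0 - loss  # each loss hits the already-reduced base
    return 1.0 - equity

# Three clustered 3% gap days.
pain = max_pain([0.03, 0.03, 0.03])  # ~8.73%, vs the naive 9% sum
```

Compounding makes the cumulative figure slightly below the naive sum of losses, but the scenario's value is the trajectory: three such days in a row can consume most of an 8-10% drawdown budget before any policy has a chance to react.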

Capacity

Capacity-aware testing

Scaling tests that show how slippage, turnover, and crowdedness evolve as capital grows across venues and universes.
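
One stylised way to sketch the capacity effect is the square-root impact model commonly used in the market-microstructure literature; the calibration constant below is hypothetical, where real values would come from observed fills:

```python
def impact_bps(order_size: float, daily_volume: float, k: float = 10.0) -> float:
    """Stylised square-root market-impact estimate in basis points.

    k is a hypothetical calibration constant; the model says cost grows
    with the square root of participation, not linearly.
    """
    return k * (order_size / daily_volume) ** 0.5

# Quadrupling order size against the same liquidity doubles per-unit cost.
small = impact_bps(1e6, 1e8)  # 1.0 bps
large = impact_bps(4e6, 1e8)  # 2.0 bps
```

The practical point of capacity testing is to find where this curve crosses the strategy's edge, because beyond that point additional capital erodes rather than scales returns.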

Policy

Halt & de-risk rules

Pre-defined policies for de-risking or pausing the engine when drawdown, slippage, or behaviour drift cross agreed thresholds.
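
Such a policy can be expressed as a pure function from monitored metrics to a pre-agreed action; the thresholds below are illustrative, not Dovest's actual limits:

```python
def policy_action(drawdown: float, slippage_bps: float, drift: bool,
                  dd_halt: float = 0.08, dd_derisk: float = 0.05,
                  slip_limit: float = 15.0) -> str:
    """Map monitored metrics to a pre-agreed action (illustrative thresholds)."""
    # Hard halt: drawdown past the halt line, or drift while already de-risking.
    if drawdown >= dd_halt or (drift and drawdown >= dd_derisk):
        return "HALT"
    # De-risk on any single breached soft threshold.
    if drawdown >= dd_derisk or slippage_bps >= slip_limit or drift:
        return "DE_RISK"
    return "CONTINUE"
```

Keeping the mapping pure and pre-committed is the point: when thresholds are crossed, the action is read from the policy rather than debated under stress.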

Monitoring pipelines

Live monitoring tracks both performance and behaviour features, so early-warning signals appear before capital loss becomes irreversible.

  • Real-time checks on signal frequency, spread, and depth vs validated ranges.
  • Alerts for behaviour drift at the engine, book, and venue level.
  • Automatic notebooks for days that breach stress thresholds.
  • Versioned configuration so any change is traceable and reversible.
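
The range checks in the first bullet can be sketched as a lookup against validated bounds; the feature names and bounds below are hypothetical:

```python
# Hypothetical envelope produced by the validation runs; None = unbounded side.
VALIDATED_RANGES = {
    "signals_per_hour": (20, 60),
    "avg_spread_bps": (1.0, 6.0),
    "top_of_book_depth": (50_000, None),  # lower bound only
}

def range_alerts(live: dict) -> list[str]:
    """Return the names of live features outside their validated range."""
    alerts = []
    for name, (lo, hi) in VALIDATED_RANGES.items():
        value = live[name]
        if (lo is not None and value < lo) or (hi is not None and value > hi):
            alerts.append(name)
    return alerts

live = {"signals_per_hour": 75, "avg_spread_bps": 3.2, "top_of_book_depth": 80_000}
# range_alerts(live) flags signals_per_hour: above its validated range.
```

Each alert names the feature and the breached bound, which is what feeds the drift flags and the automatic shock-day notebooks described above.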

Governance & allocator views

Governance tooling exposes to allocators the same evidence set we use internally.

  • Read-only dashboards for track-records, envelopes, and policy triggers.
  • Audit logs covering overrides, manual actions, and configuration edits.
  • Downloadable reports suitable for investment and risk committees.
  • Hooks to integrate with existing risk, OMS, and compliance systems.