Validation that focuses on behaviour, not just returns.
Dovest’s validation stack is built to show how the engine behaves under stress, not just how it performed in one favourable sample. We track what the engine was allowed to do, what it actually did, and how that behaviour lines up with its stated envelope.
Validation Stack Overview
- Feed integrity, survivorship-bias controls, and corporate-action handling, audited before any strategy-level test is trusted.
- Out-of-sample protocols, parameter freezes, and regime-aware testing that prevent the engine from being fit to noise.
- Paper and low-risk capital runs that compare live behaviour against the validated envelope in real microstructure.
- Third-party-style review of track-records, policies, and monitoring to confirm that the story matches the evidence.
A Dovest track-record is not just a line of P&L. It specifies the engine configuration, the rules that governed it, and the set of opportunities it was allowed to take but chose not to.
Policy-anchored performance
Each period of performance is tied to a frozen ruleset, risk envelope, and deployment scope, so behaviour is explained by design, not by ad-hoc overrides.
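One way a frozen ruleset can be tied to a performance period is by fingerprinting the configuration, so any change produces a new, attributable identity. A minimal sketch, assuming a dict-based config (field names and values are illustrative, not Dovest's actual schema):

```python
import hashlib
import json

def freeze_ruleset(config: dict) -> str:
    """Fingerprint a ruleset so a performance period can be tied to it.

    Canonical JSON (sorted keys, fixed separators) makes the hash stable
    regardless of dict insertion order.
    """
    canonical = json.dumps(config, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical envelope: changing any field yields a new fingerprint,
# so behaviour in a period is attributable to exactly one frozen design.
ruleset = {"max_gross_exposure": 2.0, "per_name_cap": 0.05, "venues": ["XNYS", "XNAS"]}
fingerprint = freeze_ruleset(ruleset)
```

Any ad-hoc override would change the fingerprint, which is what makes "explained by design" auditable rather than asserted.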
Engine cohorts instead of products
Track-records are grouped by engine identity and environment, so allocators see a stable behaviour profile instead of a marketing share class.
Hit-rate vs allowed signals
We record how many safe opportunities were presented by the signal stack vs how many were actually traded, separating engine design from capital constraints.
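The accounting behind this split is simple: count envelope-approved signals separately from executed ones. A sketch with a hypothetical signal-log schema (the `allowed`/`traded` flags are illustrative):

```python
def participation_stats(signals):
    """Separate engine design from capital constraints.

    `signals` is a list of dicts with boolean flags (hypothetical schema):
      allowed - the signal passed the safety envelope
      traded  - the engine actually took the trade
    """
    allowed = [s for s in signals if s["allowed"]]
    traded = [s for s in allowed if s["traded"]]
    return {
        "allowed": len(allowed),
        "traded": len(traded),
        "participation": len(traded) / len(allowed) if allowed else 0.0,
    }

log = [
    {"allowed": True, "traded": True},
    {"allowed": True, "traded": False},   # skipped: capital constraint, not design
    {"allowed": False, "traded": False},  # blocked by the envelope itself
    {"allowed": True, "traded": True},
]
stats = participation_stats(log)  # participation = 2 of 3 allowed signals
```

A low participation rate with a healthy allowed count points to capital or sizing limits, not a weak signal stack.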
Stress validation focuses on the engine’s decisions during volatility spikes, gaps, and liquidity shocks. The goal is not to avoid all pain, but to show that losses remain inside a pre-agreed envelope.
Shock-day notebooks
For major stress days, we produce a narrative: what the engine saw, what it was allowed to do, and which safeguards activated.
Regime-aware stats
Drawdowns, win-rates, and risk metrics are broken down by volatility and liquidity regimes instead of averaged into a single number.
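A minimal sketch of the bucketing, assuming each day is pre-labelled with a regime (labels and returns below are invented examples):

```python
def max_drawdown(returns):
    """Peak-to-trough loss of the compounded return path."""
    equity, peak, worst = 1.0, 1.0, 0.0
    for r in returns:
        equity *= 1.0 + r
        peak = max(peak, equity)
        worst = min(worst, equity / peak - 1.0)
    return worst

def stats_by_regime(days):
    """`days` is a list of (regime_label, daily_return) pairs (hypothetical)."""
    buckets = {}
    for regime, r in days:
        buckets.setdefault(regime, []).append(r)
    return {
        regime: {
            "days": len(rs),
            "win_rate": sum(r > 0 for r in rs) / len(rs),
            "max_drawdown": max_drawdown(rs),
        }
        for regime, rs in buckets.items()
    }

days = [("calm", 0.002), ("calm", -0.001), ("stressed", -0.03), ("stressed", 0.01)]
report = stats_by_regime(days)
```

The point of the per-regime breakdown is that an acceptable blended drawdown can hide an unacceptable stressed-regime one.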
Behaviour drift flags
Quantitative thresholds highlight when live behaviour diverges from the validated pattern, even if headline returns still look acceptable.
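One simple form such a threshold can take is a validated range per behaviour feature, widened by a tolerance band before flagging. A sketch with invented feature names and ranges:

```python
def drift_flags(live, validated, tolerance=0.10):
    """Flag features whose live value leaves the validated range.

    `validated` maps feature name -> (low, high) from the validation period;
    `tolerance` widens the band by a fraction of its width before flagging,
    so small excursions do not trigger alerts (all names are hypothetical).
    """
    flags = {}
    for name, value in live.items():
        low, high = validated[name]
        slack = (high - low) * tolerance
        flags[name] = not (low - slack <= value <= high + slack)
    return flags

validated = {"signals_per_day": (40, 60), "avg_holding_hours": (2.0, 6.0)}
live = {"signals_per_day": 95, "avg_holding_hours": 4.1}
flags = drift_flags(live, validated)
# signals_per_day breaches the widened band even if headline P&L looks fine
```

This is what makes drift detectable independently of returns: the engine trading twice as often as validated is a flag whether or not it is currently profitable.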
Validation defines the conditions under which the engine is expected to stay inside its risk envelope — and what happens if it does not. This gives allocators a concrete contract around maximum pain, not just an expected return.
Max-pain scenarios
Scenario analysis showing how fast losses can accumulate under clustered gaps, liquidity withdrawal, or extended hostile regimes.
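A deterministic sketch of the clustered-gap case: compound a fixed adverse gap over consecutive days, optionally shrinking exposure after each gap to model a de-risk policy (gap size and de-risk fraction below are illustrative assumptions):

```python
def clustered_gap_loss(gap_pct, n_gaps, per_gap_de_risk=0.0):
    """Cumulative loss from `n_gaps` consecutive adverse gaps of `gap_pct`.

    `per_gap_de_risk` shrinks exposure after each gap, modelling a
    de-risk policy; 0.0 means exposure is held constant throughout.
    """
    equity, exposure = 1.0, 1.0
    for _ in range(n_gaps):
        equity *= 1.0 - gap_pct * exposure
        exposure *= 1.0 - per_gap_de_risk
    return equity - 1.0  # total drawdown (negative)

no_policy = clustered_gap_loss(0.02, 5)                          # constant exposure
with_policy = clustered_gap_loss(0.02, 5, per_gap_de_risk=0.5)   # halve exposure per gap
```

Comparing the two paths makes the scenario concrete: the question is not whether gaps hurt, but how fast the pre-agreed policy caps the accumulation.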
Capacity-aware testing
Scaling tests that show how slippage, turnover, and crowdedness evolve as capital grows across venues and universes.
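A common stylised way to sketch slippage-versus-size is a square-root market-impact model; the sketch below uses that assumption with an invented coefficient, not a calibrated Dovest model:

```python
import math

def expected_slippage_bps(order_notional, adv_notional, coeff_bps=50.0):
    """Stylised square-root impact: slippage grows with the square root of
    participation (order size as a fraction of average daily volume).
    The coefficient is an illustrative placeholder."""
    return coeff_bps * math.sqrt(order_notional / adv_notional)

def capacity_curve(adv_notional, sizes):
    """Slippage estimate at each candidate deployment size."""
    return {size: expected_slippage_bps(size, adv_notional) for size in sizes}

curve = capacity_curve(adv_notional=500e6, sizes=[1e6, 10e6, 50e6])
```

The shape is what matters for capacity testing: costs rise sub-linearly but relentlessly with size, so there is a capital level past which the validated edge is consumed by its own footprint.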
Halt & de-risk rules
Pre-defined policies for de-risking or pausing the engine when drawdown, slippage, or behaviour drift cross agreed thresholds.
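Such a policy can be expressed as a pure mapping from live state to a pre-agreed action. A sketch with illustrative thresholds (the limit values are invented, not Dovest defaults):

```python
def risk_action(drawdown, slippage_bps, drift_flagged, limits):
    """Map live state to a pre-agreed action. `limits` carries the envelope,
    e.g. {"halt_dd": -0.10, "derisk_dd": -0.05, "max_slippage_bps": 25.0}."""
    if drawdown <= limits["halt_dd"]:
        return "HALT"
    if (drawdown <= limits["derisk_dd"]
            or slippage_bps > limits["max_slippage_bps"]
            or drift_flagged):
        return "DE_RISK"
    return "NORMAL"

limits = {"halt_dd": -0.10, "derisk_dd": -0.05, "max_slippage_bps": 25.0}
action = risk_action(drawdown=-0.06, slippage_bps=12.0, drift_flagged=False, limits=limits)
# a 6% drawdown crosses the de-risk threshold before the halt threshold
```

Because the mapping is deterministic and versioned with the ruleset, the action taken on a stress day is checkable after the fact against what was agreed in advance.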
Monitoring pipelines
Live monitoring tracks both performance and behaviour features so early warning signals appear before capital loss becomes irreversible.
- Real-time checks on signal frequency, spread, and depth vs validated ranges.
- Alerts for behaviour drift at the engine, book, and venue level.
- Automatic notebooks for days that breach stress thresholds.
- Versioned configuration so any change is traceable and reversible.
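The real-time checks in the first two bullets can be sketched as a range monitor that emits scoped alert records; metric names, ranges, and the `scope` labels below are hypothetical examples:

```python
def check_ranges(observed, validated_ranges, scope):
    """Emit one alert record per metric outside its validated range.

    `scope` labels the level the check ran at ("engine", "book", "venue"),
    matching the alerting levels described above.
    """
    alerts = []
    for metric, value in observed.items():
        low, high = validated_ranges[metric]
        if not low <= value <= high:
            alerts.append({
                "scope": scope,
                "metric": metric,
                "value": value,
                "validated": (low, high),
            })
    return alerts

validated = {"spread_bps": (1.0, 8.0), "top_of_book_depth": (5_000, 50_000)}
observed = {"spread_bps": 14.2, "top_of_book_depth": 12_000}
alerts = check_ranges(observed, validated, scope="venue")
# one alert: spread is outside its validated band at the venue level
```

Each alert carries the validated band alongside the live value, so the record itself documents why it fired.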
Governance & allocator views
Governance tooling exposes the same evidence set to allocators that we use internally.
- Read-only dashboards for track-records, envelopes, and policy triggers.
- Audit logs covering overrides, manual actions, and configuration edits.
- Downloadable reports suitable for investment and risk committees.
- Hooks to integrate with existing risk, OMS, and compliance systems.