Calibration Is Key to a Program’s Success or Failure
Across global markets, regulators have consistently emphasized that a trade surveillance system is only as strong as its calibration.
The FCA puts it plainly:
“Market abuse surveillance across industry can take many forms. It is often challenging and complex. Appropriate tailoring of alert models, which we encourage for an effective overall surveillance program, may increase the associated operational risk at alert level.”
In the simplest of terms: calibration, though essential, raises operational risk if not done carefully. Regulators' expectations center on two fundamental concepts: data transparency and robust testing.
You Can’t Calibrate What You Can’t See
Data mapping is messy, imperfect, and unavoidable. Firms don’t need perfection, but they do need clarity. A strong compliance program requires:
- Awareness of data quality issues
- Documentation of gaps in coverage
- Understanding how data limitations distort results
ACA’s Market Abuse Surveillance (MAS) solution tackles this directly. The Data Quality dashboard highlights validation issues, security identification failures, and pricing gaps, giving teams immediate visibility into the weaknesses that matter most.
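To make the idea concrete, here is a minimal sketch of what automated data-quality checks might look like. The record fields and thresholds are hypothetical, not ACA's actual schema; the point is simply that gaps like unmapped securities and missing prices can be counted and surfaced rather than discovered after the fact.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TradeRecord:
    # Hypothetical fields; real trade feeds vary by venue and vendor
    security_id: Optional[str]
    price: Optional[float]
    quantity: int

def data_quality_report(trades):
    """Count the data gaps that most often distort surveillance results."""
    report = {"missing_security_id": 0, "missing_price": 0, "non_positive_quantity": 0}
    for t in trades:
        if not t.security_id:                 # security identification failure
            report["missing_security_id"] += 1
        if t.price is None or t.price <= 0:   # pricing gap
            report["missing_price"] += 1
        if t.quantity <= 0:                   # basic validation issue
            report["non_positive_quantity"] += 1
    return report

trades = [
    TradeRecord("US0378331005", 187.25, 100),
    TradeRecord(None, 42.10, 50),            # unmapped security
    TradeRecord("GB0002634946", None, 200),  # pricing gap
]
print(data_quality_report(trades))
# {'missing_security_id': 1, 'missing_price': 1, 'non_positive_quantity': 0}
```

A simple report like this is the first step toward the documented awareness of data limitations that regulators expect.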
Testing Isn’t Optional, It’s the Whole Game
Alert thresholds must walk a fine line between being too restrictive and too loose. If the thresholds are too restrictive, risky behaviors can slip through. If they're too loose, teams can drown in noise.
Compliance failures stemming from alerts that didn't trigger valid warnings are clearly of concern, but regulators have also acted against firms for being too permissive with their thresholds. The Australian Securities and Investments Commission (ASIC), for example, penalized a firm whose surveillance system generated unmanageable alert volumes, most of which went unreviewed.
Testing goes beyond simple threshold settings as well. Modern surveillance algorithms are far more sophisticated than a simple rules engine, using multiple types of metrics or data sources, generating results at various levels of aggregation, and offering extensive options for customization.
Relying on "tuning by intuition" risks creating blind spots.
A more dependable approach is regular, structured, repeatable testing using real, historical data.
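The core of such structured testing can be sketched in a few lines: sweep candidate thresholds over historical data and, for each, measure how many alerts would have fired and how many known events would have been caught. The data, threshold values, and field names below are toy assumptions for illustration, not any specific surveillance algorithm.

```python
def backtest_thresholds(price_moves, known_abuse_days, candidates):
    """For each candidate threshold, count alerts fired on historical data
    and how many known abuse events would have been caught or missed."""
    results = []
    for threshold in candidates:
        # An alert fires on any day whose absolute price move meets the threshold
        alerts = {day for day, move in price_moves.items() if abs(move) >= threshold}
        caught = alerts & known_abuse_days
        results.append({
            "threshold": threshold,
            "alert_volume": len(alerts),
            "events_caught": len(caught),
            "events_missed": len(known_abuse_days - caught),
        })
    return results

# Toy historical data: daily % price moves, with two days flagged as abusive
price_moves = {"d1": 0.5, "d2": 6.2, "d3": 1.1, "d4": 9.8, "d5": 3.4}
known_abuse_days = {"d2", "d4"}

for row in backtest_thresholds(price_moves, known_abuse_days, [2.0, 5.0, 8.0]):
    print(row)
```

Even this toy version makes the trade-off visible: a threshold of 2.0 catches both events but triples the alert volume, while 8.0 halves the volume and misses one event. Repeating that comparison on real historical data, with results logged, is what turns intuition into evidence.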
Introducing ACA’s New Backtesting Framework
To help firms meet regulatory expectations, ACA offers a fully integrated backtesting environment purpose-built for calibration.
With it, firms can:
- Create custom scenarios to experiment with algorithm parameters
- Run tests against historical periods or in parallel with live monitoring
- Analyze results using the full MAS analytics suite; every investigative feature is available on backtesting results
- Test new algorithms safely before activating them, without the frustration of dealing with uncalibrated live alerts
A Workflow Designed for Real-World Compliance
The MAS backtesting experience is simple and audit-ready:
- Create, run, and re-run tests with different parameter combinations
- Share result sets with colleagues for collaborative review
- Automatically log every action with notes
- Publish finalized settings with a clean audit trail
The Bottom Line
Regulators have outlined their expectations clearly. Firms must understand their data and continually demonstrate that their models work. Calibration is no longer a periodic exercise, but an ongoing discipline.
ACA’s MAS backtesting framework makes that discipline achievable, transparent, and scalable.
Schedule a demo to explore how ACA’s MAS solution and backtesting framework can strengthen your surveillance program and help you meet regulatory expectations with confidence.