ESMA’s spotlight on data quality – Part 3: SFTR – a case of ‘valid but wrong’


Identifying SFTR errors should be rather like shooting fish in a barrel. In spite of this, ESMA’s latest data quality report simply did not uncover the bodies.

In this edition, there was no smoking gun – more of a smoking peashooter. In relation to SFTR, only one low-key material issue was uncovered, in which three entities in a single jurisdiction had reported unfeasibly large securities loans.

Our SFTR testing experience indicates that there is a sea of price unit errors, haircut issues, incorrect quality, price and type values, and under-reporting (of collateral, re-use, cash reinvestment and funding sources), among a litany of other errors. On top of that, many aspects of SFTR remain only partially defined or undefined, such that the resultant data is of dubious value, full of uncertainty and contradiction.

Data quality indicators dashboard

ESMA stated that it had been transitioning to a new data quality indicators (DQIs) dashboard for both EMIR and SFTR, with the SFTR implementation to be completed during 2023. Together with a data-sharing framework, this is how ESMA is proactively monitoring and engaging with National Competent Authorities (NCAs) on data quality issues.

In the existing EMIR implementation, the dashboard includes 19 DQIs to detect and measure various types of misreporting, including under- and over-reporting, inconsistent reporting vis-à-vis the other counterparty, incomplete information in the key fields of the report, late reporting, abnormal values, and missing or incorrect counterparty identifiers. These indicators are computed by ESMA monthly, based on the entire EMIR dataset.
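
To make those indicator types concrete, the sketch below approximates three of them – late reporting, incomplete key fields and unpaired reports – on a simplified, made-up extract. The column names, thresholds and data are our own illustrative assumptions rather than ESMA's specification.

```python
# Illustrative only: approximating three DQI-style checks (late reporting,
# incomplete key fields, unpaired reports) on a made-up extract. Column names,
# data and thresholds are our own assumptions, not ESMA's specification.
import pandas as pd

reports = pd.DataFrame({
    "uti":              ["UTI-1", "UTI-2", "UTI-3", "UTI-4"],
    "event_date":       pd.to_datetime(["2023-03-01"] * 4),
    "reporting_date":   pd.to_datetime(["2023-03-02", "2023-03-07", "2023-03-02", "2023-03-02"]),
    "counterparty_lei": ["LEI00000000000000001", "", "LEI00000000000000001", "LEI00000000000000001"],
    "paired":           [True, True, False, True],   # matched against the other side's report?
})

# Late reporting: SFTR requires T+1, crudely approximated here as more than one calendar day.
late = reports[(reports["reporting_date"] - reports["event_date"]).dt.days > 1]

# Incomplete key fields: a counterparty identifier that is missing or not a 20-character LEI.
bad_lei = reports[reports["counterparty_lei"].str.len() != 20]

# Unpaired reports: no matching report found from the other counterparty.
unpaired = reports[~reports["paired"]]

for label, hits in [("late", late), ("incomplete LEI", bad_lei), ("unpaired", unpaired)]:
    print(label, hits["uti"].tolist())
```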

The empirical evidence from our quality assurance testing services indicates that ESMA's methods – dependent as they are on extraordinary values, the validation rules and TR reconciliation processes – may only scratch the surface of the errors present in reported data.

False comfort

In ESMA's EMIR data, there appears to be a lot of false comfort to be had from the reduction in the percentage of errors identified under these DQI methods, from a high of 27.6% to a low of 7.8%. If we were to speculate that ESMA's methods identify only 10% of the true outstanding errors, then a decline from 27.6% to 7.8% equates to an improvement of less than two percentage points in actual data quality.
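
The back-of-envelope arithmetic behind that speculation is set out below. The 10% detection rate is, to be clear, purely an illustrative assumption on our part.

```python
# Back-of-envelope reading of the 'false comfort' point. The 10% figure is an
# illustrative assumption: the share of real errors that the DQI methods can see.
detected_before, detected_after = 27.6, 7.8   # % of reports flagged by the DQIs
assumed_coverage = 0.10                       # assumed share of true errors the DQIs detect

# If the DQIs only see ~10% of the error surface, the observed 19.8-point fall
# translates into roughly a 2-point improvement in actual data quality.
implied_improvement = assumed_coverage * (detected_before - detected_after)
print(f"{implied_improvement:.2f} percentage points")   # -> 1.98 percentage points
```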

On SFTR specifically, ESMA identified implausibly high loan values (field 2.56) for securities lending trades, where loans from three entities in a single jurisdiction accounted for 77% of the total loan value in the market as a whole. On further investigation, it proved to be misreporting. This emphasises the importance of having SFTR reporting controls in place to identify ‘fat fingers’, spurious values and issues with units.
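
A pre-submission control of the kind we mean could be as simple as the sketch below, which flags loan values that breach a hard ceiling or dwarf the desk's median. The thresholds, field names and figures are illustrative assumptions only, not regulatory limits.

```python
# Illustrative plausibility check on loan value (field 2.56) before submission.
# The ceiling, multiple and data are our own assumptions, not regulatory limits.
import pandas as pd

submissions = pd.DataFrame({
    "uti":        ["A-1", "A-2", "A-3", "A-4"],
    "loan_value": [2_500_000.0, 1_900_000.0, 3_100_000.0, 4_200_000_000_000.0],  # last one looks like a unit slip
    "currency":   ["EUR", "EUR", "EUR", "EUR"],
})

HARD_CEILING    = 1e11    # e.g. flag any single securities loan above EUR 100bn
MEDIAN_MULTIPLE = 1_000   # ...or anything 1,000x the day's median

median_loan = submissions["loan_value"].median()
suspect = submissions[
    (submissions["loan_value"] > HARD_CEILING)
    | (submissions["loan_value"] > MEDIAN_MULTIPLE * median_loan)
]
print("Review before submitting:", suspect["uti"].tolist())
```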

Five fields responsible for most errors

We repeatedly state that the validation rules – or indeed the validations in conjunction with TR reconciliations – are woefully inadequate measures for establishing high standards of data quality. We have also harboured doubts that the validation rules were always applied in their entirety, as we repeatedly see accepted messages that we would expect to be ‘NACKed’. However, ESMA has chosen to perform a “revalidation” process, largely to ensure that the TRs have applied the validation rules correctly. During this exercise, it identified issues with between 23 and 43 fields, depending on the month’s run. Even so, five fields were responsible for 95% of total failures (a simplified revalidation sketch follows the list below):

  1. Collateral basket identifier (by far the biggest)
  2. Classification of a security
  3. Spread
  4. Floating rate reset frequency – multiplier, and
  5. Floating rate reference period – time period.
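
For illustration, a ‘revalidation’ pass of the sort ESMA describes can be approximated by re-running simplified field-level rules over messages the TR has already accepted and counting failures per field. The rules below are loose paraphrases we have written for the sketch, not the official SFTR validation rules.

```python
# Minimal revalidation sketch: re-apply simplified field-level rules to already
# accepted messages and count failures per field. These rules are loose
# paraphrases for illustration, not ESMA's published validation rules.
from collections import Counter

ALLOWED_PERIODS = {"DAYS", "WEEK", "MNTH", "YEAR"}

def check_record(record: dict) -> list:
    """Return the fields in a single report that fail the simplified rules."""
    failures = []
    basket = record.get("collateral_basket_id", "")
    if basket and basket != "NTAV" and len(basket) != 12:      # crude ISIN-shape test
        failures.append("collateral_basket_id")
    cfi = record.get("security_cfi", "")
    if cfi and (len(cfi) != 6 or not cfi.isalpha()):
        failures.append("security_cfi")
    if record.get("rate_type") == "FLOATING" and record.get("spread") is None:
        failures.append("spread")
    if record.get("reset_frequency_multiplier", 1) <= 0:
        failures.append("reset_frequency_multiplier")
    if record.get("reference_period", "MNTH") not in ALLOWED_PERIODS:
        failures.append("reference_period")
    return failures

accepted = [    # dummy, hypothetical reports
    {"collateral_basket_id": "XS0000000009", "security_cfi": "DBFTFB"},
    {"collateral_basket_id": "BASKET-01", "security_cfi": "XX",
     "rate_type": "FLOATING", "spread": None,
     "reset_frequency_multiplier": 0, "reference_period": "QRTR"},
]
failure_counts = Counter(field for rec in accepted for field in check_record(rec))
print(failure_counts.most_common())
```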

We have observed a number of issues with how the collateral basket identifier is populated and the ways in which this field can be ‘abused’. In some cases, we have seen individual bond ISIN codes entered (that are clearly not collateral basket ISINs), proprietary (non-official) ISIN codes, and frequent use of the collateral basket identifier (typically in relation to securities lending business) to avoid having to provide further collateral detail.
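
One cheap control that catches a good share of proprietary, made-up codes is the ISO 6166 check-digit test sketched below. Distinguishing a genuine basket ISIN from an individual bond ISIN additionally needs reference data, which is beyond this snippet; the example codes are purely illustrative.

```python
# Sketch: the ISO 6166 check-digit ('modulus 10 double-add-double') test catches
# many proprietary or mistyped codes reported in the collateral basket identifier.
# Telling a basket ISIN apart from a single bond's ISIN still needs reference data.

def isin_checksum_ok(isin: str) -> bool:
    """True if a 12-character ISIN passes the ISO 6166 check-digit test."""
    if len(isin) != 12 or not isin[:2].isalpha() or not isin.isalnum() or not isin[-1].isdigit():
        return False
    # Expand letters to two-digit numbers (A=10 ... Z=35); digits stay as they are.
    expanded = "".join(str(int(c, 36)) for c in isin.upper())
    total = 0
    for i, ch in enumerate(reversed(expanded)):
        d = int(ch)
        if i % 2 == 1:           # double every second digit moving left of the check digit
            d *= 2
        total += d // 10 + d % 10
    return total % 10 == 0

print(isin_checksum_ok("US0378331005"))   # True  - correctly formed ISIN
print(isin_checksum_ok("QQ1234567890"))   # False - fails the check digit
```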

With regard to the classification of a security (CFI code), until very recently (and the publication of Q&A 15) it was the only SFTR field with a golden source, namely the ANNA ASB database. In our experience, it is common to see the default character ‘X’ used in place of correct characters, CFI codes that bear no relation to the security in question, and inconsistencies with the values published in the ANNA ASB database.
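
Two simple CFI controls follow from that experience: flag codes that look defaulted (mostly ‘X’ attributes) and compare the reported code with a reference snapshot. In the sketch below the reference data is stubbed out as a small dictionary with hypothetical entries; in practice it would be your security master or licensed ANNA data.

```python
# Sketch of two CFI code controls: flag defaulted-looking codes and compare against
# a reference snapshot. REFERENCE_CFI is a hypothetical stub standing in for a
# security master or licensed ANNA data.

REFERENCE_CFI = {
    "US0378331005": "ESVUFR",   # illustrative equity CFI
    "XS0000000009": "DBFTFB",   # illustrative bond CFI
}

def review_cfi(isin: str, reported_cfi: str) -> list:
    """Return human-readable findings for one reported CFI code."""
    findings = []
    if len(reported_cfi) != 6 or not reported_cfi.isalpha():
        findings.append("not a 6-letter CFI code")
    elif reported_cfi.count("X") >= 4:
        findings.append("looks defaulted (mostly 'X' attributes)")
    reference = REFERENCE_CFI.get(isin)
    if reference and reported_cfi != reference:
        findings.append(f"differs from reference value {reference}")
    return findings

print(review_cfi("US0378331005", "EXXXXX"))
# -> two findings: a defaulted-looking code that also differs from the reference value
```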

The final two fields mentioned, floating rate reset frequency – multiplier and floating rate reference period – time period, are, in our experience, both affected by default values (particularly one) and by inconsistent inputs that make no economic or standard market practice sense.
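
One heuristic we might apply here – and it is only a heuristic, with the day-count mapping and tolerance below being our own assumptions – is to convert both fields to days and flag large mismatches, such as a three-month reference rate reported as resetting daily, alongside obviously defaulted multipliers.

```python
# Heuristic sketch: convert the reference period and reset frequency to days and
# flag large mismatches or non-positive multipliers. The day-count mapping and
# the 0.5x-2x tolerance are illustrative assumptions, not market rules.

DAYS_PER_UNIT = {"DAYS": 1, "WEEK": 7, "MNTH": 30, "YEAR": 365}

def review_floating_rate(ref_period: str, ref_mult: int,
                         reset_period: str, reset_mult: int) -> list:
    findings = []
    if ref_mult <= 0 or reset_mult <= 0:
        return ["non-positive multiplier (possible default value)"]
    ref_days = DAYS_PER_UNIT[ref_period] * ref_mult
    reset_days = DAYS_PER_UNIT[reset_period] * reset_mult
    # A term benchmark resetting far more (or less) often than its tenor looks odd.
    if ref_days > 1 and not 0.5 <= reset_days / ref_days <= 2:
        findings.append(f"reset every {reset_days}d vs a {ref_days}d reference period")
    return findings

# Hypothetical example: a 3-month reference rate reported with a 1-day reset frequency.
print(review_floating_rate("MNTH", 3, "DAYS", 1))
```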

Fit for purpose?

Sadly, the conceptual issue remains: the vast majority of incorrect reports pass the validation rules – the classic ‘valid but wrong’ problem. SFTR is crying out for greater regulatory certainty and precisely defined fields with limited acceptable parameters. The doctrine that “counterparties should agree amongst themselves” simply does not work. A regime with a litany of errors and an extraordinary operational burden is simply not fit for purpose.

  • Read the Data Quality full report on ESMA’s website
  • Read our insights from our regulatory experts on the EMIR and MiFIR aspects of the report
  • For a free healthcheck of your SFTR reporting or a conversation with Jonathan or another regulatory expert, please contact us.