EMIR level 2 validations – are you ready?

At the end of October ESMA is introducing new validations to be applied by Trade Repositories (TRs) which I think will be a good thing for reporting quality.  In this article I provide some context to the changes, but contend that validation on its own is not enough to deliver quality reporting.  Validations cannot identify records that are valid but wrong.  For that we need a much smarter approach.

ESMA and national competent authorities have expressed concerns about the quality of reporting today, and they expect firms to address this. To promote quality, ESMA has taken its first step towards improvement with the introduction of additional validations.

Some background

ESMA originally required the TRs to apply very limited validations to EMIR reports. The idea was that the TRs would not reject records because of data quality problems, so submissions would not be delayed by issues with one or more data fields within the report. ESMA and other regulators would therefore have access to whatever data was available, irrespective of quality.

This approach makes more sense the more data is submitted with each record, because under strict validation a single error in one field would prevent all of the remaining data from being received by the TR. However, it requires the reporting entity to have a strong control framework in place to ensure that errors are self-identified and rectified.

The limited validation approach initially adopted by ESMA differs markedly from that taken for Transaction Reporting. On launch of the Approved Reporting Mechanisms (ARMs) regime in November 2007, the FSA (now the FCA) specified a detailed set of strict validations that had to be in place before an ARM would receive authorisation. The ARMs apply these validations to all records submitted by reporting firms; if a record fails a validation, it is rejected by the ARM and the firm notified. I am calling this approach strict validation. Many firms apply their own validations prior to submitting their records to an ARM, and the FCA also applies strict validation at the point of receipt, creating multiple layers of strict validation, the vast majority of which are duplicative. As a result, we see very low industry rejection rates for reports sent by ARMs to the FCA.
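The contrast between the two approaches can be sketched in a short, hypothetical example. The field names and rules below are purely illustrative, not the actual EMIR or ARM validation rules: under strict validation any field failure rejects the whole record, while under limited validation the record is accepted and the quality issues travel with it.

```python
from datetime import date

# Illustrative field-level checks -- NOT the real EMIR/ARM rule set.
VALIDATIONS = {
    "trade_id": lambda v: bool(v) and len(str(v)) <= 52,
    "notional": lambda v: isinstance(v, (int, float)) and v > 0,
    "execution_date": lambda v: isinstance(v, date) and v <= date.today(),
}

def validate(record):
    """Return a list of field-level errors for a record."""
    errors = []
    for field, check in VALIDATIONS.items():
        value = record.get(field)
        if not check(value):
            errors.append(f"{field}: failed validation (value={value!r})")
    return errors

def strict_submit(record):
    """Strict validation: reject the whole record on any field error."""
    errors = validate(record)
    if errors:
        return ("REJECTED", errors)  # the firm is notified and must fix and resubmit
    return ("ACCEPTED", [])

def limited_submit(record):
    """Limited validation: accept the record despite field errors,
    so the data is available to regulators irrespective of quality."""
    return ("ACCEPTED", validate(record))

bad_record = {"trade_id": "T1", "notional": -5, "execution_date": date(2014, 9, 1)}
print(strict_submit(bad_record)[0])   # REJECTED
print(limited_submit(bad_record)[0])  # ACCEPTED (quality issues noted, not blocking)
```

The sketch shows why strict validation shifts the burden back to the submitter: a rejected record forces immediate remediation, whereas an accepted-but-wrong record relies entirely on the firm's own control framework to be caught.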

Benefits of strict validation

The primary benefit of strict validation is that the submitting firm is left ‘holding the baby’: the firm knows when its submission has failed and that it will be in breach of the timeliness obligations. It is in effect forced to act to remedy the problem. The reporting entity does not need a good control framework in place to know that it has a problem that needs fixing.

ESMA has now undertaken a volte-face and requires the TRs to apply strict validations. Under the Level 2 validations, nearly every field is subject to some sort of validation requirement.

The change in validation approach clearly introduces new challenges and costs for firms as well as the TRs. These validations would have been better implemented before EMIR go-live, but it has taken the experience of poor-quality data to crystallise this decision.

So what should firms be doing to prepare?

There is the immediate problem of ensuring that you will be able to continue to report after October. That means assessing the impact of the new validations on existing positions and correcting in advance any errors that would cause updates to those positions to be rejected. Similarly, firms should consider whether their process for dealing with rejections can handle an increase in volumes, particularly in the early period.

More than that, firms should be looking at how to identify records that are valid but wrong. This requires testing the EMIR reports held at the Trade Repository and reviewing the adequacy of the control framework over EMIR reporting more generally. Errors can be introduced within the reporting process or can result from an incorrect interpretation of the reporting requirements, so care must be taken to ensure that the testing does not build in the same interpretation errors.

More can be said on testing but I will leave that for another post. If you have any views on the above I would be keen to hear them.