Business Data Management – Finding Our Nirvana

Manual data collection is proving to be a painful thorn in the side of financial services firms seeking to implement enterprise data management (EDM) strategies. Dependence on manually collected data could scupper drives towards enterprise data management across the financial services industry.

While data projects remain high on the industry's agenda as companies continue to centralise data functions and manage costs, many are struggling to deal with the sheer volume of manually gathered data that flows through their operations.

The financial services industry has dedicated considerable thought and resources to achieving the nirvana of a single, complete and correct version of a core dataset. What is seldom discussed is how to deal with the exceptions and the non-vendor-based content. However, there is an increasing awareness of the risks related to manual data collection and its contribution to valuation errors, missed deadlines, overstretched resources, scalability constraints and operational risk.

The changing regulatory environment and international accounting standards are adding further to the need for greater transparency.

Eradicating manual data collection can help resolve all these issues simultaneously. The biggest challenges lie with illiquid fixed income and over-the-counter (OTC) derivatives, where the structured nature of the assets makes data less clear and therefore harder to gather. But organisations also struggle to capture complete and accurate information for instruments such as American depository receipts and contracts for difference, where an underlying security can add confusion, as well as all variants of funds. Even mainstream activities such as unit trust pricing can prove troublesome. These challenges are not limited to pricing data but extend to income and capital events as well as asset identification and static data.

There are numerous technologies on the market that can help organisations build their own data management platforms. These can then add value by managing the bulk collection, storage and processing of readily available data.

Building a process for capturing and processing existing data feeds can improve clarity and transparency. Nevertheless, this is far from an all-encompassing data initiative. Many organisations can achieve good levels of automated processing for the majority of their data, but manually gathered data often remains untouched by the EDM strategy.

If manual data that is input to a central data platform is not subjected to the same stringent routines as readily available data, it will create background noise and confusion. If multiple sources are used to confirm listed content but only a single manually input entry exists for other datasets, consistency can never be achieved.
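As a minimal sketch of what those "stringent routines" might look like in practice (the field names, tolerance and source labels below are illustrative assumptions, not any particular firm's rules), a price could be accepted only when enough independent sources agree, while a lone manually keyed entry is flagged for review rather than silently trusted:

```python
from statistics import median

# Hypothetical tolerance: quotes within 0.5% of the median "agree".
TOLERANCE = 0.005

def validate_price(instrument_id: str, quotes: dict[str, float]) -> dict:
    """Cross-check one instrument's price across all available sources.

    `quotes` maps source name -> quoted price. A single manual entry
    cannot be independently confirmed, so it is flagged, not passed
    through as if it had been validated.
    """
    if len(quotes) < 2:
        return {"id": instrument_id, "status": "FLAG_MANUAL_REVIEW",
                "reason": "single source, no independent confirmation"}

    mid = median(quotes.values())
    outliers = {src: px for src, px in quotes.items()
                if abs(px - mid) / mid > TOLERANCE}
    if outliers:
        return {"id": instrument_id, "status": "FLAG_EXCEPTION",
                "reason": f"sources disagree: {outliers}"}
    return {"id": instrument_id, "status": "OK", "price": mid}

# Listed content with multiple feeds validates cleanly...
print(validate_price("XS0000001", {"vendor_a": 101.2, "vendor_b": 101.3}))
# ...while a lone manual entry is flagged, not trusted.
print(validate_price("OTC-SWAP-42", {"manual_entry": 99.0}))
```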

Building solutions

It is widely acknowledged that an entity's overall performance is limited by its weakest component. In data management terms, that is the human element. Manual data collection will still exist even once all of the readily available content has been automated, and will continue to cause problems and chip away at quality, costs, resources, management and reputations.

The challenge facing the industry is finding a way to collect, database, normalise, reconcile and validate all data, even the labour-intensive and risk-prone manual data. The core principle of automating data collection is unquestionably the correct approach.
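Those five verbs suggest a pipeline shape. The following Python sketch is purely illustrative (the stage names, record fields and reconciliation rule are assumptions, not a product design); the point is that manual data can be pushed through exactly the same controlled sequence as feed data:

```python
def collect(raw_sources):
    """Stage 1: gather raw records from feeds, emails, spreadsheets, etc."""
    for source_name, records in raw_sources.items():
        for rec in records:
            yield {"source": source_name, **rec}

def normalise(record):
    """Stages 2-3: database and coerce into one canonical schema."""
    return {"id": record["id"].strip().upper(),
            "price": float(record["price"]),
            "source": record["source"]}

def reconcile_and_validate(records):
    """Stages 4-5: group by instrument and confirm sources agree."""
    by_id: dict[str, list[dict]] = {}
    for rec in records:
        by_id.setdefault(rec["id"], []).append(rec)
    for inst_id, recs in by_id.items():
        prices = {r["price"] for r in recs}
        status = "OK" if len(recs) > 1 and len(prices) == 1 else "EXCEPTION"
        yield inst_id, status, recs

raw = {"vendor_feed": [{"id": "abc1 ", "price": "100.5"}],
       "manual_sheet": [{"id": "ABC1", "price": "100.5"}]}
for inst, status, recs in reconcile_and_validate(
        normalise(r) for r in collect(raw)):
    print(inst, status, len(recs), "sources")
```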

Automation is the key to removing the weakest link in the data management chain – human error. Computers don't care whether they are performing mundane tasks, nor do they mind, as senior machines, being given junior work. Credit crunch worries or deciding what they want for lunch fail to distract computer systems.

Add complexity to these mundane jobs – having to collect data from numerous sources via emails, websites, extranets, terminals and internal departments (whose primary function is not to provide data to other teams) and, ultimately, having to contact somebody at another organisation – and it is easy to see why data management is such a complex, labour-intensive and fragmented process.

The data, once gathered, typically resides in an array of spreadsheets, with colour coding and bold fonts to stop items falling through the cracks, and locked cells to stop one user deleting another user's macros – all without audit trails to help unravel queries or problems.

In this scenario, it is easy to see why mistakes happen so frequently. This is a very common situation, and it is the starting point for defining a potential data solution. The data is generally available somewhere, just not in an easily accessible form. It could be embedded in an encrypted PDF, a Word document, a spreadsheet, the text of an email sent to a distribution list, or on a website or extranet, among many other places.
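One sketch of how such scattered sources might be brought under one roof (the parser names, file extensions and stub bodies are invented for illustration): a simple dispatcher routes each incoming document to a format-specific extractor, so that every source feeds the same normalised record stream, and anything unrecognised goes to an exception queue rather than a private spreadsheet:

```python
from pathlib import Path

def parse_spreadsheet(path):  # in practice, e.g. csv or openpyxl
    return [{"source": str(path), "kind": "spreadsheet"}]

def parse_pdf(path):          # in practice, a PDF text-extraction library
    return [{"source": str(path), "kind": "pdf"}]

def parse_email_body(path):   # in practice, text from a distribution list
    return [{"source": str(path), "kind": "email"}]

# One registry, many formats: supporting a new source type is one
# entry here, not another uncoordinated mini data management solution.
PARSERS = {".xlsx": parse_spreadsheet, ".csv": parse_spreadsheet,
           ".pdf": parse_pdf, ".eml": parse_email_body}

def extract(path: Path) -> list[dict]:
    parser = PARSERS.get(path.suffix.lower())
    if parser is None:
        raise ValueError(f"no parser registered for {path.suffix!r}: "
                         "route to manual exception queue")
    return parser(path)

print(extract(Path("prices_2024.csv")))
```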

This problem is unlikely to go away. The ever-expanding range of instruments will continue to pose difficulties. The advent of alternative trading systems and the growth of off-exchange and algorithmic trading mean liquidity has been forced into less transparent pockets. Collecting the pricing and reference data for these instruments will continue to be challenging.

Clearly, a defined strategy to tackle manual data collection is necessary. Technology alone cannot solve all the issues in this area. A rigid, purely technology-focused approach can lead to data silos, with individual successes and gains that cannot be reused elsewhere, potentially resulting in a proliferation of uncoordinated mini data management solutions.

Flexible automation

Combining a flexible model with process-controlled applications is critical to successful data management initiatives. Furthermore, spreadsheets alone are not the solution. Databasing the content is the logical starting point, allowing it to be maintained via controlled processes with complete audit trails.
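A minimal illustration of "databasing with complete audit trails", using Python's built-in sqlite3 module (the table and column names are invented for the example): every change to the current value is also appended, with user and timestamp, to an audit table that is never updated in place – exactly what a locked-cell spreadsheet cannot offer.

```python
import sqlite3
from datetime import datetime, timezone

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE prices (instrument_id TEXT PRIMARY KEY, price REAL);
    CREATE TABLE audit_log (instrument_id TEXT, old_price REAL,
                            new_price REAL, changed_by TEXT, changed_at TEXT);
""")

def set_price(instrument_id: str, price: float, user: str) -> None:
    """Upsert the current price and append an immutable audit record."""
    row = db.execute("SELECT price FROM prices WHERE instrument_id = ?",
                     (instrument_id,)).fetchone()
    old_price = row[0] if row else None
    db.execute("INSERT INTO prices VALUES (?, ?) "
               "ON CONFLICT(instrument_id) DO UPDATE SET price = excluded.price",
               (instrument_id, price))
    db.execute("INSERT INTO audit_log VALUES (?, ?, ?, ?, ?)",
               (instrument_id, old_price, price, user,
                datetime.now(timezone.utc).isoformat()))
    db.commit()

set_price("ABC1", 100.5, "analyst_a")
set_price("ABC1", 100.7, "analyst_b")   # a correction, with history kept
for row in db.execute("SELECT * FROM audit_log"):
    print(row)  # who changed what, from what value, and when
```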

A small gain in a single area is all well and good, but if it adds problems either upstream or downstream, no overall gain is made. The objective is to automate every possible step, but flexibility should be maintained so that an automated process can be adjusted to accommodate change and evolve as similar needs arise elsewhere in the overall process. Flexibility in individual solutions allows successes to be combined, leading to marked enterprise-wide gains.

This type of solution is not easy to build and requires bespoke systems, skills and maintenance, but to make significant gains and provide scalable solutions that can evolve as business requirements change, flexible automation is a must. With the ability to reuse the framework or template from each individual success for other similar challenges, it is clear how versatile technology-based solutions can contribute to executing a high-level data management strategy most efficiently.

Nevertheless, exceptions do and will continue to occur, and when they do, experienced staff will be required to resolve them. Freed from the shackles and monotony of actually collecting data, senior administration staff and analysts can concentrate more clearly on applying their knowledge and experience to the situations that call for it.

So is the ultimate goal of fully integrated, enterprise-wide data management initiatives actually achievable? Perhaps it is nirvana. It will certainly remain out of reach without a method for handling manual data. Our years of experience delivering solutions for the myriad challenges that clients and prospects face in the highly pressured valuation environment show that very few organisations fully understand the principles needed to provide such solutions. Focused and skilled specialist suppliers of data validation services must develop tailored processes to automate 'manual data' capture and validation. Experience tells us that there are no quick fixes.
