Mastering Master Data?

Hi Fellow Architects,


Master data has almost become a boring topic these days, but I still find that many organizations have either not yet harnessed the power of their master data or have not really turned the corner on the "Master Data Project," which is now becoming an unwieldy, expensive, never-ending effort.

I was wondering if any of you have success stories in this space and have been able to come up with some simple best practices on topics related to master data, such as the following:

a) How do you determine what information is valuable enough to master, and what is not?

b) How do you determine whether the master data continues to be of relevance to the enterprise?

c) How has the profile of master data changed in the world of social networking and similar channels?


Your feedback is always welcome!
--
Best Regards,

surekha -

Comments

  1. Master data boring? Never! Sure, it looks like boring, static data to the casual observer, but like tectonic plates that shift very slowly underground and only rarely, yet catastrophically, produce volcanoes, earthquakes, and tsunamis, a calm surface often hides instability with potentially huge consequences if it is not addressed head on.

    Valuable data worth the cost and complexity of mastering fits into one of two categories: broadly used data that analysts complain about frequently because of the pain it causes, or data so obvious and well known to leadership as part of the information landscape that they take it for granted and, whatever state the data is in, good or bad, never bother trying to improve it even when they should. The first type, painful data, requires discovery sessions to find out where information consumers regularly spend their time on inefficient master data copying, pasting, sorting, merging, cleansing, and synchronizing via desktop tools such as Excel and Access. The second type, obvious data, is by definition known at the highest levels of the organization, but conversations about it tend to occur at a similarly high level of abstraction (e.g., "Customer" as opposed to a specific Customer Identifier or Classification), so drill-down is required to get at what the valuable data actually is and what issues are associated with it. A rough way to triage the painful type is sketched below.
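
    To make that triage concrete, here is a minimal sketch in Python, assuming rough counts have already been collected from discovery sessions. Every name, field, and weight below is a hypothetical illustration, not a standard; the point is simply to rank candidates by breadth of use and recurring rework before committing to master them.

    ```python
    # Hypothetical sketch of the "painful data" triage described above.
    # The attribute names, inputs, and weights are illustrative assumptions,
    # not a standard: the idea is simply to rank candidate elements by
    # breadth of use and by the manual rework they cause.
    from dataclasses import dataclass

    @dataclass
    class Candidate:
        name: str                 # e.g. "customer_id", "product_hierarchy"
        consuming_systems: int    # systems/reports that reference it
        weekly_manual_fixes: int  # copy/paste/cleanse incidents per week
        complaints_last_qtr: int  # data-quality tickets or analyst complaints

    def pain_score(c: Candidate) -> float:
        # Weight breadth of use and recurring rework more heavily than
        # one-off complaints; tune the weights to your own discovery data.
        return (3.0 * c.consuming_systems
                + 2.0 * c.weekly_manual_fixes
                + 1.0 * c.complaints_last_qtr)

    candidates = [
        Candidate("customer_id", 14, 6, 9),
        Candidate("product_hierarchy", 8, 11, 4),
        Candidate("cost_center_code", 3, 1, 0),
    ]

    # The highest-scoring elements are the strongest mastering candidates.
    for c in sorted(candidates, key=pain_score, reverse=True):
        print(f"{c.name:20s} pain score = {pain_score(c):5.1f}")
    ```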

    Determining ongoing relevance is not an objective measure. Just because a particular code or hierarchy is used in many places doesn't mean anyone cares about it; users may even go out of their way to avoid it. I have seen master data attached to newly developed reports only because that data was on legacy reports and the decision makers didn't want to risk doing harm by removing something they couldn't be sure was no longer in use. Even worse, I've seen a case where a particular code was used for summarization in a report, and users in more than one group spent significant, ongoing manual effort breaking the detail back out into an alternate summarization more useful to them. The best way to measure ongoing relevance is to find the data with formal or informal governance processes around it that manage change and notify the appropriate parties of impact, or conversely, to find the data where the most trouble keeps popping up: if people didn't care, they wouldn't complain. In these areas, if there is no formal governance in place, there should be; that is what will keep your master data both valuable and relevant over time. One simple starting point for the usage side of this question is sketched below.
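
    As one hypothetical starting point (not a substitute for the governance conversation above), you can scan report definitions for references to each master data element. The directory layout, file format, and element names below are assumptions for illustration; an element no report references is not automatically dead, but it is a prompt to start asking questions.

    ```python
    # Hypothetical sketch for the usage side of the relevance question:
    # scan a folder of report definitions (assumed here to be SQL files
    # under ./reports) for references to each master data element.
    # Reference counts alone do not prove relevance, as noted above,
    # but they tell you where to start asking.
    import re
    from pathlib import Path

    ELEMENTS = ["customer_segment_cd", "region_hierarchy", "legacy_dept_code"]
    REPORT_DIR = Path("reports")  # assumed location of report definitions

    def count_references(element: str) -> int:
        # Whole-word, case-insensitive match of the element name.
        pattern = re.compile(rf"\b{re.escape(element)}\b", re.IGNORECASE)
        return sum(1 for sql in REPORT_DIR.glob("**/*.sql")
                   if pattern.search(sql.read_text(errors="ignore")))

    for element in ELEMENTS:
        print(f"{element:25s} referenced by {count_references(element)} report(s)")
    ```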
