Secrets of Operational MDM – Part 1: Choosing System Behaviors

Ok, I’m probably a bit crazy, but despite all the angst, minutiae and roadblocks I really enjoy the work of getting the most out of data. And while a lot of focus is currently on analytics, implementing Master Data Management (MDM) to improve operational systems is a very useful first step. It yields near-term benefits for day-to-day operations AND improves downstream analytics.

In fact I enjoy this enough that I decided to write a three-article series on Operational MDM. In each article I will discuss a key architectural concept that makes implementations more successful. They are:

  1. Categorizing systems as consumers, creators and managers of data entities
  2. Understanding how each should contribute to the mastered entity
  3. Determining at what level consumers should integrate with the MDM system

Background

A very simplified example: if you buy something in a store that has a loyalty program, you walk up to a cash register, hand someone something to buy, and start a wonderful set of processes. They need to figure out who you are for the payment system and for the loyalty system. The inventory control system wants to know the product, as does the loyalty system (to figure out what to offer you in the future). All of these systems maintain transactional information, and most have independent ways for the customer and product information to be managed.

Each of these systems provides a vital business function and is likely doing it just fine today. But adding new functionality, or understanding all the transactions associated with an entity across these systems, is often tedious and rife with errors. This is where Operational MDM helps. It needs to ensure that each system’s version of an entity can be harmonized, while being flexible enough to minimize disruption to those systems. And, oh by the way, it needs to allow that other legacy system from next month’s merger to be on-boarded…

The first step is to look at all the systems that touch the master data and determine what they do with it. The main behaviors are:

  • Consumers – Systems that need to know an entity (e.g. customer, product, …) exists so they can identify it and capture transactional information about it
  • Managers – Systems that allow the information about an entity to be updated
  • Creators – Systems that collect information about an entity that is currently unknown and then create the entity so that future transactions can reference it

These behaviors are not mutually exclusive; for example, it is pretty common for a system that creates an entity to also manage and consume it. Also capture the behaviors a system has today and consider what they should be after implementing MDM. Often, as part of designing and building an Operational MDM system, it is desirable to start moving systems that create, manage and consume entity data toward being primarily consumers.

For Managers and Creators of entities it is also important to note which specific types or subtypes of entities they operate on and which process step triggers them to do so. This becomes critical information when trying to improve the workflow and operational effectiveness. By clearly categorizing the integrated systems by how they interact with master data, you will be prepared to make best-practice integration decisions about how they will find and use entity data from your MDM implementation.
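To make this concrete, here is a minimal sketch in Python of how that inventory of systems and their behaviors might be captured. The names (Behavior, SystemProfile), the fields, and the loyalty-system entry are hypothetical assumptions for illustration, not a prescribed schema or any particular product’s API.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Behavior(Enum):
    """How a system interacts with a mastered entity."""
    CONSUMER = auto()   # needs to know an entity exists and records transactions against it
    MANAGER = auto()    # updates information about an existing entity
    CREATOR = auto()    # creates the entity when it is not yet known


@dataclass
class SystemProfile:
    """One integrated system's relationship to the master data (illustrative only)."""
    name: str
    entity_types: set[str]              # e.g. {"customer", "product"}
    current_behaviors: set[Behavior]    # what the system does today
    target_behaviors: set[Behavior]     # what it should do once MDM is in place
    trigger_process: str = ""           # process step that causes it to create/manage entities


# Hypothetical entry for the store scenario above: the loyalty system creates,
# manages and consumes customer data today, but the target state is to make it
# primarily a consumer of MDM-mastered customers.
loyalty = SystemProfile(
    name="Loyalty",
    entity_types={"customer", "product"},
    current_behaviors={Behavior.CREATOR, Behavior.MANAGER, Behavior.CONSUMER},
    target_behaviors={Behavior.CONSUMER},
    trigger_process="customer enrollment at the register",
)
```

Even a lightweight catalog like this makes it easy to spot which systems are meant to shift toward being pure consumers and which will still need to push creates and updates into the MDM hub.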

In the next posting I will focus on how each of these types of systems needs to contribute to the MDM.

SaaS 2.0: time for an Uncloud™ approach

SaaS has been around for a while, so now it’s important to start thinking about SaaS 2.0. SaaS 1.0 allows applications to be run and maintained by a SaaS provider in a remote data center on behalf of your organization. This gives your business users the agility to quickly obtain the benefits of new software packages while reducing the need for your IT organization to build and maintain the expertise required to manage those applications.

However, as organizations change, grow rapidly, merge, or divest, they require even more agility. They need to be able to seamlessly move data in and out of, and integrate across, these SaaS applications. This is exacerbated by the number of different SaaS solutions organizations adopt. Depending on data movement, privacy requirements, and the IT skills of an organization, where the actual software is deployed matters. Often it can make more sense to deploy inside your own network, where you have more bandwidth and lower latency.

Currently there are advocates for a hybrid cloud approach, which marks the beginning of this shift. The hybrid approach is really about combining traditional on-premises and cloud solutions. However, by using available automation for deploying and maintaining systems, along with containers like Docker, SaaS vendors can maintain their software within your network. This is the start of an Uncloud™ approach that I see defining SaaS 2.0 going forward. With an Uncloud™ approach, SaaS 2.0 vendors will maintain and manage applications that may be in the cloud or in your data center, and applications can be migrated freely between the two locations as needed.

With an Uncloud™ approach you gain the advantage of having your different SaaS products on the same network, enhancing connectivity while still letting vendors manage the software. It allows you to decide which SaaS vendors to use and, separately, decide where to keep your data.

Top ten truths about data projects

 


#10
Money is like data: if you invest it, manage it and protect it well, it can pay off immensely. But do any of those poorly and you’ll regret it.

#9
Development methodologies keep changing… mostly in name.

#8
The only thing more expensive than free software is free software implemented by the lowest bidder.

#7
Master Data Management is a transitional state until you get to the fully integrated environment… And once you’re there you’ll need to add another system.

#6
Big data is incredibly valuable, unless someone forgot to govern it.

#5
Agile is great, but knowing your real requirements is better.

#4
If data governance is painful, too slow or too costly, it’s being done wrong.

#3
Choosing the lowest-cost integrator is like choosing the cheapest plumber… Once they’re done it looks great!! The flood comes later…

#2
Data is great, but like a teenager it has a tendency to just sit there; it only really becomes useful once it’s finally in motion.

#1
Business logic and data handling are like two parts of epoxy; once they’re mixed you are stuck for a long time.