March 18, 2018

The natural beauty of organic convergence

... in an agile world.


I am a big proponent of the concept of 'Organic Convergence'. It is something I came up with myself, and it starts from the assumption that within a social group, norms and values become a standard, a common ground. The degree of interaction between the members of the group determines the speed with which this happens and the benefits that can be reaped from it. More interaction means faster and more valuable convergence. From this, I foresee that when different teams work together, forming a social group, uniformity will result. With the added benefit that the result is broadly supported rather than imposed.

The world is organic and evolutionary. At times it is revolutionary, and then we are dealing with disruption. Disruption is beautiful because it turns you into a market leader and increases your chances of survival. That is, of course, only when you are the disrupting force.

So where is this coming from? Well, it's coming from a discussion I had several times in the past weeks about Canonical Data Models (CDM), Enterprise Resource Models (ERM) and the centralisation of governance in an organisation that has just decided to execute a full-blown agile transformation over the coming years.

This organisation's legacy is, like that of so many other organisations with a challenging bureaucracy, one of centralised capabilities (see my view on this in Perish or Survive, or being Efficient vs being Effective, and a duo of posts on Competence Centres in huge organisations here and here) and a fairly directive management style. As in many organisations, this one has come a long way, but the prevailing cultural view is still that teams are neither autonomous nor self-proficient, and that Enterprise or Domain Architects are to define standards across the enterprise. Mind that these standards are not about enterprise-level topics like principles, frameworks, etc. No, in many cases the standards prescribe what technology to use, what architectural framework to use, or what model to implement. Down to the level of programming language and style.

Some context on the CDM

In this section I elaborate on the concept of a CDM and why it is a remnant of centralisation in IT organisations. You can skip to the next section if you like.
The discussions I referred to at the beginning of this post were about whether an enterprise-wide model should be enforced top-down, or should emerge bottom-up. The discussion was about the CDM (Canonical Data Model) and the ERM (Enterprise Resource Model). Now, the thing here is that the 'M' in the naming is not helping. We're not talking about just a model, but also its implementation. The CDM is a remnant from the days in which Enterprise Application Integration (EAI) was something new. In the late 1990s and early 2000s, integrating software systems, and thus creating synergy, was key in many IT organisations. Different systems used different information models. What one system considered a Customer would differ from another system's definition. Typically, across business domains these differences were huge. Then there is the problem where two departments in an organisation have different names for the same concept. Understandably, a CDM is going to be extremely helpful here. In many ways, a CDM resembles the synthetic language Esperanto.
But unlike Esperanto, the CDM is a model. It's a definition and not an implementation. It becomes all the more worrisome once the CDM gets implemented as one single implementation. The problem here is that definition and implementation suddenly become one. This is a problem because implementations require technical specifications like data types and formats, which diverts the discussion from semantics towards syntax. In other words, the CDM becomes an IT concern instead of a business concern.
Typically the CDM gets implemented in a centralised part of an architecture: the main database, the ERP system, or as part of the ESB. And historically this sort of made sense. When the CDM was invented, it also had to be implemented, preferably in a single location. Considering the focus on efficiency in IT, centralisation was paramount. Definition and implementation would go hand in hand, developed in parallel. Because of the challenges at hand, mainly integration challenges, the CDM evolved into the 'language' on the ESB. Everything on the bus would speak the same language; when needed, a connector would translate from and to the common language.
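To make the 'common language on the bus' idea concrete, here is a minimal sketch of what such connectors do. Everything in it is illustrative: the canonical Customer shape, the two source systems, and all field names are made up for the example, not taken from any real CDM.

```python
from dataclasses import dataclass

# A hypothetical canonical Customer as spoken on the bus.
@dataclass
class CanonicalCustomer:
    customer_id: str
    full_name: str
    email: str

# Connector for a (hypothetical) CRM system whose records use
# different field names and types than the canonical model.
def from_crm(record: dict) -> CanonicalCustomer:
    return CanonicalCustomer(
        customer_id=str(record["crm_id"]),
        full_name=f'{record["first"]} {record["last"]}',
        email=record["mail"],
    )

# Connector for a (hypothetical) billing system with yet another shape.
def from_billing(record: dict) -> CanonicalCustomer:
    return CanonicalCustomer(
        customer_id=record["account_no"],
        full_name=record["name"],
        email=record["contact_email"],
    )

crm_record = {"crm_id": 42, "first": "Ada", "last": "Lovelace", "mail": "ada@example.com"}
billing_record = {"account_no": "42", "name": "Ada Lovelace", "contact_email": "ada@example.com"}

# Once translated to the canonical form, records from either system can
# be compared and routed uniformly, regardless of where they came from.
assert from_crm(crm_record) == from_billing(billing_record)
```

The point of the sketch is the shape of the solution: each connector knows only its own system and the canonical model, so adding a new system means one new translation, not one per existing system.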

In those early days of EAI, the challenge was mainly integrating the organisation's own systems, all within the organisation itself. Considering that context, a CDM made perfect sense, and I would argue that, given the peculiarities of these environments, having a CDM and using it as the common language on the bus is the right choice. Think of the CDM not as Esperanto, but as Hindi in India. Within a very neatly scoped context, it was a common language.
On many occasions I have struggled with inconsistencies in the information models of different enterprise systems, and a CDM would have been, or was, the best solution. In my experience, the best way to deal with a CDM is on the ESB. Selecting the ESB as the integration model of choice allows a CDM to be leveraged best in implementing the integration.
Add to the mix a centralised department that governs the CDM, preferably staffed with the organisation's recognised authorities, like domain architects and enterprise architects, and you're in for a treat. Mind that a CDM needs tight governance and is high maintenance, so centralised governance allows for relatively low costs and huge gains. Even in environments where the number of systems to be integrated is small or even tiny, as in the 1990s and the first five years of the 2000s, it is important to have just a single definition of each item in the model.

The Directive Culture

If you assume that you know what is necessary from a business-concept perspective, then it is obvious to centralise and decide for yourself what to do next and how. This is how CDMs traditionally came about, and currently that translates into Enterprise Resource Models. You'll find a group of (self-proclaimed) experts that define an enterprise-wide business concept model (ERM) and by decree require everybody to use this model for all interoperability between business applications. Within the context of a small business scope, like a business unit, this is sometimes doable...

However, in today's complex organisational environments this is no longer feasible. The complexity, also in vocabulary, of cross-domain businesses and the movement towards working on ever shorter-term goals with ever shorter cycle times make this approach practically unfeasible. For the simple reason that centralisation automatically introduces a bottleneck: everything has to go through the same funnel in the organisation. More teams invoking this centralised department more often generate a higher burden, which can only be solved by organisational scaling-up, i.e. more people. And that is costly, if possible at all. Automation is not a solution either, because this central department has to invent and deliver new things. Think of it as a team of linguists inventing a new language, forcing everybody to use that language, but not teaching them to speak it. And at the same time, requiring this team to come up with translations of new words from a multitude of languages into that new language. It's just not doable.

It gets worse when this centralised team is expected to not only come up with translations, but to actually invent new concepts. And keep in mind that it is often these teams themselves that have this expectation of being able to prescribe the ERM and/or CDM.

The answer to this problem is to decentralise the task. That is not unheard of; crowd-sourcing has been the solution to organisational scalability issues for years. The central role then becomes one of governance. This requires two major efforts.

  1. Clear definition of what the rules are. 
  2. Clear processes to maintain the rules.

Mind that both are critical for success and extremely hard to accomplish. Granted, defining a workable set of rules to live by is hard in itself and requires not only the proper mindset and experience, but also intrinsic authority. But establishing a workable process to maintain these rules, keeping them relevant and ensuring people actually act accordingly, requires quite a bit of perseverance.

The Meta Problem

The interesting thing is that this is almost a meta problem. With the introduction of another centralised function, there is again a bottleneck, and scaling-up will again become a problem. However, now there can be some automation. After all, good rules are ideal candidates for automated compliance verification. And once the work is automated, nothing stands in the way of decentralising its execution, bringing it closer to the source. From the LEAN perspective, that centralised hand-off is also a specific form of waste to eliminate.
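What 'automated verification of compliance' could look like is easy to sketch. The rules below are invented for illustration; the idea is only that when each rule is an executable check, any team can run the full set locally instead of queueing for a central department.

```python
# A minimal sketch: each governance rule is a named predicate over a
# message, so compliance checking is just running all predicates.
# The rules themselves are made up for this example.
RULES = {
    "has_customer_id": lambda msg: "customer_id" in msg,
    "id_is_string": lambda msg: isinstance(msg.get("customer_id"), str),
    "email_has_at": lambda msg: "@" in msg.get("email", ""),
}

def violations(msg: dict) -> list[str]:
    """Return the names of all rules the message fails."""
    return [name for name, check in RULES.items() if not check(msg)]

good = {"customer_id": "42", "email": "ada@example.com"}
bad = {"customer_id": 42, "email": "nope"}

assert violations(good) == []
assert violations(bad) == ["id_is_string", "email_has_at"]
```

In practice such checks would run in a build pipeline or at the bus boundary, but the division of labour is the point: the central role maintains the rule set, while execution is fully decentralised.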
And this is where it becomes interesting. After all, is it not necessary to check that these automated processes are being carried out? The answer, of course, is a resounding and heartfelt 'Yes'. If we start from a Directive Culture, we will of course centralise that control again. But in a Learning Organisation that is a no-go, because centralisation does not solve the problem and ultimately gives way to decentralisation anyway. Much more valuable for the organisation is a culture in which, on the basis of accountability and responsibility, there is a clear sense of ownership and quality assurance is intrinsic. This is not possible if your starting point is scepticism about people's ability to take ownership, in other words, if you succumb to the tendency to create a controlling authority. The basic assumption must be that quality assurance becomes a common practice, one that can be realised with (intensive) guidance and positive incentives.

The above therefore argues for, and will result in, an environment in which trust predominates, which in turn is a good breeding ground for a safe working environment.

The Decentralised Models

First of all, I want to establish the notion that for an ERM or a CDM to work, we need to remember that these are merely models. There should never be an implementation aspect attached to these models.

For these models to work, they must be broadly supported within the user base. The model will only be adopted by a user when it is applicable in her problem domain. In other words, the model needs to be relevant. And relevance can only be established when there is practicality.
If you think about this, you'll understand that in general there will, initially, not be a single resource model that can be used across the whole organisation, probably not even one that can be used across value-chains. Which of course makes perfect sense, because two separate value-chains implement two separate business products, which in turn address two separate business needs. Hence, different business concepts will be at play.
It would be futile to assume that one can come up, out of the blue, with a resource model that can be applied in both value-chains. Any attempt to do so should be stopped, immediately. Instead, two different models should be created, and a third as well when a third business product is developed.
Meanwhile, the involved architects, both domain and product architects, should start identifying the commonalities and differences across all resource models and formulate a shared model spanning these value-chains. (Read my post on agile architecting to understand how architects provide a huge benefit to organisations.)
This interaction between architects, especially when you join domain and product architects together, will result in a core of related business concepts that form a model on which the organisation defines its business products. Generates business value, if you will. In a large organisation with several business units operating in different verticals, it is not uncommon to have several of these core models. It would be a mistake to combine these into a single model. Doing so would result in an artificially created model that is not applicable outside academic discussions and of no benefit to the organisation.
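The mechanics of finding that shared core can be sketched very simply. The two value-chain models and their fields below are entirely hypothetical; the sketch only shows that the core is what the working models already share, while the differences stay local.

```python
# Illustrative only: two value-chain resource models, each represented
# here as the set of business concepts (fields) it actually uses.
retail_order = {"order_id", "customer_id", "items", "delivery_address"}
wholesale_order = {"order_id", "customer_id", "items", "contract_id", "volume_discount"}

# The shared core emerges from what the models already have in common...
shared_core = retail_order & wholesale_order

# ...while the differences remain local to each value-chain's own model,
# rather than being forced into one artificial enterprise-wide model.
retail_only = retail_order - shared_core
wholesale_only = wholesale_order - shared_core

assert shared_core == {"order_id", "customer_id", "items"}
assert retail_only == {"delivery_address"}
assert wholesale_only == {"contract_id", "volume_discount"}
```

The direction of travel is the essence: the core model is derived from models proven in practice, not prescribed ahead of them.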

Organic Convergence

The model that stems from this, be it a resource model, a data model, or an information model, is one that emerged from practical use. By definition it is therefore applicable and usable. In addition, the model consists of aspects of several models that converged such that commonalities and differences are considered and leveraged to add value.

I call this 'Organic Convergence'.

It is something I came up with during my discussions, and it logically follows from the assumption that within a social group, norms and values become a standard, a common ground. The degree of interaction between the members of the group determines the speed with which this happens and the benefits that can be reaped from it. More interaction means faster and more valuable convergence. From this, I foresee that when different teams work together, forming a social group, uniformity will result. With the provision that the result is broadly supported but not imposed.

Thanks once again for reading my blog. Please don't be reluctant to Tweet about it, put a link on Facebook or recommend this blog to your network on LinkedIn. Heck, send the link of my blog to all your Whatsapp friends and everybody in your contact-list. But if you really want to show your appreciation, drop a comment with your opinion on the topic, your experiences or anything else that is relevant.

