December 27, 2013

The ESB addresses a niche, not mainstream

Other posts in this trilogy:

ESBs don't scale, which shouldn't be a problem because that is by design
The demise of the ESB in a world of webservices

There are many reasons not to have an ESB in your environment. The ESB as we know it has less and less reason to be implemented in today's world of webservices. There's a full post on the demise of the ESB in a world full of webservices.
But this post is about why you should have an ESB and why it does make sense to implement it. Or rather, when it makes sense to implement it.

This is a post in a series about the Enterprise Service Bus, ESB, an acronym so easy to love or to hate. Parts of this post are also found in another post on the demise of the ESB in a world of webservices.

Remember what the ESB was good for?
  1. Data Exchange
  2. Data Transformation
  3. Data Routing
Well, with SOAP being XML-based and arguably very well defined, the Data Transformation is not that relevant anymore. The Data Exchange with SOAP is handled as well: you use HTTP(S) in pretty much all the cases you come across. That leaves us the Data Routing.
So what was the routing about? Well, it was all about having a logical name resolve to a physical location, where the ESB would determine where to send the message based on an address that one can remember, or that is not tied to a physical location. In the world of HTTP(S), which is dominated by the world of TCP/IP, this is actually handled by the DNS (Domain Name System). The DNS can perfectly handle this translation and do the routing. It's what it was designed for. Almost its raison d'etre. Maybe not even almost.
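To make that concrete, here is a minimal sketch of that logical-to-physical resolution done by the DNS, using only Python's standard library (the hostname is just an example):

```python
import socket

# A stable logical name resolves to whatever physical addresses currently
# back it -- the "routing" the ESB used to provide.
def resolve(logical_name: str) -> list[str]:
    """Return the physical IP addresses behind a logical service name."""
    infos = socket.getaddrinfo(logical_name, 443, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})

# Moving a service to other hardware only requires updating the DNS record;
# callers keep using the same logical name.
print(resolve("localhost"))
```

Swap the record behind the name and every caller follows along, with no middleman in the message path.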
So in a world where we do SOAP based communications, there really is no reason to have an ESB.
What do we keep the ESB around for?

You keep the ESB around for many-to-many messaging. You keep it around for those cases where you have one or more senders of (similar) messages and one or more recipients of these messages. Because the ESB is a great message dispatcher, it's the message routing part that kicks in. It's something the DNS can't do. In those cases where you want to send the same message to an unspecified number of recipients, who can register or deregister to receive messages at their leisure, you'll want the ESB, because managing this at the client side, maintaining a list of recipients and sending each message to each of the registered recipients, is bad for performance, susceptible to errors and complicated.
The ESB is great at dispatching the same message to multiple recipients, transforming the data on the fly. So duplicating a production load of messages into a test environment through the ESB, where the ESB transforms the data just in time before sending it to the test systems (e.g. anonymizing the data, translating from one spoken language to another, transposing IDs so that they align with keys in a database, etc.), is a common practice.
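A minimal sketch of this central-dispatcher pub/sub, with an on-the-fly transform per subscriber; the Dispatcher class, topic names and transforms are illustrative, not any ESB product's API:

```python
from collections import defaultdict

class Dispatcher:
    """Central dispatcher: senders publish once, the bus fans out."""
    def __init__(self):
        self._subs = defaultdict(list)   # topic -> [(callback, transform)]

    def subscribe(self, topic, callback, transform=None):
        """Recipients register at their leisure, optionally with an
        on-the-fly transformation of the message."""
        self._subs[topic].append((callback, transform))

    def publish(self, topic, message):
        """The sender fires once; the dispatcher delivers to every
        registered subscriber."""
        for callback, transform in self._subs[topic]:
            callback(transform(message) if transform else message)

# Duplicate a production message into a test environment, anonymized.
bus = Dispatcher()
received = []
bus.subscribe("payments", received.append)                    # production copy
bus.subscribe("payments", received.append,
              transform=lambda m: {**m, "account": "ANON"})   # test copy
bus.publish("payments", {"account": "NL01BANK0123", "amount": 10})
```

The sender never maintains a recipient list; registering or deregistering a recipient touches only the dispatcher.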
I have worked at a financial institution where we fenced the core banking systems with an ESB, catching every transaction sent to the core banking system and duplicating it to a standby system as well as, in slightly altered form, to a test system. On one side this gave us resilience at an extremely low price, on the other a production load with relevant data in a test environment.

Basically this means that you should keep the ESB around to be able to follow the concept of publish/subscribe, in my opinion the only right concept when it comes to asynchronous communications. Publish/subscribe can only be done by using lower-level network protocols like UDP (TIBCO's TIB/Rendezvous was built on UDP and was the first real pub/sub message bus), by having a central dispatcher send a message to the registered recipients, or by handling it all at the sender side. Of these, it makes most sense from a security as well as a manageability perspective to do publish/subscribe through a centralized dispatcher. Although for the really lean and mean high-performance situations, you'll want to resort to lower-level network implementations.

Understanding pub/sub and implementing it in the enterprise allows for maximum enablement of the business with minimal IT investment. Take the use case of monitoring: all systems publish monitoring messages (at the business level) and various systems subscribe to them, for example a dashboard. You'll get a very rich BAM (Business Activity Monitoring) solution, which without much additional effort can be used as an application-level monitoring solution as well.
The other use case, where production feeds were used to implement improved availability as well as the capability of testing a new release with relevant production loads, is of course pretty awesome, as it didn't need a lot of additional work once the ESB fence was set up.

So despite popular belief, there is a reason why you should consider implementing an ESB in your enterprise. But don't adopt it as the default means of communication between applications; apply the ESB as a niche product, not as mainstream.

As always, I'm really interested in your views and ideas. More perspectives make for a richer understanding of the topic. So please share your thoughts, stay respectful and be argumentative.


December 23, 2013

ESBs don't scale, which shouldn't be a problem because that is by design

Other posts in this trilogy:

The ESB addresses a niche, not mainstream
The demise of the ESB in a world of webservices
Let's address that scalability issue. First of all, an ESB doesn't scale. That's by design. It was never intended to scale; it was intended to be the pivot in enterprise communications.

This is a post in a series about the Enterprise Service Bus, ESB, an acronym so easy to love or to hate. Parts of this post are also found in another post on the demise of the ESB in a world of webservices.

ESBs are not buses at all; we only draw them as a bus, or rather as a tube to which everything is connected, the same way we draw TCP/IP networks. But they are actually the hub in a hub-and-spoke model. They have to be, because they are the middleman handling all the Babylonian speech impediments within an enterprise.

An important aspect of the scalability problem is state. ESBs have the crucial design flaw of being able to keep state.

Remember what the ESB was good for?

  1. Data Exchange
  2. Data Transformation
  3. Data Routing

When you talk to an ESB vendor about what the ESB can do for you, the vendor will tell you about the three features I've listed before, and an important fourth one. Well, the vendor thinks it's important; I believe it's a design flaw. This feature is "Data Enrichment". What the vendor means is that you can send a half-baked message to the ESB and, before delivering the message to its final destination, the recipient, the ESB will call all kinds of other 'services' to enrich the original message with the additional information needed to have it understood, processable, by the recipient. This means that the ESB needs to keep state while enriching the message. It also means that the ESB is no longer a mere intelligent data transport that routes and transforms on the fly; it has become an application, a business application.
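To make the flaw concrete, here is a hypothetical sketch of what "Data Enrichment" forces on the bus: the half-baked message has to be parked somewhere while the lookups complete, so the bus is suddenly holding state. All names here are invented for illustration:

```python
# Hypothetical sketch (all names invented): "Data Enrichment" forces the
# bus to hold on to the half-baked message -- state -- while it calls
# other 'services' to complete it.
def lookup_customer(customer_id):
    # Stand-in for a call to yet another 'service'.
    return {"name": "Alice", "segment": "retail"}

class EnrichingBus:
    def __init__(self, deliver):
        self._in_flight = {}    # parked, half-baked messages: this is state
        self._deliver = deliver

    def receive(self, msg_id, message):
        self._in_flight[msg_id] = message          # must be kept somewhere
        enriched = {**message, **lookup_customer(message["customer_id"])}
        del self._in_flight[msg_id]                # state released only now
        self._deliver(enriched)

delivered = []
bus = EnrichingBus(delivered.append)
bus.receive(1, {"customer_id": 42, "amount": 99})
```

Everything in `_in_flight` is tied to this one node; another node can't take over mid-enrichment, which is exactly why the scaling stays vertical.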
Because the ESB is designed to be able to do this, the ESB is designed to be able to keep state. And thus it doesn't scale. It scales as far as the box it is running on can scale. All scalability is vertical.
There's another problem with scalability, and that is the dependency on physical resources. The queues of an ESB are physical: filesystems, databases or something else physical that allows for some, or even complete, persistence. This again means it doesn't scale, because access to these resources needs coordination: they guarantee sequence.

When scaling an ESB across nodes, there needs to be a vivid dialogue between the nodes about what each node is doing and has been doing. This is pure overhead. The busier the ESB, the more chatter, i.e. the more overhead. This will require more nodes, which requires more chatter. The result is a very disappointing performance curve.

Don't worry, this is all by design. The design here is that the ESB should be doing more than just being an intelligent data transport, because were it just an intelligent data transport, it would not have any added value in a SOAP-ruled, WS-* compliant world, which is today's world. There's a whole blog post on this.
The ESB is designed to have added functionality to warrant the purchase of its licenses. This added functionality allows the sender (or client, or consumer) not to care about the sequence of its messages, because the ESB handles this. But that's a moot point, since the client will either do a fire-and-forget (a feed) and not worry about sequence, or it will wait for the response to a request before continuing processing. No added value at all. The ESB, by queueing, also ensures the sequence of requests or messages from several clients: the recipient (or producer, or server) gets the requests or the feed in the order the ESB receives them. Which means nothing, because this may not at all be the same sequence in which the clients actually sent them. Think about the latency of different clients sending the ordering of messages bonkers all over. Meanwhile, this desperate need to sequence requires a single point in the ESB doing all the sequencing. Which means dropping the desire to be scalable.

Ask the ESB vendor again why you should get yourself an ESB after reciting the above, and your vendor will likely start talking about monitoring: how convenient it is to have the ESB do all the monitoring for you, keeping an audit trail, or at least a message trail, because it is the central hub through which all the messages flow. Emphasizing the ESB's main flaw again: it being a central hub.

The ESB doesn't scale, and it shouldn't be applied in an environment where it is meant to scale beyond just a few boxes. This is why the 'Enterprise' in ESB is misleading. It makes far more sense to have several ESB implementations in the enterprise, each serving specific needs. There is a place for the ESB in the enterprise, trust me.

As always, I'm really interested in your views and ideas. More perspectives make for a richer understanding of the topic. So please share your thoughts, stay respectful and be argumentative.


December 5, 2013

A Bottom-Up approach to Maintenance and Support doesn't make any sense

First of all, please note that my view on IT is that it services the user and when used in an enterprise, it services the enterprise. Typically we refer to the enterprise part that IT is servicing as 'The Business'. There is no other raison d'etre for IT other than to service.
With that being said, it should come as no surprise when I say that I consider IT to be a tool. It's a tool that is supposed to be utilized.

With that out of the way, let's move on to the topic of Maintenance and Support and why it makes no sense to address this bottom up.

First of all, what is bottom-up in this context? For that, we define the stack that an IT system typically consists of. At the very bottom of the stack we have the communications infrastructure: the Network. It is somewhat of an oddball in this discussion, as the Network is not limited to a few systems. But for argument's sake, we start with the Network, the communications layer.
Next we have the hardware layer; this can be physical or virtual. In case we're talking virtual, it makes sense to separate this layer into two sub-layers, but for the purpose of this post the division is not relevant.
On top of the hardware we have an operating system, and with that the rest of the system infrastructure. Think about anti-virus, system firewalls, etc.
Next we have the middleware, or rather the software engines. Think in this regard of application servers, database servers, webservers, messaging servers, etc. These are not all considered middleware, but they are engines: generic pieces of software that provide specific services to application-specific software.
Which brings me to the next layer, the application software. This is in fact the software that provides the services that the business benefits from.
Finally, there are the business processes in which the applications play a part. As with the communications layer, this is sort of an oddball, but within this post it is in fact relevant.

Now looking back at the start of this post, IT services the business. And the business is in fact defined by its processes. Without processes there is no business. Mind that the processes do not necessarily need to be formalized or even repeatable. I'm just saying that the business is defined by how the various actors interact, these interactions are processes, the business processes.
Consequently, when the business can't execute its processes, it doesn't function. Nothing gets done. When it is impossible to execute the business-critical processes, the business ceases to exist.
Thus, from an architect's perspective it becomes critical to understand these processes, or at least their criticality, and the demands regarding the processes to be executable. Hence, and now we're getting close to where I want to be, the architect needs to understand which steps, which actions in the various processes are automated. This is where IT kicks in!

Now you understand that an architect should worry about the availability of the capability to execute a business process, and an IT architect should worry about the automated parts. These are the business services, or in fact the applications. (Yes, I know I'm simplifying a bit, but within the context of this post, that's fine.)
So, (parts of) applications need to be available, and the data used in these applications needs to be secured. I'm saying parts of applications because more and more we see applications composed of reasonably disparate parts. What is 'available'? It means that a business service, an application's functionality, is accessible, and once started, it concludes within an acceptable timeframe. This is important, because when the system does execute but not fast enough, it should be considered unavailable.
What does it mean to secure data? Well it means that the data has to be available, can't be tampered with and can only be accessed by those that are supposed to have access.
We typically define these using KPIs: the RTO (Recovery Time Objective, i.e. how long until a service is available again), the RPO (Recovery Point Objective, how much committed data may be lost) and an up-time, defined as the amount of time the service may be unavailable over a period of time.
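As a quick sanity check when agreeing on such KPIs, an up-time percentage translates directly into allowed downtime; a small illustrative calculation:

```python
# Translate an up-time percentage into the downtime it actually allows.
def allowed_downtime_hours(uptime_pct: float,
                           hours_per_period: float = 365 * 24) -> float:
    """Hours of downtime permitted per period (default: one year)."""
    return hours_per_period * (1 - uptime_pct / 100)

# "Three nines" sounds strict, yet still permits almost nine hours a year.
print(round(allowed_downtime_hours(99.9), 1))    # prints 8.8
print(round(allowed_downtime_hours(99.99), 2))   # prints 0.88
```

Numbers like these are exactly what the business, not the technicians, should be signing off on.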
These are all business requirements, all to be defined by those that can understand what it means when a process can't be executed.

We're getting close to where we need to be, or rather where I want to be with this post: the topic of Maintenance and Support. Once a process is in use, it needs to be supported and maintained. For IT systems it's the same: these systems need to be supported and maintained as well.
The roles that need to maintain and support are divisible into three areas: Functional, Application and Technical. Basically it's the people that understand what the system should do, which services it should provide (Functional); the people that understand how the application works and how it can be kept working (Application); and the people that understand how it all actually runs on the systems (Technical).

Let's bring in the animal kingdom, shall we? There are different ways to skin a cat. By this I mean that there are different ways to keep a process executing. And that's the whole point! The process needs to be kept executing. This is what maintenance and support are about. Full stop.
By realizing this, it becomes clear that from a functional perspective it must be defined under what circumstances what needs to be done to keep the process executing. And in the cases where IT is needed, it must be defined what needs to be done.
But that's not the point. The point is that those KPIs that are defined, the RTO, RPO, etc., regard the business service. Explicitly not, and I emphasize this, the application or the infrastructure or the network. This is what needs to be managed. So in order to meet these requirements, they must be defined at the top of the stack. And then you go down the stack realizing them, just like any other business requirement.

Talking contractual issues, the SLA and OLA, it's the exact same story. The SLA is defined at the business process layer: it defines to what extent the process must be able to execute, and then it trickles down the stack, to ensure that it can be done. And the OLAs are there to ensure that everybody is on the same page and commits to helping meet the SLA with everything at their disposal.
It should be clear that it is rather pointless to define and agree on an SLA at a lower level in the stack when it doesn't support the requirements higher up in the stack. It would be a waste of money, as the enterprise will still falter when the poop hits the fan.
Again, it's the same as with all other requirements, you don't build something that doesn't help doing business.

Agreed, my post title should've been "A Top-Down approach to Maintenance and Support is the right way", as this is what I'm discussing here. But I chose to be a bit controversial. Why? Because too often I see the bottom-up approach being taken. Enterprises consistently fix availability at the infrastructure level and genuinely believe that this is cutting it, not realizing that it doesn't. Furthermore, they invest extensively in expensive IT solutions that are typically complex to maintain and support. And because of this, rigid standardization is enforced to keep costs down.
Consistently, enterprises are solving business problems that are not conceived as business requirements, we actually call them non-functionals most of the time, using technology. Preferably hardware, virtualized.
This is counter-intuitive because any other business requirement is actually addressed top-down.

One of the reasons for this behavior is that 'the business' doesn't think about how important a business process is, how important the automated activities are, how valuable the data is. When asked for the RPO and RTO, it is typically considered a technical issue. Typically the answers are that they need to be '0', i.e. no downtime and no data loss. Of course this is incorrect in virtually all circumstances, because the RPO and RTO are to be defined at the business service level, and in pretty much all cases it takes a significant amount of analysis to come up with the real numbers.

So, yes, the title is correct. Why? Because we keep on doing it the wrong way. We keep on messing up. We keep on spending money where it doesn't need to be spent. And we keep on not delivering what is actually required. Why? Because thinking about it and getting to the real answer is actually hard, and just throwing more kit at the solution seems like it will fix the problem.

December 4, 2013

The Secret to the successful Cloud is Commoditization and Democratization and the resulting Governance


It's been a while since the previous post. I've been way too busy with all kinds of projects and initiatives.

One of these concerned a customer of mine that wants to venture into the Cloud. The reason for them is, mainly, reduction of costs. Or rather reduction of IT expenditure.

Upon analyzing the benefits of a Cloud initiative within the four walls of the data center already in use, we stumbled across something completely different: a means of saving quite some IT budget without moving to the Cloud. But interestingly enough, the effort would also mean a better-paved road to the Cloud.

Okay, done with all the secrecy. Let's lift the veil.

One of the problems my customer is facing is the fact that a lot of their compute resources are dedicated to fulfill specific business needs. Well actually, they're the needs of some people within the company.

As we found out, a lot of time, effort and money is spent on customizations of COTS (Commercial Off-The-Shelf) products. And as we went down memory lane, reminiscing, I learned that over the last decade many projects within the company were long-term, strategic projects involving the implementation of large products: an ERP here, an HR system there, some BI with the DWH thrown into the mix as well. Some more initiatives, like a true IAM setup, had been concluded as well. Each of these projects was concluded a success, predominantly because they were backed by the senior managers and each of the projects had been implemented gradually instead of as a Big Bang. Nevertheless, millions of euros had to be spent, and are still being spent, on the resulting systems.

Another key reason why the projects were a success was that the resulting systems fitted nicely within the business processes already in place; hence people hardly needed to learn a new way of working, so benefits were reaped almost immediately, albeit limited to just some optimizations due to automation. Not really LEAN.

The issue here is that the products, the services if you will, were customized, if not outright tailored, towards the organization. Although off the shelf, they were no longer suitable to be put back on the shelf. An interesting detail in this regard is that in pretty much all these cases, the implementation projects were quite costly and took considerable amounts of time.

Also note that in many cases, projects like these are actually failures, and a lot of time and money is wasted, because of these customizations. I'm sure you've been in these situations or at least heard of them.

The issue here is that the customizability of these products defines their usefulness in the enterprise. And their profitability for the vendor, especially its professional services group and its partners. Now with the advent of the Cloud, and most notably applicative services provided through the Cloud, SaaS, customizability is the last thing you want. Why? Because economy of scale is what drives SaaS, and the more customization is allowed, the smaller the scale.

You probably noted that this post is not about SaaS per se, but about the Cloud. Yet I am treating SaaS here specifically, and will delve into the realms of IaaS and PaaS later in this post.

Just for your reference, when I talk about SaaS, PaaS and IaaS, I'm using the definitions of NIST as found in: "The NIST Definition of Cloud Computing". This is important to know, as there are many definitions out there and some are conflicting with the one I'm using in this post.

SaaS - Software as a Service

So far I hope you'll agree it is in the SaaS provider's interest to standardize as much as possible in the service offered: the functionality provided and the context in which it should be used, typically where in a particular (business) process and how in that process the service should be called. In addition, the SaaS provider will limit the customization of the offered service as much as possible, ensuring that diversification is as limited as possible. Why? Because this will trigger the economy-of-scale aspects of the Cloud that are so beneficial for the SaaS provider.

And obviously for the SaaS consumer as well. Economy of scale at the provider's end translates into lower costs of the service at the consumer's end. And isn't that what we want when going to SaaS? Yes, it is. In addition, one thing to remember and take into account is that more of the same typically results in higher quality. Less is easier to control, maintain and support. And with more consumers it will be more enticing for the provider to keep the consumers happy. Happy customers drive market share to a significant extent.

Concluding: standardization of services provided as SaaS is a key aspect of SaaS. There's no way around this. But that means that the service will more and more become a commodity. It has to, because a commodity is something in widespread use, meaning that as a SaaS provider you have a lot of consumers, preferably as many as possible. Here's your economy of scale again.

So there's your commoditization of the service, and with that you lower the cost of the actual service, up to the point where you can offer it for free for a limited time. Get your consumers hooked on your service. An initially free service, or one with such low entry costs that it's almost free, allows everybody and their mother to subscribe to the service if they wish to do so. And there's your democratization.

Back to the SaaS consumer end, as the aforementioned concentrates on the provider end. The consumer will be required to use the service as the SaaS provider intended it to be used. There's some room to customize the service to fit within the existing environment, but it's limited. The days of huge customization projects are over. This is not a problem, because the service is considered a commodity, hence it is most likely based on best practices, organically grown from the market presence of the service. So the individual users will be happy to adjust their working patterns, because the new patterns, if any, will feel like a natural fit. Mind that typically it is not the end users that want the new product to be customized to fit the needs of the organization; it's management. Those that should govern the process, not those that should work the process. With the democratization of the service, it will be those that are willing to change when needed that will consume the service.

As such, both the provider and the consumer benefit from the commoditization and democratization of the SaaS offering.

IaaS - Infrastructure as a Service

Okay, so let's talk infrastructure. What about infrastructure and how does this relate to commoditization and democratization? Again it's economy of scale that's key.

Cloud offerings are interesting to providers, including IaaS providers, because of the huge number of customers they have, the standardized offering they provide, and the streamlining of their processes they can achieve because of this. Because of the standardization, they can automate most of their processes. The investments to do this, huge most of the time, are warranted because the costs per piece are low. Here's your economy of scale at work, again.

For this to work, the IaaS provider will standardize the hardware on which the cloud offering is built. Typically this is hardware of the quality "as cheap as possible": the commodity hardware everybody can buy, the off-the-shelf hardware. The cheaper the boxes, the more can be bought. The more are bought, the more customers can be hosted on the infrastructure. The more customers, the lower the costs per customer. And again, here comes the democratization part: lower costs mean that more people can start leveraging the cloud. But as with the SaaS offering, the true benefits are in the economy of scale for both the provider and the consumer.
As already pointed out, the provider clearly benefits from the economy of scale, but predominantly because it strengthens its position in the marketplace.

For the consumer, the benefit is in the lower costs and in the fact that you can buy the service, or rather subscribe to it, on a pay-per-use basis. In most cases there's a monthly fee to cover the administrative costs of having you as a customer, but beyond that, you pay for what you use.
From a consumer perspective it therefore becomes critical to be aware that your chances for customization are limited, extremely limited, because you can only get what's on offer. And what's on offer is highly standardized.
There's an added benefit to this, namely that because you're required to standardize, you can also standardize your support efforts for those systems that are 'in the Cloud'. Because the service you're subscribing to is IaaS, you will have to support the OS and everything on top of it yourself. By being limited to just a few options provided by the provider, it becomes relatively simple to move to an outsourcing model for your maintenance and support efforts.

The democratization of infrastructure because of the Cloud is a problematic aspect in case you, as a consumer, are not yet mature enough to have the right governance in place.
To understand and appreciate this to the fullest, you need to scroll back to the SaaS discussion. The democratization of infrastructure through the Cloud initiatives all over the globe means that everybody and their mother can just create a server in the Cloud, provided they have a credit card. All Cloud providers will provide you with a basic server with limited resources and a default, albeit fairly secure, OS installed on it.
Without the proper governance in place, your environment will become uncontrollable, as servers unknown to your support staff will be added to it, running critical applications before you know it, and integration requests will pop up as you read this blog post. (Also refer to my post on this topic: Is there a market for the Cloud in the world of Corporates?.) Complexity will not come from an ever-increasing level of heterogeneity, which shouldn't be an issue at all; this will be limited because of the limitations of the Cloud provider. The complexity will be a result of not knowing that there are servers in your environment. This can only be prevented when there is proper governance. This is where you should use governance to enable, and definitely not to restrict.

PaaS - Platform as a Service

Considering you've already exerted the effort to read to this point, I'm sure you'll appreciate me concluding that Platform as a Service is just the same as IaaS. But then again, it's not.

PaaS, the platform as a service, is where the Cloud starts putting some software on your server. True, IaaS did put an OS on the server, but the Cloud provider wouldn't support it; that was yours to do. With PaaS this is not the case at all. You subscribe to a platform, so the provider will deliver it. This can be as simple as just the OS, so the service you're subscribing to is a server with a managed OS on it, or it can go as far as a complete software stack, completely supported and maintained by your PaaS provider. An example of this is Amazon's Elastic Beanstalk offering, where you do some clicking and presto, you get a server running a completely configured Apache Tomcat instance. Fully load-balanced and all.
And here again, you probably guessed it, the economies of scale are key. By controlling the full software stack, the PaaS provider controls what is to be supported, meaning that the PaaS provider dictates the standard. By complying with this standard, as a consumer you benefit from the scale the provider is managing, and thus from lower cost. In this model, the platform is commoditized by the PaaS provider, and because of the low cost, the platform is democratized.
Now what does this mean? Well, for one, if you look at the example of Amazon's Elastic Beanstalk offering, it means that a few standard platforms for web-based applications are provided by Amazon as a commodity to its customers. In the past, it wasn't trivial for an enterprise to host a fully scalable web application facing the public internet: setting up a fully load-balanced environment with all the security constraints in place, based on best practices, not only required skilled staff to architect and later on operate it, it also required significant investments to provide the necessary resilience. This has now become an almost trivial exercise. You just click some buttons to say you want a Tomcat-based environment, or a Microsoft IIS-based one for that matter, if needed including a database, and just a few minutes later you get your environment. All up and running, first year free of charge and ready to roll. With all the scaling out of the box, and totally restricted.

From a PaaS provider perspective it is clear why it is necessary to standardize and restrict. But understand that it is a double-edged sword: standardized environments lead to lower TCO (Total Cost of Ownership), and that in turn gives the provider the ability to commoditize the platform and democratize it.
From a consumer perspective it should be obvious as well that these offerings can only lead to... well, that actually depends on who the consumer is. For the individual, commoditization and democratization are huge benefits, as they result in less effort and budget needed to get the same results faster. It has never been this simple to deploy a web-based application on the internet with full resiliency and scaling capabilities, provided the application can handle this. From a business perspective, i.e. the business user at the PaaS consumer side, the same holds true. But there's the issue of operationalizing the business services. That can become tricky, and this is where the PaaS part helps: maintenance and support are handled by the PaaS provider.

There's actually more to be said on this topic, which I will cover in another post, because maintenance and support are far from a given in a Cloud environment.

The serious issue here is with the CIO, or the IT department in general. The standardization lies with the PaaS provider. This would be a nice time to read my post on heterogeneity and homogeneity. Basically, what it says is that it is virtually impossible to have a homogeneous environment, and that it is wasted time, effort and money to strive for one. And although the PaaS provider will only provide a limited set of options when it comes to the P(latform) as a service, the individual consumers can still wreak havoc with your standardization efforts, precisely because of the commoditization and democratization.
The point is that the enterprise, as the consumer, should simply accept that the PaaS offering is in fact a commodity and that practically everybody can subscribe to it. From an IT department perspective, this should be embraced, which means having internal processes, procedures and policies in place to support the virtually effortless integration of PaaS offerings into the 'legacy' environment.

Dare I say it? Yes! Once again, it all boils down to governance. And with a Cloud component in an IT landscape, more so. Why? Because the Cloud makes the specialized, tailored solutions of yesteryear a commodity of today. Accessible to all. Literally all. And therefore one can't control through restriction, through closing down, but only through enablement, i.e. opening up.

What experience teaches...

So here's the sad part, really. Most of the enterprises I've seen, most of my clients facing this situation, are not yet ready to handle this. They so desperately want to 'do Cloud', but they just don't have the governance in place.
This typically shows in the area of architecture (there is no architecture governance), in maintenance and support (still focused on traditional maintenance and support schemes) and in running IT in a multi-dimensional silo'ed fashion: strict separation of network, infrastructure, application and business instead of seeing IT as a combined man-and-machine solution; strict separation of a project organization and an operations organization, resulting in an 'over the wall' mentality; and a strict analysis-design-implementation phased approach to new developments.

Well, as usual, I'm really delighted you've come this far and took the time to read this whole post. I understand that it's once again a very long post, probably losing coherence every now and then. It took me about a week or two to write it up, which is never a good thing for a blog post.

As always, drop your comments in case you see things differently. It's the different points of view from which we learn the most and from which we can complete our picture.