December 27, 2013

The ESB addresses a niche, not mainstream

Other posts in this trilogy:

ESB's don't scale, which shouldn't be a problem because that is by design
The demise of the ESB in a world of webservices

There are many reasons not to have an ESB in your environment. The ESB as we know it has less and less reason to exist in today's world of webservices; there's a full post on the demise of the ESB in a world full of webservices.
But this post is about why you should have an ESB and why it does make sense to implement it. Or rather, when it makes sense to implement it.

This is a post in a series about the Enterprise Service Bus, ESB, an acronym so easy to love or to hate. Parts of this post also appear in another post on the demise of the ESB in a world of webservices.

Remember what the ESB was good for?
  1. Data Exchange
  2. Data Transformation
  3. Data Routing
Well, with SOAP being XML based and arguably very well defined, Data Transformation is not that relevant anymore. Data Exchange is handled by SOAP as well: you use HTTP(S) in pretty much all the cases you come across. That leaves us Data Routing.
So what was the routing about? It was all about having a logical name resolve to a physical location: the ESB would determine where to send the message based on an address that one can remember, or that is not tied to a physical location. In the world of HTTP(S), which is dominated by the world of TCP/IP, this is actually handled by DNS (the Domain Name System). DNS can perfectly handle this translation and do the routing. It's what it was designed for. Almost its raison d'être. Maybe not even almost.
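The translation DNS takes over from the ESB fits in a few lines. A sketch using Python's standard library; a real logical service name like "payments.example.internal" is hypothetical here, so the example resolves localhost, the one name guaranteed to resolve everywhere.

```python
import socket

def resolve_service(logical_name):
    """Resolve a logical service name to its physical addresses via DNS.

    This is the lookup an ESB's routing layer used to do: callers use a
    memorable name, DNS decides which physical box currently answers.
    """
    canonical, aliases, addresses = socket.gethostbyname_ex(logical_name)
    return addresses

# 'localhost' is the one name that resolves even without a network
print(resolve_service("localhost"))
```

Repointing a service to another box then becomes a DNS record change, invisible to every caller.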
So in a world where we do SOAP based communications, there really is no reason to have an ESB.
What do we keep the ESB around for?

You keep the ESB around for many-to-many messaging. You keep it around for those cases where you have one or more senders of (similar) messages and one or more recipients of these messages. The ESB is a great message dispatcher; this is where the message routing part kicks in, and it's something DNS can't do. When you want to send the same message to an unspecified number of recipients, where recipients can register or deregister at their leisure, you'll want the ESB, because managing this at the client side (maintaining a list of recipients and sending each message to each of them) is bad for performance, error-prone and complicated.
The ESB is great at dispatching the same message to multiple recipients, transforming the data on the fly. So duplicating a production load of messages into a test environment through the ESB, where the ESB transforms the data just in time before sending it to the test systems (e.g. anonymizing the data, translating from one spoken language to another, transposing IDs so that they align with keys in a database, etc.), is a common practice.
I have worked at a financial institution where we fenced the core banking systems with an ESB, catching every transaction sent to the core banking system and duplicating it to a standby system as well as, in slightly altered form, to a test system. On one side this gave us resilience at an extremely low price, on the other a production load with relevant data in a test environment.
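A minimal sketch of such a dispatcher, with per-subscriber just-in-time transformations, might look like this. The anonymization lambda is a made-up stand-in for whatever transformation the real bus would apply.

```python
class Dispatcher:
    """Toy pub/sub hub: recipients register with an optional transform
    that is applied just in time, before delivery."""

    def __init__(self):
        self.subscribers = []  # list of (deliver_fn, transform_fn)

    def subscribe(self, deliver, transform=None):
        self.subscribers.append((deliver, transform))

    def publish(self, message):
        for deliver, transform in self.subscribers:
            deliver(transform(message) if transform else message)

# Hypothetical fencing setup: the standby system gets the raw
# transaction, the test system gets an anonymized copy.
standby, test_env = [], []
hub = Dispatcher()
hub.subscribe(standby.append)
hub.subscribe(test_env.append, transform=lambda m: {**m, "name": "xxx"})
hub.publish({"name": "J. Doe", "amount": 100})
```

The sender publishes once and never knows, or cares, how many recipients there are; that bookkeeping lives entirely in the hub.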

Basically this means that you should keep the ESB around to be able to follow the concept of publish/subscribe, in my opinion the only right concept when it comes to asynchronous communications. Publish/subscribe can only be done by using lower-level network protocols like UDP (TIBCO's TIB/Rendezvous was built on UDP and was the first real pub/sub message bus), by having a central dispatcher send a message to the registered recipients, or by handling it all at the sender side. From a security as well as a manageability perspective, it makes most sense to do publish/subscribe through a centralized dispatcher. Although for the really lean and mean high-performance situations, you'll want to resort to lower-level network implementations.

Understanding pub/sub and implementing it in the enterprise allows for maximum enablement of the business with minimal IT investment. Take the aforementioned monitoring use-case: all systems publish monitoring messages (on the business level) and various systems subscribe to them, for example a dashboard, and you'll get a very rich BAM (Business Activity Monitoring) solution, which without much additional effort can be used as an application-level monitoring solution as well.
The other use-case, where production feeds were used to implement improved availability as well as the capability of testing a new release with relevant production loads, is of course pretty awesome, as it didn't need a lot of additional work once the ESB fence was set up.

So despite popular belief, there is a reason why you should consider implementing an ESB in your enterprise. But don't adopt it as the default means of communication between applications; apply the ESB as a niche product, not as mainstream.

As always, I'm really interested in your views and ideas. More perspectives make for a richer understanding of the topic. So please share your thoughts, stay respectful and be argumentative.


December 23, 2013

ESB's don't scale, which shouldn't be a problem because that is by design

Let's address that scalability issue. First of all, an ESB doesn't scale. That's by design. It was never intended to scale; it was intended to be the pivot of enterprise communications.

This is a post in a series about the Enterprise Service Bus, ESB, an acronym so easy to love or to hate. Parts of this post also appear in another post on the demise of the ESB in a world of webservices.

ESBs are not buses at all; we only draw them as a bus, or rather as a tube to which everything is connected. We draw them the same way we draw TCP/IP networks. But they are actually the hub in a hub-and-spoke model. They have to be, because they are the middleman handling all the Babylonian speech impediments within an enterprise.

An important aspect of the problem of not being scalable is state. ESBs have the crucial design flaw of being able to keep state.

Remember what the ESB was good for?

  1. Data Exchange
  2. Data Transformation
  3. Data Routing

When you talk to an ESB vendor about what the ESB can do for you, the vendor will list the three features above and an important fourth one. Well, the vendor thinks it's important; I believe it's a design flaw. This feature is "Data Enrichment". What the vendor means is that you can send a half-baked message to the ESB and, before the ESB delivers the message to its final destination, the recipient, it will call all kinds of other 'services' to enrich the original message with the additional information needed to make the message deliverable and processable by the recipient. This means that the ESB needs to keep state while enriching the message. It also means that the ESB is no longer a mere intelligent data transport that routes and transforms on the fly; it has become an application, a business application.
Because the ESB is designed to be able to do this, it is designed to keep state. And thus it doesn't scale. It scales only as far as the box it is running on can scale; scalability is all vertical.
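The difference is easy to see in code. A sketch, with a hypothetical customer lookup standing in for the external 'services' the vendor has the bus call out to:

```python
# Stateless transform: message in, message out. Any node of a scaled-out
# bus could handle any message, because nothing is remembered in between.
def transform(message):
    return {**message, "amount_cents": round(message["amount"] * 100)}

# Stateful enrichment: the bus must hold the half-baked message alive
# while it calls external services, pinning in-flight state to one node.
def enrich(message, customer_lookup):
    pending = dict(message)            # state the ESB has to keep
    pending["customer"] = customer_lookup(message["customer_id"])
    return pending

lookup = {42: "ACME Corp"}.get         # stand-in for a real lookup service
print(enrich({"customer_id": 42, "amount": 9.95}, lookup))
```

The stateless function can be replicated at will; the enriching one ties each in-flight message to whichever node started working on it, which is exactly the vertical-only scaling described above.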
There's another problem with scalability, and that is the dependency on physical resources. The queues of an ESB are physical: filesystems, databases or something else physical that allows for some, or even complete, persistence. This again means it doesn't scale, because access to these resources needs coordination, since they guarantee sequence.

When scaling an ESB, it needs to be set up across nodes, and there needs to be a vivid dialogue between the nodes about what each node is doing and has been doing. This is pure overhead. The busier the ESB is, the more chatter, i.e. the more overhead. This will require more nodes, which requires more chatter. The result is a very disappointing performance curve.

Don't worry, this is all by design. The design here is that the ESB should do more than just be an intelligent data transport, because were it just an intelligent data transport, it would have no added value in a SOAP-ruled, WS-* compliant world, which is today's world. There's a whole blog post on this.
The ESB is designed to have added functionality to warrant the purchase of its licenses. This added functionality allows the sender (or client, or consumer) to not care about the sequence of its messages, because the ESB handles this. But that's a moot point, since the client will either do a fire-and-forget (a feed) and not worry about sequence, or it will wait for the response to a request before continuing processing. No added value at all. The ESB, by queueing, also ensures the sequence of requests or messages from several clients, so the recipient (or producer, or server) gets the requests or the feed in the order the ESB receives them. Which means nothing, because this may not at all be the sequence in which the clients in fact sent them; think about the differing latency of clients sending the ordering of messages bonkers all over. Meanwhile, this desperate need to sequence requires a single point in the ESB doing all the sequencing. Which means dropping the desire to be scalable.

Ask the ESB vendor again why you should get yourself an ESB after reciting the above, and your vendor will likely start talking about monitoring: how convenient it is to have the ESB do all the monitoring for you, keeping track of an audit trail, or at least a message trail, because it is the central hub through which all the messages flow. Emphasizing the ESB's main flaw again: it being a central hub.

The ESB doesn't scale, and it shouldn't be applied in an environment where it is meant to scale beyond just a few boxes. This is why the 'Enterprise' in ESB is misleading. It makes far more sense to have several ESB implementations in the enterprise, each serving specific needs. There is a place for the ESB in the enterprise, trust me.

As always, I'm really interested in your views and ideas. More perspectives make for a richer understanding of the topic. So please share your thoughts, stay respectful and be argumentative.


December 5, 2013

A Bottom-Up approach to Maintenance and Support doesn't make any sense

First of all, please note that my view on IT is that it services the user and, when used in an enterprise, it services the enterprise. Typically we refer to the part of the enterprise that IT is servicing as 'The Business'. There is no raison d'être for IT other than to service.
With that being said, it should come as no surprise when I say that I consider IT to be a tool. It's a tool that is supposed to be utilized.

With that out of the way, let's move on to the topic of Maintenance and Support and why it makes no sense to address this bottom up.

First of all, what is bottom-up in this context? For that we define the stack that an IT system typically consists of. At the very bottom of the stack we have the communications infrastructure: the Network. It is somewhat of an odd-ball in this discussion, as the Network is not limited to a few systems. But for argument's sake, we start with the Network, the communications layer.
Next we have the hardware layer, which can be physical or virtual. When we're talking virtual, it makes sense to separate this layer into two sub-layers, but for the purpose of this post the division is not relevant.
On top of the hardware we have an operating system, and with that the rest of the system infrastructure. Think about anti-virus, system firewalls, etc.
Next we have the middleware, or rather we have software engines. Think in this regard about application servers, database servers, webservers, messaging servers etc. These are not all considered middleware, but they are engines. Generic pieces of software that provide specific services to application specific software.
Which brings me to the next layer, the application software. This is in fact the software that provides the services that the business benefits from.
Finally, there are the business processes in which the applications play a part. As with the communications layer, this is sort of an odd-ball, but within this post it is in fact relevant.

Now looking back at the start of this post, IT services the business. And the business is in fact defined by its processes. Without processes there is no business. Mind that the processes do not necessarily need to be formalized or even repeatable. I'm just saying that the business is defined by how the various actors interact, these interactions are processes, the business processes.
Consequently, when the business can't execute its processes, it doesn't function. Nothing gets done. When it is impossible to execute the business-critical processes, the business ceases to exist.
Thus, from an architect's perspective it becomes critical to understand these processes, or at least their criticality, and the demands regarding the processes being executable. Hence, and now we're getting close to where I want to be, the architect needs to understand which steps, which actions, in the various processes are automated. This is where IT kicks in!

Now you understand that an architect should worry about the availability of the capability to execute a business process and an IT architect, should worry about the automated parts. These are the business services, or in fact the applications. (Yes, I know that I'm simplifying this a bit, but within the context of this post, that is fine.)
So, (parts of) applications need to be available and the data used in these applications needs to be secured. I'm saying parts of applications because more and more we see applications composed of reasonably disparate parts. What is 'available'? It means that a business service, an application's functionality, is accessible and, once started, concludes within an acceptable timeframe. This is important, because when the system does execute but not fast enough, it should be considered unavailable.
What does it mean to secure data? Well it means that the data has to be available, can't be tampered with and can only be accessed by those that are supposed to have access.
We typically define these using KPIs: the RTO (Recovery Time Objective, i.e. how long until a service is available again), the RPO (Recovery Point Objective, i.e. how much committed data can be missed) and an up-time, defined as the amount of time the service can be unavailable over a period of time.
These are all business requirements, all to be defined by those that can understand what it means when a process can't be executed.
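To make these KPIs concrete, a quick calculation: a 99.9% monthly up-time requirement sounds close to perfect, but it still allows a measurable downtime budget.

```python
def downtime_budget_minutes(uptime_pct, days=30):
    """Minutes of allowed downtime per period for a given up-time percentage."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - uptime_pct / 100)

print(downtime_budget_minutes(99.9))   # ~43.2 minutes per 30-day month
print(downtime_budget_minutes(99.99))  # ~4.3 minutes per 30-day month
```

Each extra nine shrinks the budget tenfold, which is exactly why the business, not the infrastructure team, must decide how many nines a process actually warrants.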

We're getting close to where we need to be, or rather where I want to be with this post: the topic of Maintenance and Support. Once a process is being used, it needs to be supported and maintained. For IT systems it means the same: these systems need to be supported and maintained as well.
The roles that need to maintain and support are divisible into three areas: Functional, Application and Technical. Basically it's about the people that understand what the system should do and which services it should provide (Functional), the people that understand how the application works and how it can be kept working (Application), and the people that understand how it all actually runs on the systems (Technical).

Let's bring in the animal kingdom, shall we: there are different ways to skin a cat. By this I mean that there are different ways to keep a process executing. And that's the whole point! The process needs to be kept executing. This is what maintenance and support are about. Full stop.
By realizing this, it becomes clear that from a functional perspective it must be defined in what circumstances what needs to be done to keep the process executing. And in the cases IT is needed, it must be defined what needs to be done.
But that's not the point. The point is that those KPIs that are defined, the RTO, RPO, etc., concern the business service. Explicitly not, and I emphasize this, the application or the infrastructure or the network. This is what needs to be managed. So in order to meet these requirements, they must be defined at the top of the stack, and then you go down the stack realizing them. Just like any other business requirement.

Talking contractual issues, the SLA and OLA, it is the exact same story. The SLA is defined at the business-process layer; the SLA defines to what extent the process can execute, and then trickles down the stack, to ensure that it can be done. And the OLAs are there to ensure that everybody is on the same page and commits to helping meet the SLA with everything at their disposal.
It should be clear that it is rather pointless to define and agree on an SLA at a lower level in the stack when it doesn't support the requirements higher up. It would be a waste of money, as the enterprise will still falter when the poop hits the fan.
Again, it's the same as with all other requirements, you don't build something that doesn't help doing business.

Agreed, my post title should've been "A Top-Down approach to Maintenance and Support is the right way", as this is what I'm discussing here. But I chose to be a bit controversial. Why? Because too often I see the bottom-up approach being taken. Enterprises consistently fix availability at the infrastructure level and genuinely believe that this is cutting it, not realizing that it doesn't. Furthermore, they invest extensively in expensive IT solutions that are typically complex to maintain and support. And because of this, rigid standardization is enforced to keep costs down.
Consistently, enterprises are solving business problems that are not conceived as business requirements (we actually call them non-functionals most of the time) using technology. Preferably hardware, virtualized.
This is counter-intuitive because any other business requirement is actually addressed top-down.

One of the reasons for this behavior is that 'the business' doesn't think about how important a business process is, how important the automated activities are, how valuable the data is. When asked for the RPO and RTO, it is typically conceived to be a technical issue. Typically the answer is that they need to be '0', i.e. no downtime and no data loss. Of course this is incorrect in virtually all circumstances, because the RPO and RTO are to be defined at the business-service level, and in pretty much all cases it takes a significant amount of analysis to come up with the real numbers.

So, yes, the title is correct. Why? Because we keep on doing it the wrong way. We keep on messing up. We keep on spending money where it doesn't need to be spent. And we keep on not delivering what is actually required. Why? Because thinking about it and getting to the real answer is actually hard, and just throwing more kit at the solution seems like it will fix the problem.

December 4, 2013

The Secret to the successful Cloud is Commoditization and Democratization and the resulting Governance


It's been a while since the previous post. I've been way too busy with all kinds of projects and initiatives.

One of these concerned a customer of mine that wants to venture into the Cloud. The reason for them is, mainly, reduction of costs. Or rather reduction of IT expenditure.

Upon analyzing the benefits of a Cloud initiative within the four walls of the data center already in use, we stumbled across something completely different: a means of saving quite some IT budget without moving to the Cloud. But interestingly enough, the effort would also mean a better-paved road to the Cloud.

Okay, done with all the secrecy. Let's lift the veil.

One of the problems my customer is facing is the fact that a lot of their compute resources are dedicated to fulfilling specific business needs. Well actually, they're the needs of some people within the company.

As we found out, a lot of time, effort and money is spent on customizations of COTS (Commercial Off-The-Shelf) products. And as we went down memory lane, reminiscing, I learned that over the last decade many projects within the company were long-term, strategic projects involving the implementation of large products. An ERP here, an HR system there, some BI with the DWH thrown into the mix as well. Some more initiatives, like a true IAM setup, had been concluded as well. Each of these projects was considered a success, predominantly because they were backed by senior managers and each had been implemented gradually instead of as a Big Bang. Nevertheless, millions of euros had to be spent, and are still being spent, on the resulting systems.

Another key reason why the projects were a success was that the resulting systems fitted nicely within the business processes already in place, hence people hardly needed to learn a new way of working, so benefits were reaped almost immediately, albeit limited to just some optimizations due to automation. Not really LEAN.

The issue here is that the products, the services if you will, were customized, if not already tailored, towards the organization. Although off the shelf, they were no longer suitable to be put back on the shelf. An interesting detail in this regard is that in pretty much all these cases, the implementation projects were quite costly and took considerable amounts of time.

Also note that in many cases, these projects were actually failures and a lot of time and money was wasted, because of these customizations. I'm sure you've been in these situations or at least heard of them.

The issue here is that the customizability of these products defines their usefulness in the enterprise. And their profitability for the vendor, especially its professional services group and its partners. Now with the advent of the Cloud, and most notably applicative services provided through the Cloud, SaaS, customizability is the last thing you want. Why? Because economy of scale is what drives SaaS, and the more customizations are allowed, the smaller the scale.

You probably noted that this post is not about SaaS per se, but about Cloud. Yet I am treating SaaS here specifically and will delve into the realms of IaaS and PaaS later in this post.

Just for your reference, when I talk about SaaS, PaaS and IaaS, I'm using the definitions of NIST as found in: "The NIST Definition of Cloud Computing". This is important to know, as there are many definitions out there and some are conflicting with the one I'm using in this post.

SaaS - Software as a Service

So far I hope you'll agree that it is in the SaaS provider's interest to standardize as much as possible in the service offered, meaning the functionality provided and the context in which it should be used: typically where in a particular (business) process, and how in that process, the service should be called. In addition, the SaaS provider will limit the customization of the offered service as much as possible, ensuring that diversification is as limited as possible. Why? Because this will trigger the economy-of-scale aspects of the Cloud that are so beneficial for the SaaS provider.

And obviously for the SaaS consumer as well. Economy of scale at the provider's end translates into lower costs of the service at the consumer's end. And isn't that what we want when going to SaaS? Yes, it is. But in addition, one thing to remember and take into account is the fact that more of the same typically results in higher quality. Less variation is easier to control, maintain and support. And with more consumers it will be more enticing for the provider to keep the consumers happy. Happy customers drive marketshare to a significant extent.

Concluding: standardization of the services provided as SaaS is a key aspect of SaaS. There's no way around this. But that means the service will more and more become a commodity. It has to, because a commodity is something in widespread use, meaning that as a SaaS provider you have a lot of consumers, preferably as many as possible. Here's your economy of scale again.

So there's your commoditization of the service, and with that you lower the cost of the actual service, up to the point where you can offer it for free for a limited time and get your consumers hooked on your service. An initially free service, or one with such low entry costs that it's almost free, allows everybody and their mother to subscribe to the service if they wish to do so. And there's your democratization.

Back to the SaaS consumer end, as the aforementioned concentrates on the provider end. The consumer will be required to use the service as the SaaS provider intended it to be used. There's some room to customize the service to fit within the existing environment, but it's limited. The days of huge customization projects are over. This is not a problem, because the service is considered a commodity, hence it is most likely based on best practices, organically grown from the market presence of the service. So the individual users will be happy to adjust their working patterns, because the new patterns, if any, will feel like a natural fit. Mind that typically it is not the end users that want the new product customized to fit the needs of the organization; it's management. Those that should govern the process, not those that should work the process. With the democratization of the service, it will be those that are willing to change when needed that will consume the service.

As such, the provider and the consumer are benefitting from the commoditization and democratization of the SaaS offering.

IaaS - Infrastructure as a Service

Okay, so let's talk infrastructure. What about infrastructure and how does this relate to commoditization and democratization? Again it's economy of scale that's key.

Cloud offerings are interesting to the provider, including IaaS providers, because of the huge number of customers they have, the standardized offering they provide, and the streamlining of their processes they can achieve because of this. Because of the standardization, they can automate most of their processes. The investments to do this, which are huge most of the time, are warranted because the costs per piece are low. Here's your economy of scale at work, again.

For this to work, the IaaS provider will standardize on the hardware used to build the cloud offering. Typically this is hardware of the quality "as cheap as possible": the commodity, off-the-shelf hardware everybody can buy. The cheaper the boxes, the more can be bought. The more are bought, the more customers can be hosted on the infrastructure. The more customers, the lower the costs per customer. And here comes the democratization part again: lower costs mean that more people can start leveraging the cloud. But as with the SaaS offering, the true benefits are in the economy of scale for both the provider and the consumer.
As already pointed out, the provider clearly benefits from the economy of scale, but predominantly because it strengthens its position in the marketplace.

For the consumer, the benefit is in the lower costs and in the fact that you can buy the service, or rather subscribe to it, at a pay-per-use cost. In most cases there's a monthly fee to cover the administrative costs of having you as a customer, but beyond that, you pay for what you use.
From a consumer perspective it therefore becomes critical to be aware that your options for customization are limited, extremely limited, because you can only get what's on offer. And what's on offer is highly standardized.
There's an added benefit to this, namely that you're required to standardize, and therefore you can standardize your support efforts for those systems that are 'in the Cloud'. Because the service you're subscribing to is IaaS, you will have to support the OS and everything on top of it yourself. By being limited to just a few options provided by the provider, it will be relatively simple to move to an outsourcing model for your maintenance and support efforts.
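The pay-per-use model is easy to sketch. The fee and hourly rate below are purely illustrative numbers, not any provider's actual pricing:

```python
def monthly_cost(base_fee, hourly_rate, hours_used):
    """Monthly IaaS bill: fixed administrative fee plus actual usage."""
    return base_fee + hourly_rate * hours_used

# Illustrative figures: 5.00 admin fee, 0.10 per server-hour
always_on = monthly_cost(5.00, 0.10, 30 * 24)    # runs the whole month
office_hours = monthly_cost(5.00, 0.10, 22 * 8)  # 8h on 22 workdays
print(always_on, office_hours)
```

The point of the model: a server that is only needed during office hours costs a fraction of an always-on one, which is the consumer's side of the economy of scale.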

The democratization of the infrastructure because of the Cloud is a problematic aspect if you, as a consumer, are not yet mature enough to have the right governance in place.
To understand and appreciate this to the fullest, you need to scroll back to the SaaS discussion. The democratization of infrastructure through Cloud initiatives all over the globe means that everybody and their mother can just create a server in the Cloud, provided they have a credit card. All Cloud providers will provide you with a basic server with limited resources and a default, albeit fairly secure, OS installed on it.
Without the proper governance in place, your environment will become uncontrollable, as servers unknown to your support staff will be added to it, running critical applications before you know it, and integration requests will pop up as you read this blog post. (Also refer to my post on this topic: Is there a market for the Cloud in the world of Corporates?.) Complexity will not come from an ever-increasing level of heterogeneity, which shouldn't be an issue at all; that will be limited because of the limitations of the Cloud provider. The complexity will be a result of not knowing that there are servers in your environment. This can only be prevented when there is proper governance. This is where you should use governance to enable, and definitely not to restrict.

PaaS - Platform as a Service

Considering you've already exerted the effort to read this far, I'm sure you'll appreciate that I'm concluding with the platform as a service by saying that it's just the same as IaaS. But then again, it's not.

PaaS, the platform as a service, is where the Cloud starts to put some software on your server. True, IaaS did put an OS on the server, but the Cloud provider wouldn't support it; that was yours to do. With PaaS this is not the case at all. You subscribe to a platform, so the provider will deliver it. This can be as simple as just the OS, so the service you're subscribing to is a server with a managed OS on it, or it can go as far as a complete software stack, completely supported and maintained by your PaaS provider. An example of this is Amazon's Elastic Beanstalk offering, where you do some clicking and presto, you get a completely configured server running an Apache Tomcat instance for you. Fully load-balanced and all.
And here again, you probably guessed it, the economies of scale are key. By controlling the full software stack, the PaaS provider controls what is to be supported, meaning that the PaaS provider dictates the standard. By complying with this standard, as a consumer you benefit from the scale the provider is managing, thus lower cost. By this model, the platform is commoditized by the PaaS provider, and because of the low cost, the platform is democratized.
Now what does this mean? Well, for one, if you look at the example of Amazon's Elastic Beanstalk offering, it means that a few standard platforms for web-based applications are provided by Amazon as a commodity to its customers. In the past, it wasn't trivial for an enterprise to host a fully scalable web application facing the public internet: setting up a fully load-balanced environment with all the security constraints in place, based on best practices, not only required skilled staff to architect and later on operate it, it also required significant investments to provide the necessary resilience. This has now become an almost trivial exercise. You just click some buttons to say you want a Tomcat-based environment, or a Microsoft IIS-based one for that matter, if needed including a database, and just a few minutes later you get your environment. All up and running, first year free of charge and ready to roll. With all the scaling out of the box, and totally restricted.

From a PaaS provider perspective it is clear why it is necessary to standardize and restrict. But understand that it is a double-edged sword: standardized environments lead to lower TCO (Total Cost of Ownership), and that in turn gives the provider the ability to commoditize the platform and democratize it.
From a consumer perspective it should be obvious as well that these offerings can only lead to... well that actually depends on who the consumer is. Considering the individual, commoditization and democratization are huge benefits as they result in less effort and budget needed to get the same results faster. It has never been as simple and trivial to deploy a web based application on the internet with full resiliency and scaling capabilities, provided that the application can handle this. From a business perspective, i.e. the business user at the PaaS consumer side, the same holds true. But there's the issue of operationalizing the business services. That can become tricky, but this is where the PaaS part helps. Maintenance and support are handled by the PaaS provider.

There's actually more to be said on this topic, which I will cover in another post, because maintenance and support are far from a given in a Cloud environment.

The serious issue here is with the CIO, or the IT department in general. The standardization is with the PaaS provider. This would be a nice time to read my post on heterogeneity and homogeneity. Basically, what it says is that it is virtually impossible to have a homogeneous environment and it is wasted time, effort and money to strive for one. And although the PaaS provider will only provide a limited set of options when it comes to the P(latform) as a service, the individual consumers can still wreak havoc with your standardization efforts, because of the commoditization and democratization.
The issue here is that the enterprise, as the consumer, should accept without question that the PaaS offering is in fact a commodity and that practically everybody can subscribe to it. From an IT department perspective, this should be embraced, meaning that internal processes, procedures and policies are put in place to support the virtually effortless integration of PaaS offerings into the 'legacy' environment.

Dare I say it? Yes! Once again, it all boils down to governance. And with a Cloud component in an IT landscape, more so. Why? Because the Cloud makes the specialized, tailored solutions of yesteryear a commodity of today. Accessible to all. Literally all. And therefore, one can't control through restriction, through closing down, but only through enablement, i.e. opening up.

What experience teaches...

So here's the sad part, really. Most of the enterprises I've been at, most of my clients facing this situation, are not yet ready to handle this. They so desperately want to 'do Cloud', but they just don't have the governance in place.
This is typically in the area of architecture (there is no architecture governance), maintenance and support (still focusing on traditional maintenance and support schemes) and running IT in a multi-dimensional silo'ed fashion: strict separation of network, infrastructure, application and business instead of seeing IT as a combined man-and-machine solution; strict separation of a project organization and an operations organization, resulting in an 'over the wall' mentality; and a strictly separated analysis-design-implementation phased approach to new developments.

Well, as usual, I'm really delighted you've come this far and took the time to read this whole post. I understand that it's once again a very long post, probably losing coherence every now and then. It took me about a week or two to write, which is never a good thing for a blog post.

As always, drop your comments in case you see things differently. It's the different points of view, from which we learn the most and from which we can complete our picture.


October 9, 2013

Oh No!!! Our environment is homogeneous, we are doomed!!

Abstract: What you want, or rather what you need to be successful is a heterogeneous environment. There is no place for homogeneity in today's enterprises.


Lately I've been involved in a number of discussions regarding application replatforming. The main reason for the replatforming is to make the environment more homogeneous, driven by an overall perception that this will allow for a lower TCO of the complete IT landscape. Which is, or seems to be, the goal of many architecture initiatives lately.

I, unfortunately, have to agree with this. To a point. And yes, no typo there, I do state that it is unfortunate. I'll get back to that later.

I agree to a point, because homogeneity is the result of standardization and when we talk about a homogeneous IT landscape we mostly think about far reaching standardization of the IT assets. Typically the infrastructure, operating systems, middleware, database (I don't consider databases to be middleware), and even business solutions like CRM, ERP and the likes.
The more you can standardize here, the more of the same you'll get in your environment and the more you'll benefit from the resulting economy of scale. Economy of scale results in lower TCO as you'll typically be able to do more with less, especially in the area of support and maintenance, i.e. the operation. Add to this some form of virtualization and you'll be utilizing your computing resources more efficiently, meaning less hardware investment.
There's another benefit here, one that has less to do with economy of scale (and arguably the TCO) and more with standardization and therefore quality: when you standardize your assets, you can also standardize the processes to maintain them once you've reached a high level of quality. More importantly, the people supporting your environment need less (diverse) training and can achieve a higher level of quality through experience. Ask any SysAdmin how long it took until they felt really comfortable supporting an OS, and they'll mention years of experience.
Take it up a level and you're in the realm of middleware and databases, and this is where you can really lower the TCO by closing strategic enterprise license deals with middleware and database vendors.

So where am I going with this, you might ask yourself. Well, that's easy. All of the above is why many if not most enterprises, and SMBs for that matter, do standardize and do want to get to a homogeneous IT environment. I've been there myself as well, evangelizing the mantra of homogeneous, standardized IT environments.

The issue here is that we do all this work to reduce cost. And the reason for that is to improve profitability. And this is because IT is most often seen as a cost-center within an enterprise, just like staff, so IT reduction, just like staff reduction, is something you do when times are challenging and profit is hard to come by. This is because the accountants dictate it: look at the books, find where the most costs are on the balance sheet, cut those and you're done. Narrow-sighted.
What we tend to forget is that IT, just like staff, is an enabler. An enabler of revenue and therefore an enabler of profit.

Let's think about that for a second and forget about TCO for convenience, because I am aware of the 80/20 rule when it comes to IT expenditure. So instead of thinking of IT as a cost-center, consider it to be a business enabler. Ignore the screaming of accountants and listen to the mantra of the business people. Strangely enough, they hardly ever want to cut costs. They'll tell you to reduce costs because that's what the accountants say, what the CFO dictates, what the shareholders say, not because they think that's the right course of action. Really, they don't. I'm 110% certain of that, because otherwise they wouldn't ask for more new business (projects) to be realised and would instead order IT to execute projects to reduce IT expenditure.

So those business people in the enterprise want new business projects, pursue new business ventures, address new markets, jump on bandwagons. It's all "spend, spend, spend". Why? Because they want more revenue, because that means more opportunity to improve the profit numbers on the balance sheet, and it is more fun and more long-term.
IT therefore should not focus on being less costly, instead IT should focus on being more business enabling. And trust me when I say that a homogeneous IT landscape is less enabling than a heterogeneous IT landscape.

Do you get my 'unfortunate' at this point in the post?

Let me elaborate. First of all, heterogeneity is evil, unless you not only accept it (because it is inevitable!), but also embrace it. Choose heterogeneity as an architecture principle. Assume that nothing in your environment is standard. Design your solutions based on this, define your strategy based on that realisation. Unlike other assumptions that make an ass of you and me, this is a safe one to make. It's safe to assume that nothing is standard forever.
Thing is, everything is always changing, everything is always in motion, so nothing can be standardized without the need to revisit your standards, and as such what once was standard will no longer be. Hence, everything is or will be legacy in the near future. An old saying in IT operations is:

Legacy is what we have in Production.

Not feasible? Not doable? Not wanted? Well, have you ever seen a fully standardised, homogeneous environment? I'll give you some slack and only ask you to answer this question for the OS level.
Forget about the layers underneath and the layers on top of the OS. It's just not there; there are always different versions of the standard OS. And no, they're not the same, because if that were true, there wouldn't be a need for the difference.

Okay, so adopt heterogeneity and embrace it like a child that is always misbehaving but is still your child, so you love it no matter what. But you can't wait until she's 18 and on her own. (Yup, I've blogged about offspring and the need to be able to let go in a previous post.)
This means that you need to learn how to live with it. Your processes of supporting and maintaining that IT environment need to allow for heterogeneity.

Why was that again? Uhm, I don't think I covered that, but if you still can't guess, think back to IT being more enabling.
The trick here is to understand that IT is servicing the business, which means more revenue needs to be generated through the IT endeavors. So if the business wants something, IT needs to facilitate it. Without question. IT answers, it doesn't question. It also answers truthfully. And yes, this not only implies, but explies (if that were a word), that the environment is heterogeneous and will become more and more heterogeneous as time moves on. Which is awesome, because every request of the business is delivered.
It's going to be a mess, unless you embrace heterogeneity. When you don't, when you require homogeneity, demand it in fact, and question the requirements of the business, when you are actually so arrogant as to claim to know better what the business wants than the business itself, then you will not be able to deliver on every single request of the business. You'll be less, probably a lot less, than adequate and you'll be less of an enabler.

What you want, or rather what you need, in order to be successful is a heterogeneous environment.

There is no place for homogeneity in today's enterprises.

Really? No, not really. We're almost there... You can only do this when you do architecture within your company. I mean: define conceptually what your environment looks like, model it. Remember, it is only a model! Ensure that you have the right governance in place to apply this model. Govern.
Realize that in the above, I never stated that this is related to IT; it is related to the enterprise, however small it is. Also realize that nothing in there is fixed; in fact, all is subject to change. So don't consider the model dogma (see a previous post on the topic of dogma), don't lay down a doctrine.
Make sure your conceptual model is backed by a logical model which is governed. Again, there's nothing specifically IT in there; it's the enterprise again that is modeled at a logical level. Also remember that logical means that the model is logical to all stakeholders.
Easier said than done? Well, I would say easier done than said, as long as you do architecture in the right order. Meaning that business requirements dictate architecture, architecture dictates technology, technology dictates tools, and tools dictate implementation. Most of the time we do it the other way around... and that's why you end up with easier said than done, as that is also the other way around.

Ah, yes, there's the CFO and the shareholders, who are kind of important in the eyes of many, and in any case they have that pile of money you want a piece of. And they want cost reduction, maintained revenue, increased profits... This is where you want to standardize, but in an abstract way. Which means: move those parts of your heterogeneous IT landscape that are costly out, away from being your responsibility. In other words, where it makes sense, move your IT to the cloud. Again, don't make this doctrine, but move those parts that make sense to you to the cloud (read my position on the cloud with respect to enterprises in a previous post). These are the systems where you need some form of elasticity, or where you need certain facilities (security related, availability related, connectivity related) at a level where you just can't handle it yourself. This won't be an issue as long as you've been accepting and embracing heterogeneity, because from that perspective, the cloud is just another something non-standard.
By moving to the cloud, you can actually benefit from the economy of scale... the scale of somebody else. And wasn't this the proposition of making things homogeneous? Of defining standards and standardizing? As it is with cloud, it only works because the cloud provider standardizes. Not so much on getting a homogeneous environment, but on the way you perceive that environment. And since you're not (completely) free about what the environment looks like (with IaaS more than with PaaS more than with SaaS more than with BaaS), you're forced to think about how things work together. About how the applications that meet your business needs provide these and enable your business to increase revenue. And with the limitations on the freedom you have in this, you're bound to standardize on a conceptual level.

So in short:
- Homogeneity -> IT is a cost-center
- Heterogeneity -> IT is an enabler

I hope this post has made sense, or at least provided you a different view on standardization in IT. As always, you're more than welcome to provide your insights. The more views, the more perspectives, the more complete the picture.

Thanks once again for reading my blog. Please don't be reluctant to Tweet about it, put a link on Facebook or recommend this blog to your network on LinkedIn. Heck, send the link of my blog to all your Whatsapp friends and everybody in your contact-list. But if you really want to show your appreciation, drop a comment with your opinion on the topic, your experiences or anything else that is relevant.


The text very explicitly communicates my own personal views, experiences and practices. Any similarities with the views, experiences and practices of any of my previous or current clients, customers or employers are strictly coincidental. This post is therefore my own, and I am the sole author of it and am the sole copyright holder of it.

September 30, 2013

5 Reasons why TOGAF Certification could be a waste of your time

Okay, first of all: TOGAF is an acronym and I have strong feelings about acronyms. See my previous post on this Dogma of Acronyms.

With that out of the way, I also want to make clear that I am a great proponent of architecture and I think that understanding, that is Understanding, of the principles behind TOGAF and how to apply TOGAF within an enterprise environment is very useful.

It's just that certification is a waste of time. And in this post I will provide 5 reasons in an arbitrary order, why it's a waste of time. And because the order is arbitrary, I won't number the reasons so you'll have to count them yourself.
  • Enterprise Architecture is an activity and to do it right, to master this activity, takes practice. As the proverb goes: Practice Makes Perfect. Certification is a hoax from this perspective, since it doesn't require you to be an experienced architect who knows the specifics of TOGAF; it requires you only to know the specifics of TOGAF. Since the role of the architect is the most senior role in IT, in my humble opinion, and many HR managers agree with me, being certified should take this into account.
  • TOGAF covers too broad an area to be applied within an enterprise. TOGAF covers the complete enterprise and although I completely agree with this coverage, leading from business aspects all the way down to IT through Organization etc., it is hardly applicable in an enterprise. Typically TOGAF is done by IT people. Huh? Yes, by IT people, because the role of architect is typically an IT role. And on top of that, it was invented because there were too many senior IT people that sucked as managers. So in order to get them higher pay grades they became architects. And yes, there's the irony: architects are supposed to manage projects and programs on a technology level. Uhm, uh... what's happening here?
  • TOGAF is based on best practices, models and notations. So it means absolutely nothing unless you fully understand your enterprise, its current state, where it wants to go and so on. Just like with ITIL, being certified has no value to the company you're working at; what has value is you understanding what's good for the enterprise and what's not. Which means that you need to be engaged, not certified.
  • Being an architect means you need to be able to communicate and, more importantly, you need to be a sales person. Being a successful architect means that you can sell an idea and, in contrast to many sales people, also ensure delivery once you've sold it. As the architect is the most pro-active techie in the enterprise, having a long term view and a solid idea of how to get there, with steps that need to be taken immediately, you have to get that mindset. No TOGAF course teaches you how to become a great sales person, so don't think you'll be a certified sales person either.
  • Everybody who wants to be an architect and has some time available gets TOGAF certified these days. Darn, I was thinking about it myself when it seemed as if my contract wouldn't be extended. Very similar to the Java certifications in 2003/2004 when the IT job-market was down in Europe. Unlike back then, I see that at interviews the CVs of those that are TOGAF certified are scrutinized for experience that matches the claim of being knowledgeable because certified. I think that having done the training but not getting certified is better than being certified, as it shows that you got the theory but don't claim the experience.
Unfortunately not many HR managers understand this and allow IT managers to send their staff en masse to TOGAF training. Certified staff are returning to enterprises that are not ready for enterprise architecture. Hard cash spent on training, but no cash spent on following it through. I think for any company it is better to decide on a strategy, follow up on it at the various management levels, and support it by sending the right people to the right training.
In my humble opinion, Enterprise Architecture, and consequently all methodologies related to it, is too much of the good stuff for many enterprises. For most of the enterprises I encountered. It's a maturity thing: just as we mere humans have to go through infancy, childhood and puberty before becoming adults, if at all, the same goes for organizations.

So dear HR manager, think first if your organization is actually mature enough and needs Enterprise Architects before you send your best resources to training and certification.
It is my experience that you should send your staff to training to improve your enterprise, and to certification to improve your staff. And at all times, if you choose to send them to training and certification, make it relevant... in the short, mid and long term.

As always, thanks for reading this post and please comment when you see it differently. Diverse opinions make for clearer perspectives.


September 20, 2013

BYOD - Bring Your Own Disaster

Just to be clear, it's not a typo.
First of all, let me explain something. Over the past three years or so I have heard a lot about BYOD and in most cases the acronym stood for Bring Your Own Device. And although I've been bringing my own device for a very long time, since about 2003, at several customers it has always been a problem to actually use my own device.
So yes, we've been able to bring our own devices, but actually using the device at our job doesn't really go beyond reading your email and accessing your calendar. Granted, sharing contacts between your device and the corporate environment is there as well.
I've had discussions with many subject matter experts from the leading software vendors regarding this topic (IBM, HP, Cisco, Microsoft), and interestingly enough it struck me that although they all have their tools ready to be sold to their customers to facilitate a BYOD policy, within their own ranks their policy was in all cases more a matter of CYOD, or Choose Your Own Device.
Okay, so let's see what we're talking about here. Within the corporate environment there are basically 3 models possible (mind these are models):
  1. Everything is provided by the enterprise. So you get your IT equipment from your employer and he takes care of everything, including payment of the hardware and its maintenance. You work in the office and that's where you keep your stuff. When your employer decides that you should be able to work from home, you get a laptop and you can work offline from home. Or in fact you can log into the corporate network using a VPN so you can access your files and maybe even your applications, although in most cases they're installed on your laptop.

  2. You get all the computer equipment you need and your employer handles everything. But because laptops are more expensive than desktops and harder to maintain, because you never know when they're available for maintenance, you're allowed to work from home on your own personal desktop. And in case you've got a laptop... well, that's even more practical as you can work from any location from which you can set up a VPN connection to the corporate network. Most likely you've installed some software yourself and transferred files by means of a USB drive or some other external storage. Hey, maybe even Dropbox! You just have to make sure that you're running an OS that is supported by the company's admins and meanwhile keep the anti-virus and other such software on your system up to date.

  3. Then as a last model, there's the option to use whatever device you have to connect to the corporate network and do whatever it is you do to accomplish your tasks. You connect to your employer's network through the internet, either via a wired, a wireless or a mobile connection. (Yes, I know that a mobile connection is also wireless). In this last model, you bring your own device and not so much your own computer. There's a big difference.

So now, what's the relevance of all this? Well, mainly it's got to do with risk. There are a number of different risks involved here that are less relevant in model 1, a bit more so in model 2 and significant in model 3.
So let's cut to the chase and do some business here, as we want to get to the Disaster part of BYOD.
One of the more obvious risks in these models is of course the risk of data leakage, both intended and unintended. With the topic of data leakage you're also bound to include leakage prevention, which is widely considered a security aspect of an IT environment. Apart from the technology that can be applied, there are also a lot of organizational aspects to this. I'm not getting into those. Not in this post.
Data Leakage Prevention from a technology perspective is all about control, mainly control over where the data is stored and where data is communicated from and to. Basically, if you control where it is stored and you have full control over this location, you can just prevent anybody from accessing it and there's no risk of leakage... in an ideal world, I know. It also means that you control when it is accessed, and if you control the communication lines, you also control what is communicated. This is the strength of model 1. In model 2, you have less control over where data is stored and when and how it is communicated, but there are still plenty of technologies and solutions available to apply sufficient control, at an acceptable cost, to prevent data leakage to a degree. A weak link in this chain is the fact that it is fairly simple to lose a USB drive and with it the data, whether intentionally or not.
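The control argument can be made concrete with a toy example. The sketch below (plain Python, all store and destination names made up for illustration) models the smallest possible egress check: data may only live in approved locations and only travel over approved channels, which is essentially what model 1 gives you for free and what models 2 and 3 slowly take away.

```python
# Toy Data Leakage Prevention check: the more locations and channels you
# control (model 1), the shorter these allowlists can be. All names are
# hypothetical; real DLP products inspect content as well as location.

APPROVED_STORES = {"corporate-fileserver", "corporate-db"}
APPROVED_DESTINATIONS = {"corporate-lan", "vpn"}

def transfer_allowed(store: str, destination: str) -> bool:
    """Allow a transfer only between controlled locations and channels."""
    return store in APPROVED_STORES and destination in APPROVED_DESTINATIONS

# Model 1: everything stays inside the controlled perimeter.
print(transfer_allowed("corporate-fileserver", "corporate-lan"))  # True
# Model 2/3: a USB drive or a home network falls outside the policy.
print(transfer_allowed("usb-drive", "home-network"))  # False
```

The point of the sketch is only that control shrinks as you move from model 1 to model 3: the allowlists stop covering the places the data actually goes.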
It gets a bit more interesting when working on your own device. Yes, you can lose that tablet or smartphone, but there's another threat that's more significant on your own device than on your own computer, and that's the threat of malware.
The interesting part of devices is that they are technically not really mature. New devices are popping up every few months, new operating systems, new capabilities, etc. And in addition to this, their storage is far from limitless; in fact, it is very limited. So most people that use devices also use cloud solutions for storing their data, in many cases the corporate's data.
But this is all just a matter of potential compromises and risk calculations. You accept the risk, there's no problem. But most companies can't afford certain data to get out on the street, or at least that's what they think.
Like I said, it's all risk based, and security threats should be looked at from a risk perspective: what are the chances that it happens, and what is lost when it happens? This requires a certain level of organizational maturity. The movement has just started, hence not many organizations are ready to handle security in this way. Oh yes, they may have completely adopted the risk-based approach to security, but they're far from having classified all the threats and defined the proper counter measures to avert the threats or mitigate the risks. Thus 'lock down' is still what happens. "Until we're ready", is what's being said.
Yup, data leakage is a problem, and with your own devices even more so, but predominantly because we want to lock our data down, since we can't act on the security risks from a threat-classification perspective. So we put a lock on the data, which means we put a lock on the devices. And since it's your own device, your boss is putting a lock on your device... in vain in most cases, as the technology is either not adequate or not yet developed for your device. Ah, and let's not forget about the variety of devices: all the different PC operating systems, the various mobile platforms, the various versions of each platform, the capabilities of devices and the variety of ways they can connect.
The disaster kicks in when the CSO (Chief Security Officer) and her underlings are confident they're on top of it. Because they never are. And at the first breach of their defenses, typically moments after a device is brought into the organization, you've Brought Your Own Disaster. Mind that the breach doesn't have to be a disaster in itself, but a breach of security in an environment where threats are dealt with by throwing technology at them and locking things down... well, you get the drift.
Data Leakage is interesting, but there's more to the disasters that can happen by Bringing Your Own Device.
Think about compliance. Who's going to pay for all the licenses of the software installed on your devices? And who is going to ensure that you work with the right versions, or even the right applications? The existence of open formats doesn't mean that they're completely embraced by enterprises. In fact, I know way more organizations that keep all files in proprietary formats, like the old Microsoft Office formats, instead of the open XML based formats. And this is just an example of programs for which there are open formats supported by free (as in free beer) programs.
Since it's your own device, you're supposed to get the licenses yourself in many cases. And why would you pay for them when you can get them for free (as in illegal downloading free) from the Piratebay? Btw, I don't condone piracy.
Yet, since you use those programs for your work, and when in the office even within the corporate's network, there's a compliance issue.
Again, the people responsible for the various security aspects in the enterprise, including compliance, want to take the risk-based approach to this, but until that is in place, they revert to technology to ensure that you won't have any non-compliant, read pirated, software on your device. There's an interesting aspect to this: typically there's a policy in place that's supposed to prevent you from putting pirated software on your device. This policy is defined in terms of technical counter measures, based on some tool that will probe your device and delete everything it deems not in accordance with the policy. Of course this can only work when your device supports this, in every aspect, including you allowing it to support this.
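A minimal sketch of such a probe (plain Python, all package names hypothetical) shows how blunt the instrument is: anything not on the corporate allowlist gets flagged for removal, whether it is pirated, personal, or simply unknown to the policy.

```python
# Toy version of a compliance probe: flag everything that is not on the
# corporate allowlist. Real tools (MDM agents) are more sophisticated,
# but the principle of "remove what the policy doesn't know" is the same.

ALLOWED_SOFTWARE = {"corporate-mail", "office-suite-2013", "vpn-client"}

def flag_for_removal(installed: set) -> set:
    """Return everything on the device that the policy does not allow."""
    return installed - ALLOWED_SOFTWARE

device = {"corporate-mail", "vpn-client", "torrent-client", "home-photo-app"}
print(sorted(flag_for_removal(device)))  # ['home-photo-app', 'torrent-client']
```

Note that the probe cannot tell a pirated office suite from a perfectly legal photo app; that distinction would require the threat-classification work the organizations above haven't done yet.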
Again, by bringing your own device, with pirated software on it, into the office and using it to do your job, disaster has been brought to the office. True, this doesn't always result in your employer's bankruptcy or something even more serious.
Actually, to be honest, the Disaster in Bring Your Own Disaster is not referring to the problems you're causing with your own device; it's the disaster of managing these devices, which is just impossible at this point in time with technology, considering the multitude of devices, types of devices, operating systems, platforms, capabilities, etc. It can only be handled by having implemented risk-based security. Which is something that requires a lot of organizational change, and a change in mindset as well.
This is why most enterprises are actually reverting to a Choose Your Own Device strategy. Out of a pre-selected and manageable set of devices, ranging from phones to tablets to desktops and laptops, you're allowed to pick the ones you want to use as an extension of the office.
Thanks for reading, and as always, please let me know your thoughts about this topic.

September 18, 2013

The Awesomeness of the Process


It's been a while, but here's another blog post on IT Architecture and anything that comes close.
It struck me today while waiting for the train home, after a meeting in which we, a bunch of architects, came to the exact same conclusion we've reached several times in the past two or three weeks: it's all about the process if you want to do it right. Check my earlier posts and you'll find out that I actually think that the process is significant.

Anyways, while waiting for the train and reiterating what was said during that meeting and afterwards, I remembered what one of my professors told us in either the 2nd or 3rd class on databases, or rather on Information Analysis:

"Always remember that you should only store in a database that what you want to get out".

Because we used NIAM as the modeling technique and this was in the early nineties, this made perfect sense: in those days reporting systems were the most important IT systems, with SAP and BAAN dominating the ERP arena (I know I'm not doing justice to either one of them), and NIAM is very much geared towards reporting. (Again, I know I'm not doing it justice by stating it like this.)
Okay, so let's get back to our meeting today. Well actually, let's get back to my opinion of processes. I'm honestly a big fan of Business Process Management, mainly because processes define a business and they make IT worth our while... unless you love playing games, because IT always makes computer games possible.

The main part of BPM is the processes. The interesting thing with processes is that, when defined correctly, they define exactly what the sequence of activities is to get something done, who is responsible for each single activity and who is accountable. And, and this is where it gets interesting in the context of this post, what is needed to start an activity and what its outcome is.
The beauty of this is that it allows you, if you keep this in mind, to ensure that any actor for any activity only gets the information she needs to perform her task. Especially in those activities where somebody needs to review, accept or verify something this is extremely valuable as it ensures that she is not overwhelmed with large complex documents.
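The idea that each actor only gets the information she needs can be sketched in a few lines. This is a toy model of my own, not any particular BPM product; the activity names, actors and artifacts are made up for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Activity:
    name: str
    responsible: str            # who performs the activity
    accountable: str            # who signs off on the result
    inputs: set = field(default_factory=set)    # artifacts needed to start
    outputs: set = field(default_factory=set)   # artifacts produced

def information_for(actor, process):
    """Collect only the inputs this actor needs across the whole process."""
    needed = set()
    for activity in process:
        if activity.responsible == actor:
            needed |= activity.inputs
    return needed

process = [
    Activity("draft design", "architect", "lead architect",
             inputs={"requirements"}, outputs={"design document"}),
    Activity("review design", "reviewer", "lead architect",
             inputs={"design document"}, outputs={"review report"}),
]

print(information_for("reviewer", process))   # {'design document'}
```

The reviewer gets the design document and nothing else: no requirements dump, no living document, just what her activity needs.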

I have worked in many organizations around the globe in a multitude of different verticals and markets. Always there was some sort of process in which one or more entities within the organization had to accept either a document, an architecture or some code if not an application that was to be put in production. The hardest part in these processes was to convince people that it was perfectly fine for them to receive more information than they needed and that you were relying on their expertise to filter out the information they were looking for. This because the document they were handed was a living document and as the process went on, it would grow and mature.

Of course this is complete bullshit; living documents don't exist, and since they're written, or supposed to be written, by a variety of people with different views and backgrounds, the quality is poor at best in most cases. But there are many reasons why we, myself included, settle for the one big document. For one, it always seems convenient. Just one document to be maintained, one template to be developed, one artifact to be communicated.

Then there is the reason that we're all very poor in defining requirements, even requirement engineers, so we find it very hard to define what we exactly need to do our job, how we need that delivered and in what format. So the document that holds everything is very suitable.

Working on a process and defining all activities, the responsible and accountable actors, as well as the input and output for every activity can be very boring, or at least seem very academic. A paper exercise, and since we all love the tangible, well, all the sane people I know, it is easy to digress and move to the concrete with documents and diagrams. Letting go of the process and concentrating on the artifacts. Remember, we tend to fix problems in the infrastructure instead of the application, mainly because we can touch it and hold it and it is easy to budget and plan. Not because that's the best place to fix the problem.

This is even more so for the managers of the departments that are impacted by such a process, when they're not aligned and don't define the areas of their responsibility and accountability, preferably without overlap and without gaps. Yup, there's a challenge. It will be very hard to point out who is acting in what capacity during which activity.
Management buy-in is a prerequisite even to start working on the process. And with proper alignment of the management, defining the process becomes a straightforward exercise.

Then there is the problem of the many documents to be managed, or rather the many products to be managed. Some are input and some are output. But when you keep in mind that output always has to be input for some activity, you can state that there is only ever input to be considered. That does mean you need a rock-solid system for managing all these products. And no, a file server or email system doesn't cut it. What you need, at the very least, is a collaboration platform with document management in it, but preferably you have a Business Process engine that manages your process. In this context I don't mean a document management system that allows you to define a document flow. No sir, I mean a process engine that allows you to attach products to the process.
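The rule that every output has to be input for some activity is easy to check mechanically. A minimal sketch, with hypothetical activity and product names:

```python
def dangling_outputs(process):
    """Return products that are produced but never consumed by any
    activity, a sign the process definition is incomplete."""
    consumed = set().union(*(activity["inputs"] for activity in process))
    produced = set().union(*(activity["outputs"] for activity in process))
    return produced - consumed

process = [
    {"name": "gather requirements",
     "inputs": {"project charter"}, "outputs": {"requirements"}},
    {"name": "design solution",
     "inputs": {"requirements"}, "outputs": {"design document"}},
]

# The design document is produced but nothing consumes it yet:
print(dangling_outputs(process))   # {'design document'}
```

A process engine that manages products against activities can run exactly this kind of check for you.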

Now forget about the bureaucratic formalities of a business process and documents and templates and responsibilities and accountability and actors and all that stuff. Instead think about your job. Your role in the organization you work for and the activities you're supposed to do. Wouldn't it be awesome if you were provided with all the information you need as soon as somebody asked you to do something? A situation in which you would know exactly what information you're supposed to dig up yourself to do your job and what information needs to be handed to you? At all times?
Well, if your answer is "YES" then you want to experience the awesomeness of the process, because this is exactly what it is about.

And it's not just a hollow promise. No, it's something that comes out of the box when you define the process based not on the information that can be provided, but on what has to be provided. When you ask the responsible person what she needs to complete an activity instead of just throwing everything you have at her and letting her figure it out herself.

One last thought on this topic before I conclude: you never define a process that is perfect, you perfect a process that is defined. So it is correct to think seriously about what the process should be and then adjust it as you apply it. Basically the idea behind Lean.

I hope you enjoyed reading this post and think you can benefit from it. Don't hesitate to comment. As always, different views paint a clearer picture.


August 8, 2013

Is there a market for the Cloud in the world of Corporates?

(Disclaimer: This is by no means a "definitive guide")


I really think that most corporates are, on an enterprise level, not ready or even suitable to embrace the Cloud. It's the SMB that should embrace the Cloud, as it would benefit significantly from Cloud offerings. Why? Because the Cloud cannot be part of a policy, and it should not be a strategy. Furthermore, corporates tend to centralize only infrastructure architecture and decentralize application architecture, while the Cloud is driven by application architecture and business architecture and not (so much) by infrastructure architecture. In fact, it requires you to abandon control over your infrastructure and hand it over to your legal department.

Okay, I admit it, I've been an Amazon fan since I was ordering books and CD's for the whole department from Amazon.com, having them shipped to the office in the Netherlands at least on a weekly basis.
My hero in the corporate market? Jeff Bezos, the man behind Amazon. My biggest complaint about Amazon? That they haven't yet opened a Dutch store, so I can purchase a Kindle Paperwhite for my wife and two sons and save some trees by getting them books in Dutch, all digital. I sooooo badly want a Paperwhite myself, to replace my Kindle-with-keyboard. I sent my kids to English class so they could read books in English and I could give them my Kindle and buy the Paperwhite, with 3G of course.
Amazon has always been this very customer centric for-profit company, disrupting the market, any market by a single strategy: improve the ways consumers can spend their money at Amazon.

But this post is not about how awesome Amazon is, it is about whether or not the Cloud is for corporates.

First of all, depending on who you are, your response by now could be a wholehearted and fully sincere "YES!". In which case you're probably a sales manager, or a similar person, at one of the many companies that sell cloud-for-the-enterprise services.

Second of all, well let me first cover the SMB market;
The SMB market is probably the single market that benefits most from SaaS, Software as a Service, which is sort of a rebranding of the ASP offerings that were an almost complete failure. As a company in the SMB market, you're too small to host much of the software you can get as a service yourself, and by choosing the SaaS offering you can benefit from the provider's economies of scale. It totally makes sense.
The interesting part of SaaS is that it is far less a matter of elasticity, which is commonly associated with the cloud (and rightfully so); the emphasis is on self-provisioning. As an SMB you would get services like Office365 or the Google Apps offerings. All SaaS offerings. Things like IaaS and PaaS are not for you, unless your raison d'etre is a service based on online accessible services. Something we called a dotcom in the early days.
The Cloud is, for an SMB, interesting because the cost model is one based on use and not ownership. Your cloud provider meters your consumption and you pay what you consume. Your costs increase when your use increases and typically that means when you're growing as a business. The model is very fair.
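That pay-what-you-consume model boils down to simple arithmetic. A sketch with made-up rates, not any real provider's price list:

```python
def monthly_bill(compute_hours, rate_per_hour, gb_stored, rate_per_gb):
    """Pure consumption-based pricing: no usage, no cost."""
    return compute_hours * rate_per_hour + gb_stored * rate_per_gb

# A quiet month and a busy month, under the same (hypothetical) rates:
print(monthly_bill(100, 2, 20, 1))     # 220
print(monthly_bill(1000, 2, 200, 1))   # 2200
```

Costs scale with consumption, which typically means they scale with the growth of your business; there is no up-front investment in ownership.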
So back to the second of all...

Second of all, the cloud doesn't really exist; it's a substitute word for different offerings, so actually the cloud might be there for corporates, but what is the cloud anyway? The generally accepted definition is the one provided by NIST, which also gives nice definitions of IaaS, PaaS and SaaS.


First SaaS, as I already covered some of it when discussing SMB's just a few sentences earlier. SaaS is mostly about economies of scale to reduce costs related to the use of standard(izable) software offerings. For SMB's this is usually a small step, as their processes are not that formalized, or at least they're typically rather flexible when it comes to their processes. That matters because in many ways, SaaS offerings assume usage according to a best practice.
Corporates are notorious when it comes to processes and adherence to them. From experience I know that the larger the corporate the more effort it takes to change a process and more importantly, the smaller the chance that their processes are according to best practices. So in this perspective, SaaS can be very cumbersome to implement because it would require a change in the processes of the corporate. Something about teaching old dogs new tricks springs to mind. Although it can be done, read "Who Says Elephants Can't Dance?" by Louis V. Gerstner, not dogs but elephants.


So let's continue at the bottom, IaaS, Infrastructure as a Service. Basically what you get is a server with nothing on it, or maybe an OS. You deal with it. There's not a lot more to it, and as a corporate you should already be familiar with this. It's what you've done for millennia: you bought new hardware with or without an OS installed and you took it from there. Whole IT departments are in place and ready to support this server. Typically the server is one of very many, and over time your IT Operations staff has built up a number of very smart system admins.
The big benefit of IaaS is that you don't have to wait for the delivery of hardware: the server can and should be provisioned in the time it takes you to get up from behind your desk, walk over to the water cooler and get yourself a glass of water. From experience I know that it can take slightly longer when you're not yet a customer with the Cloud provider, because you'll need to sign up first. This is not a joke; with the current leading Cloud service providers, this is the process and the timeframe within which a new server is provided. Will it be a physical or a virtual server? You don't know and you shouldn't care. It is the server that you want, with as many CPU cores and as much memory and bandwidth as you need. It's all provisioned from a seemingly unlimited pool of servers and CPUs and memory and bandwidth. Yup, there's some real magic going on, because the Cloud provider just keeps on providing servers to you, never asking you to stop asking for more.

Once you've got your server, you're left to your own devices. The Cloud provider will not maintain that server at all. He will just ensure that the server is provided to you at all times and, when it crashes, that it will be available to you again within the time agreed in the SLA.
Did I mention that you need to take care of everything yourself and that the Cloud provider will not manage your server? It's not the service he provides when talking about IaaS, you are getting Infrastructure as a Service.

So why would you go for IaaS? Why would you go for infrastructure as a service? Well, just like you would go for any other technology; it solves your problem or it can sufficiently contribute to the solution to your problem. Most of the time you don't want to go for IaaS if there's a PaaS solution for you. Read on for the stuff about PaaS.

Onwards, with the things you can get as a service. PaaS, Platform as a Service, takes you up the software stack.


So on top of the infrastructure, there is an Operating System and then some. Basically the 'Platform' in PaaS refers to the software layer on top of which an application runs. A lot of people I know are thinking 'Application Server' or 'Middleware' when talking about Platforms, but actually some applications run on the Operating System itself, so the OS might be the platform in PaaS.
Here's a rule of thumb that can help you: a Platform as meant in PaaS is any piece of software that is not a business application (take the term very broadly) and is managed by the Cloud provider. Please take note of that last bit!
So in PaaS, the platform is managed by the Cloud provider, just like in IaaS the infrastructure is managed by the Cloud provider. Infrastructure is (virtualized) hardware, Platform is software. Is this arguably the correct definition? Yes, we can argue about it, but it fits the NIST definition, which is convenient.

Oh yeah, don't forget that the Platform in PaaS is a software layer on top of which an application runs. Typically it provides technical services to the business application. This means that typically a DBMS is not a Platform.
An Application Server like a JavaEE container, is a Platform.

Interestingly, the application server in the previous paragraph can be part of the PaaS service, so it is managed by the Cloud provider, or the OS on which it is running is the Platform and managed by the Cloud provider but the application server is not. The key here is what is and what is not managed and it always moves up the stack. So you can't have a PaaS contract where the Platform is an application server that is managed, but the OS is not managed by the Cloud provider. Anyone tells you this is possible, well question their expertise on Cloud, like really hard.
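The rule that the managed part always moves up the stack, with no gaps, can be expressed as a small check. This is my own illustration of the rule, not anything from the NIST definition; the layer names are simplified:

```python
# Bottom-to-top: what a cloud offering can manage for you.
STACK = ["infrastructure", "operating system", "application server"]

def valid_offering(managed):
    """The managed layers must form an unbroken run from the bottom of
    the stack upward: a managed application server sitting on an
    unmanaged OS is not a credible offering."""
    flags = [layer in managed for layer in STACK]
    first_unmanaged = flags.index(False) if False in flags else len(flags)
    # Nothing above the first unmanaged layer may be managed.
    return not any(flags[first_unmanaged:])

print(valid_offering({"infrastructure", "operating system"}))       # True
print(valid_offering({"infrastructure", "application server"}))     # False
```

So PaaS with a managed OS but self-managed application server passes the check; "managed application server, unmanaged OS" does not, and anyone selling you that deserves the hard questions.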

Typically you would want to go for PaaS, because that means that there's a lot managed by the Cloud provider and you only need to manage the contract with him including the SLA's you agreed upon. The Cloud provider will ensure that you've got your Platform and that it complies with the capacity you agreed upon, that it stays available and in case it becomes unavailable, that it will become available within the timeframe agreed upon in the relevant SLA.
You also want to move to PaaS if possible because it means you'll be drawing on the fact that the Cloud provider has staff that can actually support the platform 24x7, so you won't need that expertise in your organization at the scale required to support it 24x7 yourself.

Like with IaaS, a PaaS environment is provisioned quickly. A matter of minutes rather than days, weeks or months. And with a sheer unlimited amount of resources at your disposal, so click away and get provisioned.

IaaS instead of PaaS

So why would you go for IaaS instead of PaaS, or maybe go for PaaS but only for the OS and not the application server, say? Well that is because you need something that's not standard.
Especially with PaaS, you will have to comply with standards. And these are not your standards at all, these are the Cloud provider's standards. And you're required to stick with them and not deviate from them because that's just not possible because you have nothing to say about the service except for the quality you're getting. The platform is managed by the Cloud provider and not you, remember.
So when you're in need of, let's say, a clustered WebSphere Application Server environment, you're likely out of luck getting that as a Platform, because it's too cumbersome to maintain such an environment. You're likely going to get the OS as a Platform and install your own cluster of application servers on top of it.
The Cloud provider will provide you with separate, independent servers, or instances. All perfectly managed. This means that you'll need to ensure that the applications running on it support this. That they can deliver the availability requirements by themselves, not relying on the infrastructure.

What? I can't rely on the infrastructure for ensuring I meet my availability targets and RPO and RTO and stuff like that? That's correct. You cannot.
The separate, individual servers will be available and when not, they'll be available shortly. It's the managed part that will be made and kept available by the Cloud provider. The Cloud provider is not going to do whatever he can to keep the application available. Not at all.
So there's a strict divide between what the Cloud provider will take care of and what you'll have to take care of and there's little collaboration to be expected.
So if you need that business service to be available at all times so your customers can get access to critical information at all times, your application needs to be robust and resilient in a way that it does not rely in any way on the infrastructure it's running on, it cannot make any assumptions as to how the infrastructure works and whether or not there's anything specific running to keep the infrastructure resilient.
Well, that's a lie, there is something you can assume. Something you actually have to assume: The Infrastructure is a piece of crap and it will crash over and over and over and you have no control over it.
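An application that assumes the infrastructure will crash over and over has to take care of its own resilience. One common pattern is retrying failed calls with backoff; a minimal sketch in Python, with made-up parameters and a stand-in for a flaky network call:

```python
import time

def call_with_retries(operation, attempts=5, base_delay=0.1):
    """Assume the infrastructure is unreliable: retry the call with
    exponential backoff instead of relying on the platform to stay up."""
    for attempt in range(attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == attempts - 1:
                raise                           # out of retries, give up
            time.sleep(base_delay * 2 ** attempt)

# A stand-in for a flaky service: fails twice, then succeeds.
calls = {"count": 0}
def flaky_service():
    calls["count"] += 1
    if calls["count"] < 3:
        raise ConnectionError("infrastructure hiccup")
    return "ok"

print(call_with_retries(flaky_service, base_delay=0.01))   # ok
```

Retries are only one ingredient, of course; real resilience also means statelessness, redundancy across instances and graceful degradation, none of which the Cloud provider will do for you.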

Why are IaaS and PaaS tricky for the corporates?

That's a good question, because they are actually very well suited to a move to the cloud; the problem is in perception. The cloud is a technology, it's a means, it's a tool to solve a problem. It's a tool that has an impact on the organization of the corporate when employed. It also significantly changes the way a corporate deals with its IT resources. Control is achieved by means of contracts, because the hands-on option is gone to the extent of the service provided to the corporate. So all of a sudden the legal department is the one that does Systems Operations: those men and women that have no clue what IT is are required to draw up contracts and SLA's that are the equivalent of what the IT department used to do by actually doing stuff. This is for many corporates a huge paradigm shift, one that requires a significant amount of maturity not only in the IT department but across the board. Something that is all but non-existent in traditional corporates.

Another issue is that from an (enterprise) architecture perspective, the Cloud is a strategy. It's something that should be used across the enterprise, but never put up as dogma(*). It's not a generally applicable tool and it requires a very thorough understanding of what Cloud is, when it is to be applied and what the restrictions are. Interestingly, this is quite different from, for example, stating that all systems should run on either Linux or Windows, or that all in-house developed applications are to be developed in C# using Visual Studio. Using the Cloud, or whatever you want to call it, should never be considered an architecture principle. Because it isn't. This is very weird for large companies. I haven't seen many companies that understand this.

Something to take with you

One important aspect of IaaS and PaaS is that both are revolving around infrastructure, almost solely around infrastructure. But the decision to go for IaaS or PaaS or actually going cloud has nothing, well hardly nothing, to do with infrastructure, it is an application driven decision. There is absolutely no point in considering the cloud when your application is not ready. When it is not designed in a way that it can assume that the infrastructure is a piece of junk, when it is not designed in a way that it takes care of its own resilience instead of relying on the infrastructure you shouldn't consider the cloud.
The Cloud provider will not go beyond the service he provides, so he will not take care of your applications that are running in the cloud. He will simply not do it. This is a good thing, because it's not his area of expertise. In fact, application support is very labor intensive; it needs humans to do it properly. And the Cloud provider is always looking to eliminate the human factor and do as much in an automated manner as possible. So the humans you would need for support are eliminated by the Cloud provider, in a non-felony way of course, in most cases.

Hope you enjoyed this post,


*: Dogma is typically for those that have no clue what they're doing and just do because it's dogma. As most people are happy if decisions are made by others, because it means they don't have to think and they don't have to assume responsibility, dogma is easily embraced.

July 1, 2013

Communicating while working - Message in a bottle... kind of

Hi fellow architects and other readers,

Over the coming period I will make an effort to post a weekly article on communication in the digital realm and its implications for architecture and the role of an architect. The articles will be diverse in terms of tone, topic and angle. That was my intention at the time I wrote the first article in this series, on January 17th, 2013. That has been weeks, well, months ago, and this is only the second installment of a multi-post item on communication. Shame on me.

Last time I discussed my experiences with online chatting on a personal level. It was a historic overview of me using various programs and media to digitally stay in touch with friends and family.

This time I'm taking it into the office and will discuss the digital means of communication in the workspace. I talked about VoIP last time because for the consumer it was revolutionary, but in this post I will not touch upon it at all, as VoIP in the office is merely a technology to have a phone system in place. Although digital, it is still the traditional way. (Leave your opinions in the comments, because there's a lot to say about VoIP in the office not being just a traditional means of communication like the regular phone.)

Ever since I was first connected to a computer network, there has been a form of email system. The digital equivalent of a regular letter. Back in the late eighties early nineties these were proprietary systems, confined to the network you were on. Although there was an open system implemented on UNIX and we used it in university.

Email has been around forever and it is the primary means of communication between people in the digital realm, well, in most cases. Email is the predominant identification of a person on the Internet; it is reasonable to state that everybody with an Internet connection has at least one email address. Many, like myself, have multiple addresses. Email is faster than postal mail, so it is very convenient to most people. With the advent of broadband, emails have become richer in content as well, the Internet email protocols have been adopted by all email systems, and there are hardly any proprietary systems in use any more, except for very specific situations. Most of these deal with specific security circumstances. Email is not secure by any means; it is arguably significantly less secure than regular mail, as an email can be opened and read by anybody without sender or receiver ever being aware of this. This is hard to accomplish with an "analog" letter, as you would notice the envelope having been opened when you receive the letter.

This security aspect of email has been a concern for many, especially companies that deal with confidential data are extremely aware of this.

Another important security issue is that it is very simple to send an email pretending to be somebody else. Since the email protocol works in clear text, anybody who can intercept an email can change it before sending it along its way. And since the premise of the internet is that it is a highly redundant network that can withstand a nuclear attack, anybody can sit in between any two parties that exchange email. But more importantly, the internet is a mesh network of point-to-point connections. It's a graph where every connected computer is a node. The connections are known because every node has an address, its IP. And these IP's are structured in a hierarchy.
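How simple pretending to be somebody else is can be shown with a standard library: the From header is just a field the sender fills in, and nothing in the basic protocol checks it. A sketch using Python's stdlib email package, with made-up addresses:

```python
from email.message import EmailMessage

# Nothing in the basic email protocol verifies this header:
# the sender simply fills it in.
msg = EmailMessage()
msg["From"] = "ceo@example.com"        # any address you like
msg["To"] = "finance@example.com"
msg["Subject"] = "Please wire the funds"
msg.set_content("Trust me, it's really the CEO.")

print(msg["From"])   # ceo@example.com
```

The receiving side sees whatever the sender wrote; anti-spoofing measures (SPF, DKIM, DMARC) exist, but they sit outside the original protocol.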
On top of that, pretty much all connections are wired connections, because these are reasonably reliable and cheap as well. This also means that continents are connected by, literally, just a few wires. Put your computer on one of these wires and you're in the middle of all continental communications. Including email. Although very simplified, this is actually scary accurate.
So it's easy to pretend you're somebody else when sending an email. Just as simple as writing another name at the bottom of a letter. And the solution to prevent this is analogous to the analog letter: signing the letter with a signature that is hard to fake. And here's another analogy with the analog world: how do you know which signature belongs to whom? In the physical world, this is handled by big books with names and signatures, and when you get a letter that's signed, you open the book and compare signatures. And this is not a joke, this is how it's done. And in the digital world, we do the same. We have digital books (registries) with the names and the digital signatures that belong to these names, and this is how we validate the authenticity of a signature. It's that simple... and really complicated. Because bits only have the value 0 or 1 and are therefore very easy to recreate. Write them in the right order and you can make a perfect copy of a signature. So we've got all kinds of mathematical schemes to ensure that it is as hard as possible to recreate the order in which the bits are written. And we distribute the signatures using a key infrastructure.
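Real email signing uses public-key schemes such as S/MIME or PGP, where certificate registries play the role of the big book of signatures. As a simplified stand-in, here is the shared-secret variant using Python's stdlib hmac module; it illustrates signing and verification, not the key infrastructure itself:

```python
import hashlib
import hmac

def sign(message: bytes, key: bytes) -> str:
    """Produce a signature that only someone holding the key can create."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(message: bytes, signature: str, key: bytes) -> bool:
    """Recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign(message, key), signature)

key = b"secret shared between the two parties"
sig = sign(b"I really sent this", key)

print(verify(b"I really sent this", sig, key))   # True
print(verify(b"I never sent this", sig, key))    # False
```

Note the limitation the post describes: this only works between parties who already trust each other enough to share the key, which is exactly why it doesn't scale to a world where nobody knows anybody.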
The tricky part of this is that you need to trust the person sending you his signature to be the person he claims to be based on the signature. Consequently this is not a solution to be used on a large scale where nobody knows anybody.

The key with email is that its use and its validity are completely based on trust. But there is always plausible deniability as an option for the "sender" when he inadvertently sends an email he never wanted to send in the first place.

By the way, the previous part of this post covers about half of what non-repudiation is all about: you just can't get non-repudiation of sending without restricting yourself to a small group of people you want to exchange emails with, who should not be able to deny ever having sent an email.
The other half is denying you ever received and opened an email. This is like registered mail (delivery receipt), with or without a signed receipt. Typically only enterprise-grade products like Lotus Notes and Microsoft Exchange, to name the two biggest email systems for the corporate, have the option to automatically notify the sender that the email was delivered to the recipient's inbox, and again when the email was opened. Systems like Hotmail and GMail don't support this. So it's of little use, to be honest, in today's email eco-system.
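For completeness: the standard way to request a read receipt across systems is the Disposition-Notification-To header (the Message Disposition Notification mechanism), which the recipient's mail client is free to ignore; that voluntariness is exactly why it's of little use outside a single enterprise. A sketch with made-up addresses:

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"
msg["Subject"] = "Quarterly figures"
# Ask the recipient's mail client to report back when the message is
# displayed; honoring this request is entirely voluntary.
msg["Disposition-Notification-To"] = "sender@example.com"
msg.set_content("Please confirm you have read this.")

print(msg["Disposition-Notification-To"])   # sender@example.com
```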

But there's still a valid use of email in the enterprise. It's one of the most efficient ways to inform large groups of people about something that concerns them all. Because everybody is familiar with emails, it's adoption as a means to convey a message is massive. The analogy with mail helps of course.
The ability to add attachments to an email is of course also of huge benefit. One can send a large document or documents, which could be anything ranging from a text file to pictures or schematics or videos or music tracks, as an attachment where the email body is just an introduction to the real goodies in the attachment.

The problem with email is actually its wide adoption and its low-threshold usability, resulting in spam: unsolicited marketing garbage that clutters corporate email inboxes with irrelevant emails that prevent people from doing their jobs. And then there are all the jokes and department party pics that keep people away from their work as well. The little effort it takes to write an email to somebody, its asynchronous nature, and the improper use of any means to make an email more urgent, to raise its importance, have over the last 5 years or so driven the enterprise to look for a transition from email-based communications to something else.
Many enterprises are still searching for a good replacement. With a lack of alternatives to email that have all the good stuff and none of the bad stuff, email is still the prime means of collaboration in enterprises.

In the late nineties I was working at companies of all sizes where the email systems were limited to the extent of the enterprise, and then there was personal email. With systems like CompuServe and MSN (both at that time a proprietary alternative to the web) one could send emails to users outside your own organization. This changed when the internet bubble started to grow around 1999. Hotmail was the big advocate of internet-based email, and with websites popping up like corn grains in a popcorn machine wanting to send you information, email grew rapidly and enterprises started to understand the importance of email to communicate with potential customers.
Interestingly enough, email turned into the most prominent and important way to communicate with the rest of the world, alongside websites, but internally neither the intranet nor the corporate email systems took over this role from internal circulations, flyers handed out at the door and the surprise brochures you stumbled upon in the morning when getting to your desk. For some reason we still don't see email as a viable means to communicate internal stuff and we still rely on the hard copy of the same message.
I've noticed myself that I am more likely to read a piece of paper left at my desk the night before than an email containing the same information left in my inbox around the same time. Why, I don't know. The piece of paper is more intrusive, no question about it. It's typically placed on my keyboard, so it prevents me from doing my work. The email in my inbox is easily ignored. It just sits there, unread. Maybe this is why I read the piece of paper: I have to pick it up and put it somewhere else before I can do my job. But still, I could move it aside without reading it. So I guess it's more a matter of habit; a piece of paper is to be read. This is what I was brought up with. Books, papers, magazines, flyers, brochures, pamphlets. They are all pieces of paper with words on them, picked up by me to be read. I have to read it, it's the natural course of things. Email is not like that. I am more likely to think an email is too long to be read than a double-sided printed memo about the same.

Based on this, I don't think that email will ever be as effective as paper. Not in the corporate, not to inform people. Yes you can use it very effectively to get your point across, but nothing more than your point. We, the working people, are not yet ready to use an all digital format to inform each other. We're still too analog. And when we're ready, email will not be that format. Why? Because it's too much like mail without an 'e'. What that other format will be? Intranet, document management systems, social media for the enterprise, chat programs? Well, I'll venture into those areas in the next installments of my blog, and I seriously will make it an effort to not wait this long again for my next post.

Until my next post...


Find me on LinkedIn or Twitter