

September 17, 2018

How Charming is Laziness?

A long time ago, while I was still a student at the Polytechnic in Enschede, The Netherlands, I would justify my choice to study computer science by proclaiming that laziness leads to efficiency. Which, I now know, makes no sense, because laziness leads to effectiveness. Penny-pinching leads to efficiency. You can read all about it in this post: Perish or Survive, or being Efficient vs being Effective. But there is one little concept that results in both more efficiency and more laziness. It's therefore a charm. It is called automation, and it is closely related to that third time.

Third Time Is Automated Principle

One of the key principles at Amazon AWS is that everything must be automated. It's not just that everything should be automatable; it should actually be automated.
Whether it's an urban legend or not, word has it that when you create something that requires manual action, you're soon out looking for a new job. For a company like Amazon AWS it is clear why automation is such a huge thing. And it is equally clear why they have such a focus on API's: API's are how automation across the board is facilitated. But most companies are not Amazon or any of the other cloud providers. Most of you, my dear readers, are more likely to run your systems on Amazon AWS, Google Cloud or Microsoft Azure and run those applications for your customers. Your IT landscape is probably not even close to Amazon's in size. It probably compares to Amazon AWS as the Netherlands compares to the rest of the EU, in most if not all of the dimensions you can think of.
Arguably, the principle of "Automate Everything" doesn't apply to you. I'll leave it up to you to think of one or more arguments why automation is not something you should hold dear.

Challenge to you: put a good reason in the comments why you think automation is not necessarily needed. I'll make an effort to counter your argument in a reply to your comment.
But read on first.

The benefits of automating processes are many, irrespective of the kind of process. An automated process is infinitely more likely to be repeatable than any manual process. This results in higher quality, since errors are either made consistently, and can therefore be fixed, or consistently do not occur at all. How compelling is that?
Although the automated and manual versions of a process might take the same amount of time to execute, the automated process allows a person to work concurrently on something else that cannot be automated. And automated processes do not rely on the availability of a specialist to execute the process. So automation makes you and your organisation more scalable.

Not automating processes, even IT processes, doesn't make sense. Still, when it comes to IT processes, we hardly automate at all. Why?

The situation at Amazon


I've come across many situations where things weren't automated. Worse yet, they couldn't be automated at all. They were not even automatable.

For me this was always an interesting thing to find out. For one, we're in IT, and IT is all about automation. In fact, in Dutch we refer to IT as automation ('automatisering'). The paradox is that we apply IT all over the enterprise to automate business processes, but when it comes to the IT processes themselves, automation is very likely the last thing on our minds. When you think about it, that doesn't make sense at all. Walk into a room full of IT people and pose the statement that it is hard to understand why we're so good at automating business processes, yet have no automation in our own processes, and you'll see at least 90% of the heads nodding in agreement; the remaining 10% are too flabbergasted by the realisation that the statement is true. Pose the same statement in a room full of non-IT people and the first thing you'll find yourself doing is explaining why this is.

The fact that IT people are not automating their own processes is tough to explain, and I for one don't have an explanation. There is an explanation, though, for why Amazon AWS's processes are all automated: one of the core principles by which they do IT is "Automate Everything". At a dinner party with Werner Vogels (Amazon's CTO) that I was invited to, being the Chief Architect of a FinTech startup at the time, I asked Werner (all attendees were told we were on a first-name basis) how it was possible for a huge company like Amazon AWS to live by such rigorous principles: Everything is an API, Automate Everything, and a few more. His reply was that there were two main reasons why it works:
  1. Senior management, all the way up to Jeff Bezos, was openly behind these principles. In fact, many of them were mandated by Jeff Bezos himself. 
  2. Everybody in the organisation experienced for themselves the validity of these principles.
I asked him to clarify that second reason. According to Werner Vogels, Amazon dedicates a lot, if not all, of its time to making the processes that impact customer experience as efficient as possible without hurting customer satisfaction: 'A satisfied customer is a returning customer.' The effects of changing a process are made visible to the whole company, at all times. So compliance with the principles results in changes to the processes, and the effects of those changes are visible. The validation of the principles is therefore continuous. And according to Werner, nothing is as motivating to change your processes as seeing the effects of your efforts.

For the rest of the dinner I was mulling this over, enjoying the food and the conversation with the other guests.

The situation in the 'real world'

Unfortunately, most of us work at real enterprises. And the two reasons Werner Vogels gave me for why IT process automation works at Amazon aren't present in the situations I have found myself in.
For example, the amount of automation in a process within IT is not a metric that anybody is held accountable for anywhere I've worked or consulted. Neither are the benefits of automation part of somebody's accountability.
IT process cost reduction (IPCR) is, as far as I know, not a KPI within enterprises. Nor is time-to-market (TTM). The latter often does appear in another incarnation on reports and dashboards, namely as MTTR, the Mean Time To Resolution. We do see MTTR on reports, but it is hardly ever part of somebody's accountability. The same goes for time-to-market: although it is always mentioned for any improvement project, it is hardly ever measured or reported on. In most cases it is treated as a project issue, meaning that the organisation is doing projects instead of delivering products.
The lack of real metrics, or KPI's if you will, means that from an accountability perspective there is no clear person with an incentive to push automation. And if you read my blogs regularly, you know that I'm big on accountability.
Visibility of the effects of changes to IT processes is another challenge for organisations. We often find ourselves in organisations that don't have a culture of measuring their IT processes. This is especially true for organisations that have been around for decades. Since IT process automation is not pushed, there is no incentive to find bottlenecks in processes or to pinpoint areas that are up for improvement. The result is that the effects of improvements are, in most cases, not visible.

The lack of accountability, combined with the rather big hurdle IT departments have to take in order to automate, results in a situation in which we simply don't automate our processes, because automation gets no priority on our backlogs and no funding in our PRINCE2 budgets. Process automation is collateral. And it's not a matter of not being as large a company as Amazon. It's a matter of not being aware of how automation affects the bottom line, and of not being held accountable for impacting the bottom line.

Third Time Principle

Three times is a charm, or at least should be automated.

In organisations I worked at previously, the principle of automation was a little more pragmatic than the one in play at Amazon. One that has worked well for me is the principle of "Everything done a third time will be automated". The reasoning behind this is that if you do the same thing a third time, it is extremely likely you will do it a 4th, a 5th and many more times. The time required to create the automation will be earned back by executing the task over and over again.
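
A quick back-of-the-envelope calculation makes that trade-off concrete. The numbers in this little sketch are purely illustrative; plug in your own to see where your break-even point lies.

```python
# Break-even sketch for the Third Time Principle. The numbers are made up;
# plug in your own to see when automation starts paying for itself.
manual_minutes_per_run = 30    # time spent doing the task by hand, per run
automation_cost_minutes = 240  # one-off effort to script the task
runs_per_month = 8             # how often the task actually occurs

break_even_runs = automation_cost_minutes / manual_minutes_per_run
print(f"Automation pays for itself after {break_even_runs:.0f} runs, "
      f"roughly {break_even_runs / runs_per_month:.1f} months from now.")
```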

Time invested is never regained, but it can lead to savings later on.
The Third Time Principle is a compromise, although one might argue that it is a Trojan horse. By adopting this principle, the argument that automation creates too much overhead for mundane actions is off the table: only those actions that are performed repeatedly are automated. The justification for the investment is built in.

The Third Time Principle is easy to adopt. Since it is a compromise, it can be applied when the push for automation is bottom-up. We often see that engineers are the ones who see the need for automation, since that is where automation solves a problem and where its effects are noticed. Developing the automation is then often a matter of getting the time to do so: automation competes with other requirements for development time and for priority on the backlog, and we all know where the priorities will go. Applying the Third Time Principle removes this obstacle. Engineers can justify the need for automation and the Product Owner can justify the priority of the automation stories on the backlog.

Continuous Delivery

When striving for Continuous Delivery (CD), and even more so for Continuous Deployment (also CD), there is no other way than to automate everything. CD requires a rigorous regime of moving every manual task to the left in a process defined western style (i.e. in left-to-right notation). Product development follows the DTAP model: Development is followed by Testing, is followed by Acceptance, is followed by Production → DTAP. In CD we hold to the paradigm that everything manual is done in D, and that T, A and P are fully automated. Any manual action in T, A or P needs to be automated, and the scripts for this are developed in D, because that is the only place where it can be done.
So when striving for Continuous Delivery, everything in the product development cycle is to be automated.
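
To make the 'manual only in D, automated in T, A and P' idea a bit more tangible, here is a minimal sketch of such a pipeline. The stage functions and the artifact name are hypothetical; in practice this logic lives in whatever CI/CD tooling you use.

```python
# Minimal DTAP sketch: everything after Development (D) runs unattended.
# All function and artifact names are hypothetical, for illustration only.

def run_tests(artifact: str) -> None:
    print(f"[T] running the automated test suite against {artifact}")

def deploy_to_acceptance(artifact: str) -> None:
    print(f"[A] deploying {artifact} to the acceptance environment")

def deploy_to_production(artifact: str) -> None:
    print(f"[P] deploying {artifact} to production")

def pipeline(artifact: str) -> None:
    # D is where humans write the product code and these very scripts;
    # from here on, no manual action is allowed.
    for step in (run_tests, deploy_to_acceptance, deploy_to_production):
        step(artifact)

if __name__ == "__main__":
    pipeline("webshop-1.4.2")
```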

Accountability

When process automation metrics are part of somebody's accountability, that person will also be mandated to automate as much as possible. This is for the simple reason that you can't hold somebody accountable for something they cannot influence. Therefore, when process automation is measured through some metric for which somebody is accountable, it is that person's prerogative to drop The Third Time Principle and instead adopt the Automate Everything Principle.

Accountability is a matter of top-down enforcement. It is also a key aspect of culture change. When we want to adopt a culture in which we allow ourselves to reap all benefits of automation, especially our IT processes, we can't get around the fact that we need to revisit the metrics by which we hold ourselves accountable.

Scalability


One aspect that I haven't really touched upon is scalability, although I did mention it briefly. Automation is a key aspect of scalability, not so much scalability on a technical level but organisational scalability. Manual actions always need a person to execute them. The more complex the activities become, the more experienced or knowledgeable the person needs to be. And before you know it, the activity requires a specific person within the organisation. Because you rely on this person, that person becomes the one with all the required knowledge; it's a vicious circle. Think about it for a second, and I'm sure that you can think of a process, or an activity in a process, where you rely on a specific person and you know that person by name. In fact, if you want the activity to be done, you will even call that person directly, because she's among the busiest people in the company.

When you want to keep activities simple, you need to automate them. The more you automate, the easier it is to keep an activity simple. Hardly anything is simpler than having a single command like 'do_complex_activity.py', assuming the automation is done in Python. Anybody can run this command provided they have the rights to do so, and when it's done really properly, anybody can run it at any time, because all the complexity of who can do what, and when, is taken care of by the automation code as well.
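
As a sketch of what such a single command could hide, consider the following. Every name in it, the users, the hours, the steps, is made up for illustration; the point is that authorisation and sequencing live in the automation code, not in somebody's head.

```python
#!/usr/bin/env python3
"""Hypothetical do_complex_activity.py: one command, complexity hidden inside."""
import getpass
import sys
from datetime import datetime

# Who may run this activity and when (illustrative assumptions, not a real policy).
AUTHORISED_USERS = {"alice", "bob"}
ALLOWED_HOURS = range(7, 19)  # office hours only

def complex_activity() -> None:
    # The multi-step work a specialist used to do by hand.
    print("step 1: preparing the environment")
    print("step 2: running the migration")
    print("step 3: verifying the result")

def main() -> None:
    user = getpass.getuser()
    if user not in AUTHORISED_USERS:
        sys.exit(f"{user} is not authorised to run this activity")
    if datetime.now().hour not in ALLOWED_HOURS:
        sys.exit("this activity may only be run during office hours")
    complex_activity()

if __name__ == "__main__":
    main()
```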

Concluding

In the end, you'll agree that automation is something we need to apply to all our processes, at least when it's the third time you're doing the same job. It improves quality and consistency. It also means that we can scale our organisation without the need to increase headcount. It is important to monitor the benefits of automation: without metrics it remains a matter of good faith in the short term, and it will not justify the investments in the longer term. Justifying those investments starts with understanding the benefits of automation and gaining insight into where automation has the most impact.


Thanks once again for reading my blog. Please don't be reluctant to Tweet about it, put a link on Facebook or recommend this blog to your network on LinkedIn. Heck, send the link of my blog to all your Whatsapp friends and everybody in your contact-list. But if you really want to show your appreciation, drop a comment with your opinion on the topic, your experiences or anything else that is relevant.

Arc-E-Tect


The text very explicitly communicates my own personal views, experiences and practices. Any similarities with the views, experiences and practices of any of my previous or current clients, customers or employers are strictly coincidental. This post is therefore my own, and I am the sole author of it and am the sole copyright holder of it.

August 30, 2018

Why CDM resonates with Waterfallians and not so much with Agilistas

Architecture often is a matter of perception.

Architects that consider architecture to be a noun often consider, for example, that a CDM (Canonical Data Model) is a solution to a problem. Architects that consider architecture to be a verb are very likely not considering a CDM at all. And although I have very strong personal feelings against architectural artefacts like a CDM, I'll try to explain in this post why they can be perceived as an addition to an IT landscape. I think this is very much an issue with a historic touch as well.


The School of Waterfallians

When you're a Waterfallian, when you come from the Waterfall school, architecture is part of the analysis and design phase. This phase ends with 'The Architecture'. And that picture, with its 1000-words compendium, will have to last for the rest of the project. So you have to design something, come up with an architecture, that will remain unchanged for months if not years. Your data model is obviously part of your architecture and will be included. Soon you'll draw the conclusion that a durable architecture requires a CDM, because how else can you ensure that your design fits within the landscape of existing and future designs?
If you have been raised with the standards and values of a waterfall organisation, then you also know that deviating from previous decisions is out of the question; that only causes delays and budget overruns. Waterfallic architects will often focus on edge cases, because their experience is that these are the reasons to double back on previously made decisions. It is therefore only logical that you want your list of edge cases to be as complete as possible before you get started. Waterfall is a self-reinforcing approach when it comes to architecture and culture.

The Agilista School of Architecture

If you come from the agile school of architecture, then architecture is part of the development phase. Architecture emerges. The architect is thus a developer, or better said, the developers create the architecture. The agile architect therefore only designs the rules of engagement; they merely create the playbook. A comply-or-explain concept ensures that the architecture can emerge and that the best architecture at that point in time can be defined by the developers. The best architecture within the current context emerges. So you do architecture (verb), and you do that by adhering to the rules. This is a continuous process; you have to play by the rules continuously. When we look at a CDM, we find that the CDM itself is not that exciting in an agile world; instead, the rules that a (C)DM has to comply with are. They are the grammar of the language.
When you were raised with the norms and values of an agile organisation, then you also know that sticking to earlier decisions, at least against better judgment, is out of the question; that only causes a huge waste of resources. Agile architects will focus on the most common cases, because their experience is that these have to be realised first in order to create value, validate hypotheses and provide insight into the next steps to be taken. It is therefore only logical that you want to have the first use case defined as soon as possible in order to get feedback. Agile is also a self-reinforcing approach when it comes to architecture and culture.

Culture and Values

The crux is in culture, and therefore in standards and values, and therefore in accountability. When the performance of an architect is measured by the number of times that decisions have to be reconsidered, then an ever decreasing value of this metric is, for a Waterfallian, an indication that things are improving. For the agile architect the opposite applies, and an ever increasing number of reconsiderations would be better... or not.
Because the agile architect does not double back on earlier decisions; the agile architect simply tries to find the best decision for the situation, every single time.

Hmmmm, then how would you evaluate the performance of the agile architect? That question is in fact not that hard to answer at all. The agile architect is not about the architecture but about the rules. The more stable they are, the better. At the same time, the rules have to be sufficiently complete to be able to develop products that offer the organisation a bright future.

The Architecturally Challenged

Who has the more challenging task? The Agilista or the Waterfallian?

That cannot be answered, because the two cannot coexist. But it is clear that both have a big challenge, each in their own system.

The waterfallic architect is mainly concerned with analysing and specifying everything in advance. It is someone who is good at overseeing many concrete aspects. She is someone who knows a lot, someone who thinks in as-is and to-be. That is to be expected, because in a waterfall organisation the architect is often the person with the most experience with a product or product group. The focus is mainly on 'what' and 'how' in the 'why, what, how' system.

The agile architect is mainly concerned with understanding the dynamics of the organisation. She is someone who is good at working at the abstract level and thinks in concepts. It is someone who understands a lot, someone who thinks in terms of moving from as-is towards to-be. That is to be expected, because in an agile organisation it is often the architect who has the most experience in how the organisation has developed. The focus is mainly on 'why' and 'what' in the 'why, what, how' system.



Thanks once again for reading my blog. Please don't be reluctant to Tweet about it, put a link on Facebook or recommend this blog to your network on LinkedIn. Heck, send the link of my blog to all your Whatsapp friends and everybody in your contact-list. But if you really want to show your appreciation, drop a comment with your opinion on the topic, your experiences or anything else that is relevant.

Arc-E-Tect


The text very explicitly communicates my own personal views, experiences and practices. Any similarities with the views, experiences and practices of any of my previous or current clients, customers or employers are strictly coincidental. This post is therefore my own, and I am the sole author of it and am the sole copyright holder of it.

July 24, 2018

A Treehouse for my youngest son

In 2015 we, my family and I, spent our summer vacation at the Mediterranean. Although that is irrelevant for this post, what is relevant is the fact that our youngest son was watching Treehouse Masters on satellite TV.

This is a TV show where a group of professional Treehouse Builders (if there is such a thing) build the most awe-inspiring treehouses you can think of. And he was awe-inspired. You have to know, our son has had many great ideas about remodelling our home. One of these ideas was to make a hole in the floor of his bedroom and install a slide. This would allow him to be downstairs quicker when called for dinner, so he would have more time to play. We, my wife and I, try not to discourage him by telling him it can't be done. Instead we agree to most of his ideas, but he has to figure out how to realise them, taking permits, structural integrity, budget, etc. into account. We're willing to apply for anything needed in case a grown-up has to sign a paper, but the tinkering is completely his responsibility. We always end up in a win-win situation: either he finds out it can't be done, or we find out it can be done.

Anyway, he was going to build a treehouse in the fashion of the treehouses in that show, as soon as we were back in the Netherlands.

You might be wondering what this has to do with agile architecture. I'll try to explain.

What happens in this TV show is that the team building the treehouse makes an amazing design and starts building it. They do this within 60 minutes, the duration of the show. Of course in real life it takes longer, but typically they have the treehouse done within a couple of days. One of the things they never show are the costs involved in building the treehouse. And with that, they also omit the costs involved in keeping the treehouse inhabitable. I'm using that word, inhabitable, consciously, because those treehouses are really of the kind you could live in without 'of the Apes' being your last name. My guess is that most of the treehouses (ever wondered why it is houses instead of hice, since the plural of mouse is mice?) cost a couple of thousand €'s to build. And I wouldn't want to think about the maintenance costs. Considering that most of the construction material is wood, you'll understand that there's a lot to be done to maintain it. To my son, although he's a really smart kid, it felt like they built the treehouse in minutes and for free. Not to speak of the running costs. He was 8 at the time. We forgave him his naivety.

It's actually not a lot different from running a product in the Cloud (I don't know why I almost always write cloud with a capital C). When you take some time to watch, for example, Amazon's videos on AWS, you see how easy it is to use the many services of AWS. Amazon has done a great job there. One of the things you don't see are the costs involved in building your SaaS products; nothing about time, nothing about money. Just like the Treehouse Masters never disclose anything about costs, structural integrity, maintenance and so on, neither the tutorials nor the training exercises mention the AWS equivalents. The tutorials and training exercises do mention various aspects of your architecture concerning reliability, stability, resilience and security, but it is all very generic.

Back home in the Netherlands, our son came up to me telling me that he was going to build a treehouse. He wanted me to film the endeavour, so I took out my GoPro Hero 4 Silver with the Gecko stand I got from a KickStarter project I backed. I asked him where the tree was where he was going to build the treehouse. It was going to be the apple tree in our garden. Only we quickly discovered that its branches wouldn't hold the treehouse. So the on-premise solution was there, just with not enough resources available, hardly scalable and, other than being really close to home, not suitable. Close to home was good, he would be at the dinner table quickly. Remember the slide earlier on in this post? Yup, not a lot of latency there. It's actually a solution you see a lot with on-premise solutions: you can bring resources really close to each other. I will not insult you by explaining the analogy, but feel free to explain it to show off your awesomeness in the comments below.

Like pretty much everybody in the Netherlands, we all have bicycles, so my son took his and went off looking for a suitable tree. After a while he came home, all excited: he had found the perfect tree. I took the GoPro and we went to 'his' tree. It was a 25-minute ride. This was an issue, and he realised it immediately. A round trip of 50 minutes would be very cumbersome for him, especially since his treehouse would become a sort of satellite of our home if it were up to him; it would use up about an hour of processing, uhm, playtime every day he was going to enjoy his very own treehouse. But he figured it would be worth it; he could use the extra exercise. Really, building and sort of living in your own house, up in the trees? How cool is that? And by all means, it was an excellent tree. Perfect branches, not too high up, and accessible without having to cross busy roads. It was everything a great cloud solution would offer us: great infrastructure to build our solution on, low entry costs and excellent connectivity. Yeah, the analogy still holds.

While we were at the tree, I took out the GoPro and our son started climbing the tree. He loved it. And what do you know, a little higher up in the branches there was a sort of ladder. He could reach the higher branches by using it. Apparently he wasn't the only kid in town that knew about the tree and thought it was awesome. Yup, he was going to face some sharing of resources when using this tree. And that got him thinking. The tree was clearly big enough to have more than one kid playing in it. I mean, it was an oak tree, virtually reaching all the way into the clouds. Well, in the fog it would be in the clouds. So that was fine. But what about his treehouse? How could he make sure that the other kids wouldn't get into his treehouse without him knowing about it and granting access? Yup, he needed to do some tinkering on that. Definitely. And he needed to figure out how to handle the construction of the treehouse as well. It would take some wood, plenty of nails and a hammer. A hammer we had, but not the nails. Or the wood. Or the expertise to really make it a safe treehouse. Some tinkering was required.

It seems so easy to just go into the cloud: create an Amazon, Microsoft Azure or Google Cloud account, enter your credit card details, and you're good to go. It really is low entry. But once you really understand what your objectives are, you understand that you need to think about the fact that you share resources. You're connected to the internet, so security really is something to think about, and you will need some new tools and technologies in the cloud, with the right expertise, to benefit from what the cloud can offer in a safe and cost-effective way. Just like our son has parents that allow him to experiment, come up with new ideas and tinker with them, and that help him with all the 'hard stuff' needed to first consider feasibility, then viability and eventually really build that treehouse, the Agile Architect should be available to you to help you with all the intricacies of the Cloud, working together with you to build the best solutions within the right context.

Concluding

That treehouse? Well, we got some wood from the local DIY store and the proper nails. We went to that tree and created a platform. It was his Treehouse MVP. We used it to see how often he would actually go there to play, to see how many kids would (try to) use the platform and whether or not he would like to play with them. The MVP cost him €23, that's without a k, and a full day of hammering about. Two weeks later he had a bright(er) idea. Instead of building a treehouse, he was going to build a sales booth to be placed in front of our house, and he was going to sell the apples from the tree in our garden. The treehouse he originally envisioned would've cost him about €1,700 in building materials, not to mention the security system he thought was needed to keep the other kids out of the house. He saved €1,677 by starting small. He made about €30 that fall selling our apples, which were donated by me to the great cause of teaching my youngest kid to think big, start small, learn and adapt.




Thanks once again for reading my blog. Please don't be reluctant to Tweet about it, put a link on Facebook or recommend this blog to your network on LinkedIn. Heck, send the link of my blog to all your Whatsapp friends and everybody in your contact-list. But if you really want to show your appreciation, drop a comment with your opinion on the topic, your experiences or anything else that is relevant.

Arc-E-Tect


The text very explicitly communicates my own personal views, experiences and practices. Any similarities with the views, experiences and practices of any of my previous or current clients, customers or employers are strictly coincidental. This post is therefore my own, and I am the sole author of it and am the sole copyright holder of it.

July 6, 2017

You know you're the Product Owner...

...when your product's users are complaining and you have to worry about your bonus.


Summarising

The Product Owner's bonus is on the line when users complain! You're not worried about the upcoming appraisals even though the users are complaining? And you're not looking at that boat you've fancied for so long even though the users are giving you great reviews all the time? Then you're not a Product Owner.

In this day and age of DevOps and Agile, the most coveted job in the world seems to be that of Product Owner. Never have I seen so many co-workers turn into a PO as recently. Operators turning into the Operations PO, testers turning into the Testing PO, security experts fulfilling the role of Security PO. It's amazing to see all these Product Owners mushrooming in organisations.
Understandably so, because their original jobs are nearing their expiration date, or so they're led to believe.

The other day I had an interesting discussion with one of the architects of a client of mine. We're discussing a lot these days about architecture, API's, services, data warehouses and other interesting stuff. But this time around he challenged me. Seriously.
This particular client is a typical Project-oriented organisation. Projects develop something, and once it goes into production and at times becomes business critical, a very efficient department takes over. (Just in case you're wondering why I emphasise efficient, read this post.) This architect is part of a department that is making the transition from a Project-oriented towards a Product-oriented way of working. It's a significant move and absolutely not trivial.
What's interesting is that the general understanding of the necessity of a mandated Product Owner has caught on with this client of mine. What hasn't caught on is that the PO is supposed to be somebody from the business. Take this with a tiny grain of salt though: by stating that the PO needs to be a business person, I mean that the PO needs to understand the business in which his products make a difference and generate value. Do you need to be an MBA? Nope, but you do need to understand the relevance of the product for your user.
And this is exactly the issue at hand. All the different so-called PO's my friend the architect is dealing with do talk with the user, but do not necessarily understand the relevance of the products the user is using. The Operations PO discusses the stability of the product, which makes perfect sense because the focus of an operations person is ensuring that the product is not crashing. One could argue that the relevance of an operations person lies in the fact that products will crash, which is a bit ironic. The Testing PO is of course focusing on whether or not the product conforms to the requirements and specifications. This is what testing is all about: is what has been built delivering what was intended in the first place? And with all the security incidents, global incidents at that, and all the new laws and regulations around privacy and whatnot, the role of the Security PO is cut out for them: focusing on limiting the risks for the organisation that come from the products being used by, well, the users.

Since all these PO's are doing their job extremely well, the products are up to par and are in fact creating value for the organisation. They surely are. But that is not to the credit of these PO's. The reason for this is that none of these PO's is concerned with the best product, i.e. the product that helps the user conduct business. They are all focused on the product delivering what was intended, namely stability, requirements and security.
I hear your brains churning on this, so let's make an assumption here to illustrate: What if the product crashes all the time, but when it doesn't it removes the hassle of manual steps in a complex process? And although it crashes, data integrity is guaranteed? The operations person won't like it and will very likely take the product out of commission. Why? Because operations is affected when stability is an issue.
So now it turns out that the product is fully according to spec and tests are 100% green, but the user will not stop complaining because the product is still not helping to drive business? The tester will not treat this as a testing issue but as a specification issue: the tests simply didn't cover this major flaw, namely usability. And even if they did, usability being a testable requirement is a novelty. It is at my client's organisation, and it is at many others.
Guess you can fill in the problem with the Security officer acting as PO yourself. Consider it a small exercise to flood your brain with some endorphins.

The issue here is that none of these so-called PO's is accountable for the success (or failure) of the product. None of them is. And this is what sets the PO apart from everybody else in the organisation:

The PO's bonus is on the line when users complain!

This means that when the accountability for a product's success, i.e. the level of complaints from users about the product, is not with you, you're not the Product Owner. It also means that unless you get a full mandate to make a success out of a product, you're not the Product Owner either.
Don't accept the role of PO unless you get a full mandate, which includes discretionary say over the product team, its roadmap, funding, etc.

Back to our wannabe PO's, because that's the correct word for them. Their bonus is not on the line; they're not accountable for the product's success, definitely not. But that doesn't mean that they're not responsible for helping to make the product a success. Their knowledge, insights, experience and general professional view on the product are invaluable input for the PO to create a success out of the product. The PO shouldn't have to ask for their input, but when the input is not provided, the questions should be raised. They're not impacted by bad reviews, the PO is. They do have to worry about their jobs though, because if the PO can't use them...


Thanks once again for reading my blog. Please don't be reluctant to Tweet about it, put a link on Facebook or recommend this blog to your network on LinkedIn. Heck, send the link of my blog to all your Whatsapp friends and everybody in your contact-list. But if you really want to show your appreciation, drop a comment with your opinion on the topic, your experiences or anything else that is relevant.

Arc-E-Tect

June 8, 2017

‘The Continuous Delivery IT Team’ fallacy


Summarizing

Considering Continuous Delivery to be something for your IT department is throwing your money out the window when doing an Agile Transformation program. If you want to do that anyway, make sure you throw it in my direction. IT is a business concern, and Continuous Delivery is a business concern.

Over the past couple of months I did a series of awareness sessions on Agile, Continuous Delivery and DevOps at a large client of mine. As is rather common, at this organization too the initiative to move towards Agile ways of working and Continuous Delivery, and the longing for DevOps, lies with the software developers. This makes perfect sense when you think about it, because it's the developers that want, or rather need, to make changes, and the pace at which they have to deliver those changes is only increasing. But I guess I don't need to tell you this.
But Continuous Delivery (or Deployment for that matter) is only possible when you don't consider it a software development thing. There really is no point in spending any time to move to Continuous Delivery when you are not planning to do it broadly. I'll get to that in a second.
During these awareness sessions, which I do with colleagues from the same team, we talk with non-developers like risk officers, sysadmins, project managers, architects, test consultants and so on, and we outline what Continuous Delivery and DevOps mean within the context of my client's organization. It's a common story and I won't bore you with the details, but obviously we touch upon the benefits of small increments, feedback loops and so on. The fact of the matter is that our story actually does sound like it's the perfect thing. And we consistently get the question: "So, really, it can't be all awesomeness, what's the catch?". On the rare occasions we don't get this question, we answer it anyway.

The biggest problem with Continuous Delivery and everything following from it is that it is not a software development thing, or even an IT thing. When you think it is and still go down that road, spare yourself the frustration and disappointment: drop me an email asking for my IBAN number and transfer the budget for your transformation project into my account. At least one person will be happy with you spending that money.
And yes, even when you think it's an IT thing, you're better off giving me that money. This is what I call:

'The Continuous Delivery IT Team' fallacy

Let me explain. First of all, let's make sure you understand what I consider Continuous Delivery to be. It's the process that produces a product resulting in business value, sustaining the organization, up to the point that it can be delivered to a user, and it will be delivered to a user as soon as possible. It's not a perfect definition, or even a formal one, mostly because there's so much more to it. But what's important is that it defines work on a product as being done when it is ready to be delivered to a user. The actual delivery is an explicit manual step that the Product Owner decides to take, whereas the rest of the process is preferably fully automated. This as opposed to Continuous Deployment, where the delivery to the user is also automated and hence triggered by the developer committing code to the source code repository. Again, this definition is close enough to what it is and fits the purpose of this post.

With Continuous Delivery, and agile working in general, you want to receive feedback on what you've been doing as soon as possible. Preferably you want this feedback to tell you, for one, that you've done well, and secondly, that you've actually contributed to the bottom line of the organization you work for. This is why we like to work at the granularity of user stories and epics, and deliver the changes to a user per user story, or at least per epic.
As you now understand, for every single user story, or at least every epic, you need to do everything that needs to be done for a release, because you're delivering to a user. Somebody is going to actually use that little piece of software you've worked on with such passion. The complexity may be limited because the increments are small, but you still need to release new or changed software. And there's the catch.
With software delivery, or product delivery in general, it's not just the product development team, or more specifically the software development team, that's involved. Other teams and people are involved as well. Think about marketing, legal and compliance, workers' associations, security and risk management. These are all teams that are not part of the IT department. And no, security officers are not part of an IT department, and in case they are, they most definitely shouldn't be. And the biggest catch of all: the 'business' needs to be involved from day -1. Unless all of these different roles, teams, people, stakeholders, whatever you want to call them, are on board, work in those same small increments, don't become a bottleneck and automate as much as possible, your Continuous Delivery efforts are a waste of everybody's time and your organization's money.
Back to your 'business': it's them that request features, not the user. It's them that pay for the development of the product, without necessarily using it. When that 'business' is not capable of, or willing to, define the product's features such that they can be delivered in tiny chunks, then you're out of luck and not much will come of Continuous Delivery in your organization.

The Product Owner is key in all of this. Being the hinge between the Product Team and the rest of the world, the PO is the one person that can and must ensure that the product is delivered incrementally, with business value visibly added with each increment. If the PO can't do this, you've got yourself some trouble. The PO, at all times, must be able to relate every single feature, one way or another, to an improved life for the user of your product and therefore a positive effect on the organization's bottom line.

So, Continuous Delivery is not an IT thing, it's a business thing. And don't let anybody convince you otherwise. This being the case, moving towards an Agile way of working and Continuous Delivery or Deployment in fact means that you no longer consider your IT to be delivered by a separate IT department, but to be an integral part of your business.
This does make perfect sense, considering your IT part of your business I mean. It makes perfect sense because more and more businesses are all about information and are run on information. IT is no longer a tool, like a glorified typewriter; it is in fact what produces business value. No IT, no business.

Thanks once again for reading my blog. Please don't be reluctant to Tweet about it, put a link on Facebook or recommend this blog to your network on LinkedIn. Heck, send the link of my blog to all your Whatsapp friends and everybody in your contactlist. But if you really want to show your appreciation, drop a comment with your opinion on the topic, your experiences or anything else that is relevant.

Arc-E-Tect

March 15, 2017

"We develop our frontends in Angular" is not an Architecture Principle

Did you ever hear that story about that architect who was telling the developers how to develop their solution? Badabing, badaboom!!!

Yeah, I know it’s a bad joke, there’s no laughter there. I’d better not aspire to a job as a standup comedian. The thing is, though, that there are still plenty of architects out there that not only define architectures, but also tell the developers how to build that architecture. It amazes me that this is the case.

I know that not that long ago I told you to ditch your architects when they are busy creating roadmaps. But actually you shouldn’t, because there’s a legitimate reason for defining roadmaps: they’re an excellent tool. So ditch the architect when roadmaps are all they deliver, because those are the ones you want to get rid of.

Now there’s another breed of architects out there that I have a hard time understanding. I really think that architects should be architecting. That’s what their role is, just like developers should be developing. People should be doing what they’re supposed to do, and preferably spend some time outside their boxes while thinking about what their job really is.
For an architect, the sole raison d’être is to have answers when asked questions, and at the same time to be enigmatic. Well, maybe that last part should translate into being helpful to whoever is asking the questions, so they can get to the answers themselves.

The architect, as I see it, is the one that defines the boundaries within which the developers can act freely while being confident that there’s no harm to the overall integrity of the enterprise, or the organization for that matter. The strength of the architect, the real power, is that this can be done by defining Architecture Principles. These are extremely powerful because you can link them directly to your organisation’s mission, vision and strategy. Well, if you can’t, then you should reconsider your principles. But if you can, you’ve got your business justification for your principles, and you won’t need to define the business value of those principles either. But just like any power tool, they’re not trivial, and to formulate the right principles you really need to be ready to spend some serious time. Furthermore, you need to limit the number of principles to a maximum of 10, just like the 10 commandments. More will be hard to remember by heart and live by, which is what you should expect from your developers. Fewer is better, but beware that you should still cover all the important bases in your environment.
There’s another tool that’s really powerful: the reference architecture. Which is in fact a model, one of the good kind, as you can read here. A reference architecture is a model of an application that meets all the architecture principles. But beware: it’s a model, not an architecture! It’s a visualization of what the architect had in mind when he defined the set of architecture principles. And more importantly, it’s one of many possible models.
So as the architect defining the reference architecture you should be aware that everything in it, can, should and will be taken with a grain of salt. And as the developer you should understand that the reference architecture is just a tool for your architect to illustrate what was meant by the architecture principles, and definitely should not be taken literally.

Ah, so what about the joke, the bad one?

Well, there’s this group of architects that don’t understand the difference between architectures and models and instead of defining architecture principles and reference architectures, they tell the developers what and how to develop such that they comply with the principles.
There are some serious problems here. For one, the architect omits defining architecture principles for whatever reason, likely because it’s too hard or because they don’t see the benefit. Shame on them! Then there’s the case where the architect ignores the fact that the developers are very likely more proficient at developing than the architect. Oh, and let’s not forget the fact that architecture principles are almost timeless, or at least stick around quite a bit longer than the typical technology, tool or framework. Yet this architect dismisses this simple fact of IT, and at the same time doesn’t keep the construction manual, because that’s what it is, in line with the current state of IT affairs.

For example, a couple of years ago I discussed, where ‘discussed’ is a euphemism for ‘verbally fought’, with some fellow architects in some kind of architecture board that it made no sense to dictate that a multi-tier architecture should always be deployed with each tier on separate infrastructure. Instead, the application should comply with the principle that data access logic is not to be mixed with business logic, nor with presentation logic. Some time ago I had a discussion, which was a real discussion, about whether or not we should define in the reference architecture that either Eclipse or Visual Studio should be used for coding. Uhm, I think not, thank you very much. And yes, this was a real discussion.
Then there’s the discussion about which part of an application is developed in which programming language. We had that discussion at a startup where I was Chief Architect. Guess what: the developers made those decisions, I just stated the principles that I wanted them to comply with.

So what it actually boils down to is that architects need to govern, and that should take up most of their time. They should safeguard the integrity of an organisation’s IT landscape and make sure that in all cases the applications, or rather the products, in that landscape can comply with the architecture principles. The architect should promote and facilitate an unambiguous way of working among teams. The architect should also be concerned with a ubiquitous vocabulary within the organization when it comes to the business, and with that vocabulary being translated by IT consistently.
The architect should not dictate that a particular programming language is to be used, that a particular infrastructure is to be deployed to, or that specific tooling is to be used. And in those cases where there’s a good reason for the architect to do this anyway, it should be implied instead of explicitly dictated. If all development is to be done in Java, the architect should ensure that only Java developers are hired. If everything should be hosted on Amazon AWS, the architect will have to pave the way for the teams to use Amazon AWS, making it the most favourable of the many hosting alternatives the teams can choose from.

As you might have guessed, the architect I’m referring to is the enterprise architect or the domain architect. Definitely not the application architect or the solution architect; the latter are both very much there to devise a solution for a problem. They’re more like designers than architects in that sense. Which brings me to another little piece of personal amazement. Why are there so many large organisations that have domain architects at enterprise level whose domains have nothing to do with the business, but are strictly technical? Probably this is some remnant from the time that we considered IT to be a cost center and centralization of IT was the holy grail for many a CIO. Nowadays, where we see IT as a product, something that is not just setting us apart from the competition but an actual product, there’s hardly any room for these architects, at least not at enterprise level. They should be working closely with product teams from the same product portfolio. Become Product Portfolio Architects, safeguarding the integrity of the IT landscape of the portfolio. Is it a demotion? Definitely not, it just means that they’re relevant again.

To paint a little context here, I’ve been that architect I’m complaining about. Emphasis on ‘been’. I enjoyed the technology too much to let it go, but as the domain architect I wasn’t allowed to work on the software, so all I had was telling the developers what to type. Initially I got away with it, because I was at least one of those architects that actually could still code. But really soon thereafter it became apparent that I hardly had enough knowledge or experience to really do the work. To me that was a real eye opener, and I had the good fortune of working with people that didn't hide the truth from me. They expertly conveyed to me that yes, I was more than a decent architect, but as a programmer I really wasn't up to it anymore. And as much as that hurt at the time, it allowed me to focus on architecting. I still code, because I still think I need to be able to understand things and to know that they might work. But as an architect, especially in an agile world, I find myself more and more facilitating, communicating and coaching. Working 3 sprints ahead of the developers, not because I'm that fast a thinker, but because being an architect I need to be able to set directions. Let the team worry about how to get to our destination and be their own satnav system; I, the architect, am more and more the person controlling the system's settings: whether or not to allow ferries or toll roads, to prefer back roads, to go for the scenic route, or the fastest, or the most economical route.

Thanks once again for reading my blog. Please don't be reluctant to Tweet about it, put a link on Facebook or recommend this blog to your network on LinkedIn. Heck, send the link of my blog to all your Whatsapp friends and everybody in your contact-list. But if you really want to show your appreciation, drop a comment with your opinion on the topic, your experiences or anything else that is relevant.

Arc-E-Tect

March 9, 2017

Principles of the API-First paradigm

API's are weird puppies. They've been around since forever, and whole battles have been fought in courts to gain access to API's. This was in the old days, when having a platform with an API for 3rd parties to develop for meant that you had a lot of leverage over those parties. Not providing them early access meant that they would be late to the party and all the sweets were already shared among the early guests. Having an API meant having power. Not providing access to your API resulted in lawsuits because you were a monopolist, using your position as the platform owner to bully your competition into obliteration.

Nowadays the situation is completely different: having an API means having a future. To survive as a business you need to have a platform, which in turn means you need to have an API. Unless you have an API, one that suits the needs of 3rd parties and that is easy to use, you're doomed. You'll be just an Application Vendor, and toast before you can say "I wish I had built a platform". An excellent book on the topic is "The Age of the Platform" by Phil Simon.

As you can read in my previous post, API's are the hardest thing to develop. It always helps to have some guidance when you're up for a difficult task, and principles often provide that guidance. There are quite a few other guidelines out there on the interweb that are more than helpful. Below are 5 principles of the API-First paradigm, followed by 5 derived principles, and at the end of the post I'm listing a couple of more than interesting resources on API design.

One final note before I go into the principles. An API is not much more than an interface into the depths of your platform. It discloses your platform and makes it possible for others to leverage it for their needs. It is therefore critical to have comprehensive documentation for your API. Now comprehensive doesn't mean that you need a big fat document describing everything you can think of that might be remotely related to your API. Typically you will need to document the technical interface of the API, i.e. the call signature of the API, including the input and output parameters of the API.
In addition you will want to document its behavior exhaustively, i.e. only develop the behavior your documentation mentions. It makes perfect sense to use BDD (Behavior Driven Development, see my post on the topic) for this purpose. By doing so, your users know exactly what to expect from the API, and it allows them to create stubs that behave the same as your API without needing access to your API's implementation. Finally, it helps to have a simple example of how to use the API, without making any assumptions as to why one would use it. This could be done by showing the relevant cURL statements.
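
The same kind of minimal, assumption-free usage example can of course be given in any language, not just as cURL statements. Below is a sketch using Python's standard library against a hypothetical endpoint; the URL, header and response field are invented for illustration.

```python
# Minimal usage example for a hypothetical GET /customers/{id} endpoint.
# The URL, headers and response fields are invented for illustration only.
import json
import urllib.request

request = urllib.request.Request(
    "https://api.example.com/v1/customers/42",
    headers={"Accept": "application/json"},
)
with urllib.request.urlopen(request) as response:
    customer = json.load(response)

print(customer["name"])
```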

Enjoy your reading…

Principles of the API-First paradigm

The API is the product.

It is owned by a product owner and is the responsibility of a product team, from cradle to grave. This team is responsible for managing the API and ensuring its proper operation. Don't consider an API part of another product, something that's a by-product, because that means you won't treat it with the proper dignity and soon you'll find yourself alienated from your users.

The API makes no assumptions about its Consumers.

It is robust and resilient enough to ward off any consumer with bad intentions, as well as ignorant ones that have no clue about what they're doing. All this without compromising the service level for all well-intentioned consumers. Mind that in most cases you're working on an API while making assumptions as to who will use your API and why, only to find yourself in a situation where your users are anything but the ones you envisioned, all using your API for completely different reasons than why the API was created in the first place.

The API suits the needs of the Consumer

The API is developed only when there is at least one consumer, and it is tailored to suit this consumer; this way the API creates business value through its consumer. Don't be so arrogant as to develop something that's supposed to be an API without having at least one consumer lined up to actually use it and tell you how great it is. Apart from the fact that working on something that is not being used is a complete waste of time, it also means that you're not really understanding that you should not make any assumptions about the API's consumers. No assumptions at all.

The API is built to last

An API never breaks compatibility with its consumers, although it may rely on the paradigm of 'ignore what is unknown' to allow for extending the result. API's, unlike the humans that build them, never break a contract. Period! This is to say that they will respect contracts, will add addendums to the existing contract, or, in those cases where the API, which is after all a product, no longer adds value, will void the contract, obviously respecting the contractual notice period. On a side note, it is fair to assume that consumers abide by the convention that they ignore what they don't understand or know, which is why you can add addendums to existing contracts without actually breaking them.
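As a sketch of the consumer side of that convention, the "tolerant reader" below only picks out the fields it depends on and ignores anything it doesn't recognise; the payloads and field names are invented for illustration:

```python
import json

# A response from version 1 of a hypothetical orders API...
V1_RESPONSE = '{"id": 42, "sku": "ABC-123", "quantity": 2}'
# ...and the same resource after the provider added fields in a later version.
V2_RESPONSE = '{"id": 42, "sku": "ABC-123", "quantity": 2, "currency": "EUR", "discount": 0.1}'


def read_order(raw: str) -> dict:
    """Tolerant reader: keep only the fields we rely on, ignore the rest."""
    data = json.loads(raw)
    return {"id": data["id"], "sku": data["sku"], "quantity": data["quantity"]}


# Both payloads parse to the same view of the contract the consumer depends on,
# so the provider's addendum did not break anything.
assert read_order(V1_RESPONSE) == read_order(V2_RESPONSE)
```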

The API is idempotent.

At all times, in any situation, always. It doesn't matter what you do, how you do it or when you do it: when you call an API, it always behaves exactly the same. Its result is always the same and therefore perfectly predictable. The order in which API's are called doesn't matter; as long as you put in the same values, you will get the same results.
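A minimal sketch of what that buys you, using a PUT-style upsert over a hypothetical in-memory store; a retry after a lost response leaves the system exactly as it was:

```python
# Hypothetical in-memory store standing in for the platform behind the API.
ORDERS: dict[int, dict] = {}


def put_order(order_id: int, sku: str, quantity: int) -> dict:
    """Idempotent upsert: calling this once or ten times with the same
    arguments leaves the store in the same state and returns the same result."""
    ORDERS[order_id] = {"id": order_id, "sku": sku, "quantity": quantity}
    return ORDERS[order_id]


first = put_order(42, "ABC-123", 2)
second = put_order(42, "ABC-123", 2)  # e.g. a retry after a lost response
assert first == second
assert len(ORDERS) == 1  # no duplicate orders, no extra side effects
```

Contrast this with a POST-style "append" operation, which would create a duplicate order on every retry and make the caller's life miserable.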

Architecture principles for the API

The API is clear in its error reporting

Errors are caused either by the API implementation, by the API consumer, or by something unexpected. It makes perfect sense to stick with the conventions of HTTP status codes; why invent something new when the existing conventions fit your purposes perfectly? But whatever you do, make a clear distinction between these three causes of errors.
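One possible way to map those three causes onto the HTTP conventions; the exception names are invented for illustration:

```python
class ConsumerError(Exception):
    """The consumer sent something invalid: maps to 4xx."""


class ImplementationError(Exception):
    """The API implementation failed: maps to 500."""


def handle(call):
    """Run a call and translate its outcome into (status, body)."""
    try:
        return 200, call()
    except ConsumerError as exc:
        return 400, {"error": str(exc), "cause": "consumer"}
    except ImplementationError as exc:
        return 500, {"error": str(exc), "cause": "implementation"}
    except Exception:
        # Something unexpected: don't leak internals, but be explicit that it is
        # neither the consumer's fault nor a known implementation error.
        return 503, {"error": "unexpected failure, try again later", "cause": "unknown"}


def bad_request():
    raise ConsumerError("quantity must be a positive integer")


print(handle(lambda: {"ok": True}))  # (200, {'ok': True})
print(handle(bad_request))           # (400, {... 'cause': 'consumer'})
```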

The API is always managed.

An API can be managed independently on three levels:

Technical level, i.e. it can be accessed securely and consistently throughout its lifespan -> scalability, availability, etc.

Application level, i.e. consumers can use the API based on a published interface, and the API behaves per this interface -> interfaces, authentication/authorization, etc.

Functional level, i.e. users of the consumer can be given access to the API based on the agreements made -> throttling, metering, monitoring and reporting, etc. (a minimal throttling sketch follows below).
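As a sketch of how small that functional-level management can start, below is a minimal token-bucket throttle; the rate and capacity numbers are arbitrary:

```python
import time


class TokenBucket:
    """Minimal token-bucket throttle: roughly `rate` calls per second,
    with short bursts of up to `capacity` calls."""

    def __init__(self, rate: float, capacity: int) -> None:
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the time that passed, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


bucket = TokenBucket(rate=2, capacity=5)  # 2 calls/second, bursts of 5
results = [bucket.allow() for _ in range(8)]
print(results)  # the first 5 calls pass, the rest are throttled until tokens refill
```

In practice this sits in an API gateway or management layer rather than in the API implementation itself, which is exactly why the levels can be managed independently.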

The API can be implemented independently of its interface.

They are truly decoupled. Multiple implementations can co-exist, and which implementation to use for each individual call can be determined by context. Mind that the interface should be technology agnostic, or at least as much as possible, and should disclose nothing about the implementation; it should only explain how the API behaves.
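A minimal sketch of that decoupling, assuming a hypothetical OrderStore interface: consumers only ever see the interface, and which implementation answers a call is decided by context:

```python
from typing import Protocol


class OrderStore(Protocol):
    """The interface: describes behavior, says nothing about implementation."""
    def get_order(self, order_id: int) -> dict: ...


class InMemoryStore:
    """One implementation, e.g. used as a stub or in tests."""
    def __init__(self) -> None:
        self._orders = {42: {"id": 42, "sku": "ABC-123"}}

    def get_order(self, order_id: int) -> dict:
        return self._orders[order_id]


class CachedStore:
    """Another implementation, wrapping any other store with a cache."""
    def __init__(self, inner: OrderStore) -> None:
        self._inner = inner
        self._cache: dict[int, dict] = {}

    def get_order(self, order_id: int) -> dict:
        if order_id not in self._cache:
            self._cache[order_id] = self._inner.get_order(order_id)
        return self._cache[order_id]


def choose_store(context: str) -> OrderStore:
    """Context-driven selection of the implementation behind the same interface."""
    base = InMemoryStore()
    return CachedStore(base) if context == "high-traffic" else base


print(choose_store("high-traffic").get_order(42))
print(choose_store("test").get_order(42))
```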

The API is technology agnostic.

The interface can be called by a consumer using any technology, without any assumptions about the technology used in the API implementation, and vice versa. This is pretty much an extension of the previous principle, but important enough to mention explicitly.

API's are like dogfood.

The API is used by the team responsible for it, in the same use cases as the consumers for which the API was intended: the "eat your own dogfood" paradigm. Whenever you need functionality that one of your API's provides, call that API yourself and make yourself dependent on it. Only this way will you truly understand your API, and more importantly, you'll be careful about how you treat it and therefore how you treat its users. After all, you'll be one of them.

Interesting Reading




Thanks once again for reading my blog. Please don't be reluctant to Tweet about it, put a link on Facebook or recommend this blog to your network on LinkedIn. Heck, send the link to my blog to all your Whatsapp friends and everybody in your contact list. But if you really want to show your appreciation, drop a comment with your opinion on the topic, your experiences or anything else that is relevant.

Arc-E-Tect

March 1, 2017

The Arc-E-Tect's Predictions for 2017 - Agile and Waterfall [10/10]

The Arc-E-Tect's Prediction on Agile and Waterfall

It's 2017, meaning 2016 is a done deal. Most of the predictions for 2016 that I made about a year ago, and never got around to documenting in any form, have proven to be absolutely bogus. Which leads me to come up with my predictions for 2017 and properly document them. So keep reading this post and hopefully you'll enjoy my predictions. Unfortunately we'll have to wait about a year to find out to what extent I got it all right, but for now, ...Agile.

Why Agile? Because in 2017 we finally realise that big upfront design and waiting for everything to be done before releasing is a really dumb idea, one that nobody with a shred of sense could consider sensible. And all the big fat waterfall projects that were being worked on are about done by now.

Agile in, Waterfall uhm... also in

Well, agile is finally in and is going to replace waterfall projects in those organisations where there is an active movement towards agile, which nowadays is the majority of enterprises. These organisations are heavily invested in dropping the traditional practices and adopting new, more business-value-oriented practices. It has taken a while, because these organisations also had large waterfall projects that, practically, had already progressed to a point where migrating towards agile was just not viable. Now that these legacy projects are close to being completely done, we see agile picking up massive speed.
But then there are still quite a few organisations that are invested in waterfall. This could be for practical reasons, or for legislative reasons that still require big releases. Or because these organisations have only learned how to talk the talk, but never went as far as learning how to walk the walk. That is unfortunate, but it is still the reality of the day.
In 2017 we will still see massive projects with Prince2 cycles, large upfront designs and maybe execution in sprints, but nothing like releasing as soon as something is releasable.

So in 2017, agile will be truly in, across the board. But having said that, waterfall will still be in as well.


Thanks once again for reading my blog. Please don't be reluctant to Tweet about it, put a link on Facebook or recommend this blog to your network on LinkedIn. Heck, send the link to my blog to all your Whatsapp friends and everybody in your contact list. But if you really want to show your appreciation, drop a comment with your opinion on the topic, your experiences or anything else that is relevant.

Arc-E-Tect