
January 14, 2018

The Arc-E-Tect predictions for 2017 - In hindsight [2/2]



Last year, like every year, I made some predictions on what would be in and what would be out in 2017. But unlike other years, last year I actually posted those predictions on the internet.
Before I start with my predictions for 2018, I will go back to my predictions for 2017 and see how things turned out.
This is part two; you can find part one here.

#6: KVI in, KPI out

"Forget about performance. Performance, in the end, means nothing when it comes to an organisation’s bottomline. What matters is value. However you want to cut it, unless value is created, it’s not worth the effort. And by value being created I mean that the difference between cost and benefit increases."

The first prediction I'm looking at in this post is a bust. Although more and more teams and organisations are transforming into agile adopters, the value-driven aspects of agility are still undervalued by most. I hardly come across organisations, departments or even just teams where success is measured in terms of realised value. Vanity metrics are pretty much still the norm. That's a shame, because it also means that the promise of applying agile concepts is still a long way from being realised.

#7: Products in, Projects out

"It shouldn't surprise you, but I'm not a big proponent of projects and instead love to see it when organisations switch to a product focused approach. But in 2017 it will turn out that I'm not the only one."

This is happening big time in a lot of environments I've been in. The main reason organisations transition from a project perspective towards a product perspective is CI/CD (Continuous Integration/Continuous Delivery). With reduced cycle times as a result of automating the software delivery process, it is almost impossible not to release a product early and keep on working on it. Hence, delivery to production no longer means the end of a team.
My main concern in these situations is the lack of a Product Owner who has a mandate over scope. The Project Manager typically does not have that mandate. Granting it is the next step.

#8: Heterogeneous in, Homogeneous out

"In 2017 we’ll truly face the uprising of new and more technologies, concepts, architectures, models, etc. And in order to be able to manage this we will finally understand that we need to embrace the fact that our environments consist of a multitude of everything. In many smaller organisations that are at the forefront of technology and that are working in agile environment it is a given, but now that large organisations have also set out to adopt the ‘Spotify’ concept and thus teams have a huge amount of autonomy, polyglot is key."

Yes! Most organisations have dropped their need for huge standardisation efforts. Instead I see that architecture principles are becoming highly popular. With that, and the gradual move towards autonomous teams, I do see a shift in mindset where homogeneous environments are no longer considered the answer to all IT problems. It is also a mindset shift from efficiency towards effectiveness.

#9: Activities in, Roles out

"The thing is, we’re moving, as an industry, in the direction where we want be able to get feedback as early in the process as possible, which means that every person concerned with creating and delivering a products will be involved in everything needed to create that product and ensure that it works as intended and more importantly as needed. In this setup, everybody is what we in 2016 called a full-stack developer."

In 2017 this didn't happen. The T-Shaped employee and the Full-stack developer are found in small organisations; large enterprises still have a culture based on decades of functional hierarchies. Contracts are still based on roles, in which T-Shaped and Full-stack profiles have yet to find their spot. As long as agile transformations are considered merely an IT thing, or even just a software development thing, it will remain very hard to grow cultures where the team is the atomic entity in product development and where work is performed as activities instead of being tied to roles and responsibilities.

#10: Agile in, Waterfall uhm... also in

"Well, agile is finally in and is going to replace waterfall projects in those organisations where there is an active movement towards agile. Which nowadays are the majority of enterprises. These organisations are heavily invested in dropping the traditional practices and adopting new, more business value oriented practices."

In 2017 I saw more and more organisations realising that the typical waterfall projects can actually be done in agile ways. This notion is causing the very existence of waterfall to be questioned. Do we still need waterfall? No, not at all. But we still need large projects. In 2017 I saw many managers as well as architects realise that large projects and waterfall are not different words for the same behemoth. Instead, there is now a clear tendency to actually do large projects by applying agile practices, and waterfall seems to be relegated to only tiny projects. Ironic, but pretty awesome.

This was part two of a two-part quick glance at my predictions for 2017. Yesterday I posted part one of the series, looking at how the first 5 predictions turned out. Next week will be about my predictions for 2018.



I hope you enjoyed this post. Thanks once again for reading my blog. Please don't be reluctant to Tweet about it, put a link on Facebook or recommend this blog to your network on LinkedIn. Heck, send the link of my blog to all your Whatsapp friends and everybody in your contact-list. But if you really want to show your appreciation, drop a comment with your opinion on the topic, your experiences or anything else that is relevant.

Arc-E-Tect

January 11, 2018

The Arc-E-Tect predictions for 2017 - In hindsight [1/2]





Last year, like every year, I made some predictions on what would be in and what would be out in 2017. But unlike other years, last year I actually posted those predictions on the internet.
Before I start with my predictions for 2018, I will go back to my predictions for 2017 and see how things turned out.

#1: Microservice in, SOA out

"In 2017 people will start looking at Microservices as something that is useful and way better to have in your architecture than services. So a Microservices Architecture will replace Service Oriented Architectures in 2017."

With a massive transition towards agile practices and organisations embracing scaled agile frameworks, it seemed inevitable: the Microservice Architecture (MSA) has been broadly embraced. Or has it?
In 2017 I've seen that those organisations that require true agile concepts in practice in order to be(come) sustainable also embraced MSA as the architecture of choice. The change in mindset required for MSA to thrive in an IT landscape, and in the organisation itself for that matter, turns out to be more encompassing than most assume. I've seen it fail in organisations that merely do agile, and succeed in those that are agile. Yes, MSA and Agile go hand in hand.

#2: API's in, Webservices out

"Okay, in 2017 we'll feel ashamed when we talk about web-services and SOA. Instead we'll talk about API's. This is closely related to my first prediction on Microservices, which you can read here."

Here I can be short: there's hardly any talk about web-services anymore. It's all about API's nowadays, and that has been the case for the better part of 2017. Over the course of the year, the notion of API's also shifted from merely glorified web-services towards true business services.

#3: Application Architecture in, Application Model out

"Yes, in 2016 I've been confronted with application models. Again and again I have been slapped with models of applications and yes, I've been on the other end of the slapin' stick as well. Shoving application models into other people's faces. Stuffing it down their throats, making them, no forcing them to understand."

Unfortunately this prediction didn't come true at all. Although it depends on how you look at it. In 2017 I've been in more discussions than before about Application Architectures, although in most cases people were actually talking about models. I guess the terminology is out of vogue, but a lot of architects still have a hard time using the correct terminology. Still, to me the Application Model isn't out and the Application Architecture isn't in. Just yet. With a more widespread adoption of MSA, we're bound to ditch the model and embrace the architecture.

#4: Internet in, Intranet out

"So the internet will be in, and no longer will we consider the intranet as the context in which our software is running. Talk with any cyber security firm and they will tell you that security has become a real issue since computers got connected. Networks are the root of all evil when it comes to viruses and the likes. The larger the networks, the bigger the problems. And with heterogeneity the number of threats only grew, probably exponentially."

This turned out to be a correct prediction, and like I envisioned, one of the main drivers has been security. And the lack of it, in many cases.
In most environments I've worked in and with over the course of 2017 there was a real notion that it was no longer affordable to ignore security at the application level: applications had to assume they could be accessed from the internet, even when that wasn't supposed to happen. Finally we know that assuming the network to be secure is an assumption that really does make an ass of u and me (assume -> ass-u-me).
The best aspect of this is a security-by-design mindset in most if not all people involved in product development.

#5: DevOps in, Scrum out

"I can be very short about this. Business has finally come to understand that IT is not something that enables them to deliver new products to their customers but instead IT is what they deliver to their customers. IT has become a product, and therefore an immediate business concern."

In 2017 it turned out to be not that short, unfortunately. What I've seen happening is that unless agility is a true business concern, a matter of business sustainability, DevOps is not something organisations want to embrace. This is primarily a matter of large enterprises, those with seemingly enough money in the bank to linger a while longer before feeling the need to ward off the threats of start-ups and their agility.

This was part one of a two-part quick glance at my predictions for 2017. In the next couple of days, possibly tomorrow, I will post part two of the series and look at how the remaining 5 predictions turned out. Next week will be about my predictions for 2018.


I hope you enjoyed this post. Thanks once again for reading my blog. Please don't be reluctant to Tweet about it, put a link on Facebook or recommend this blog to your network on LinkedIn. Heck, send the link of my blog to all your Whatsapp friends and everybody in your contact-list. But if you really want to show your appreciation, drop a comment with your opinion on the topic, your experiences or anything else that is relevant.

Arc-E-Tect

February 22, 2017

API Management in Azure using Microsoft's solution - Resources and Templates [2/2]

This is a series of posts on the topic of API's and API Management on Microsoft Azure. Not all posts in this series are directly related to Microsoft Azure and its implementation of API Management; that is intentional. The series also explains what API's are, how to create them and what it takes to manage them, conceptually and in reality. The reason for this series is that over the past 12 months I've come across many articles on the web, have been in many discussions and have advised various clients of mine on this topic. Sometimes discussing with fellow architects, other times with vendors; still other discussions were with developers and managers. But the overall situation has always been that none of the people at the other side of the table had a full grasp of what developing API's in an organisation means, what it entails to manage them, or what to worry about when deciding to go API. I hope you enjoy reading these articles, and when you feel like it, comment on them. I always take serious comments seriously and will reply to the best of my ability.

This post is the last in a series on API Management using Microsoft's solution.

API Management is not the same as an API Manager. In short, API Management is what you need to do with your API's, and an API Manager is something with which you can do this. There are a lot of different solutions that allow you to implement API Management; one of these is Microsoft's solution on Azure. The tricky part is that when you start reading their documentation on API Management, you'll read little about how to actually implement API Management in your organization using their solution. It's all very conceptual. This shouldn't be a problem, since the concepts behind API Management are more important than the actual implementation… until you want to actually implement it.
Read the previous posts on the topic to learn more about the concepts; continue reading this post to learn more about how to use Microsoft's solution on Azure, their cloud platform.

Resources and Templates


Finally, resources and templates, the bricks and mortar of the cloud. In the cloud you're typically dealing with resources. An infinite amount of resources, or at least that's how it feels. Everything in the cloud is, or should be, a resource. Some are very basic, like computing power, storage and networking. Others are more comprehensive, like databases, firewalls and message queues. And then there are quite a few that are truly complex and very useful at a high level, for example directory services.

The cloud, being what it is, is like a normal infrastructure on which you host something that generates value for your business. Hence everything you need to run business applications in a traditional hosting environment, you also need in a cloud environment. Obviously there are significant differences between traditional hosting platforms and the cloud, but when you don't take too close a look, you're likely not to see these differences.
So in the cloud you also need to define systems, attach storage, put a firewall in front, put connectivity in place, etc. You can do this by hand every time you need an application's infrastructure, typically through a portal, clicking your way around and assembling the infrastructure. But a more sophisticated and far better practice is to define the infrastructure in a text file, typically JSON for most cloud platforms, and use the cloud vendor's tooling to create the infrastructure based on this file. As such, the file becomes a template for a specific infrastructure setup you need. By providing a parameter-file you can externalise the specifics of the infrastructure. For example, the URL's used to locate a web-service can be defined in this parameter-file to distinguish between an infrastructure intended for testing and the same infrastructure intended for production runs.
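To give you an idea, here's a minimal sketch of what such a resource template looks like on Azure, an ARM template; all names and values are illustrative, not taken from a real deployment:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "serviceUrl": {
      "type": "string",
      "metadata": {
        "description": "URL of the web-service; differs between the test and the production parameter-file"
      }
    }
  },
  "resources": [
    {
      "comments": "A basic resource: a storage account for the application",
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2016-01-01",
      "name": "myproductstorage",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "Storage",
      "properties": {}
    }
  ]
}
```

The accompanying parameter-file then only contains a value for serviceUrl, one file per environment, so the template itself stays identical between test and production.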

This particular kind of template is called a resource template: it defines which resources are needed and how they are specified in order to run a business application.

One of these resources that you can use is an API manager, just like you can specify databases and virtual machines as resources. And here's your challenge.

The challenge is that an API Manager consists of three parts:
  1. Developer portal, used by developers to find your API's and their documentation.
  2. Publisher portal, used by API developers and the likes to manage the API's.
  3. Gateway, used by applications developed by the developers mentioned under 1 to access the API's managed by those mentioned under 2.
Each of these has its own context and is used by a different group of 'users'. The really interesting part of the API Manager is the API Gateway, as it is the component that exposes the API's you've been developing. This is your product. It is the resource that is part of your software. And the thing is: it can be shared, or limited in scope to just the product you're developing.
Ideally you would have one gateway per product, because the gateway, and particularly the API's it exposes, are part of your product, and as your product evolves, the API's that come with it will evolve as well. And of course you want a consistent life cycle across all components that relate to your product. Since the API gateway is just like any other resource in Azure, the above is perfectly doable. In fact, it is possible to include the API gateway in your product's resource template, so that when you provision the relevant infrastructure and deploy your product on it, the API gateway is provisioned as well.
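As a hedged sketch, assuming the ARM schema of the time, this is roughly how the API Manager could show up as just another resource in your product's template; the name, publisher values and apiVersion are illustrative, so check Microsoft's template reference before using them:

```json
{
  "comments": "Provisions the API Manager: gateway, developer portal and publisher portal in one resource",
  "type": "Microsoft.ApiManagement/service",
  "apiVersion": "2017-03-01",
  "name": "my-product-apim",
  "location": "[resourceGroup().location]",
  "sku": {
    "name": "Developer",
    "capacity": 1
  },
  "properties": {
    "publisherEmail": "team@example.com",
    "publisherName": "My Product Team"
  }
}
```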
Pretty awesome, if you're willing to forget that the costs of an API gateway are pretty steep. We're talking close to €2,5k a month, and there's no real usage-based price. Microsoft is really weird in that respect when it comes to pricing in the cloud; the whole pay-per-use model is not applied everywhere in their pricing schemes. I like Amazon better in that regard.

So an API gateway per product is not really an option in most cases, I would argue. Instead, I would advise having a gateway per product suite. In case you have teams that handle multiple products, scope the gateway to such a team, or otherwise scope the gateways to a department. Use this as a rule of thumb, though, and not as the law.

The point here is that you want your API's to be able to evolve with your products, and you want teams to be as independent of each other as possible. In addition, you want your API's to be operated independently of each other. And this is important: in Azure, API's don't scale, it's the gateway that scales. You want to be able to be smart about that, especially when it comes to policies and usage tracking, or rather generating value from API's being used. When a team is responsible for the success of its products, and therefore the value that is being generated, it becomes obvious that that team wants to be in control of what affects its success.

The alternative would be to work with an SRE approach, where you have a team that's responsible for your platform, including your cloud platform(s). This team would then realise the API gateway for the other teams, as a service. The catch here is that this platform team decides where your API's are 'hosted', or rather whether or not the API gateway is shared between teams. Unless your platform team is really committed and, more importantly, has a thorough understanding of what API's really are, and I mean really understands this, I would advise against this approach. I would oppose it for the sole reason that your API's are the window into the soul of your organization. When the API is not performing well, your organization is not performing well. And especially when you're going API first, and thus building a platform, you're screwed without proper API management.

In case you do decide to go the platform team route, make sure that your processes are completely automated, including the deployment of new API's as well as new versions of existing API's. My presupposition here is that you'll be working as agile as can be, deploying to production as soon as you're confident that your software is up to it, meaning that new software most likely needs new (versions of) API's. Don't make the platform team a bottleneck: work with them to deploy the changed API's consistently and repeatably. Better to abide by their rules than to put your own in place. Drop the whole platform team approach when they're not providing a 100% automated deployment process for your API's.

Then there are the portals. The developer portal is a tricky one because it provides access to your API's from a developer perspective. If you're nervous about unwanted developers nosing into your API registry, you should be really nervous, because it means your security is way, way, way below par. Remember, API's are different from regular services in that they are built such that they make no assumptions as to who accesses them. Unless you've built them that way, you'll be in for some really serious security challenges. That said, there's no reason not to have different portals for developers within your organisation and developers from outside your organisation, and to have API's exposed only to internal teams next to publicly exposed API's. Just make sure that this is an exposure aspect and not, I repeat, not an API implementation aspect.
So develop API's as if they can be accessed by just about anybody and their mother, and expose API's to a subset of this group.

Then there's the operational aspect of your API, captured in the publisher portal. Here you should take the approach that only the team responsible for an API has access to that API from a management perspective in an operational environment. In other words, access to an API's policies is for the team that 'owns' the API only. You'll need to take care of that. Period.

Mind that Microsoft is rapidly changing their API Management service on Azure. Most likely, as I type this, they're making it easier to use the service in your organization. The concepts as I've described them still hold, though. And as Microsoft will hopefully come to realise that a pay-per-use model is the way to go for API Management as well, you'll be able to treat API Management as part of your product instead of as part of your platform.

This concludes my series on API Management using Microsoft's solution. I hope you enjoyed it.



Thanks once again for reading my blog. Please don't be reluctant to Tweet about it, put a link on Facebook or recommend this blog to your network on LinkedIn. Heck, send the link to my blog to all your Whatsapp friends and everybody in your contact-list. But if you really want to show your appreciation, drop a comment with your opinion on the topic, your experiences or anything else that is relevant.


Arc-E-Tect

February 3, 2017

Let me spell out why Bimodal IT doesn't work

Summary

In short, in a Bimodal IT environment both modes need to move at the same pace in order to optimise the productivity of the organisation. Value is created by products in the hands of the user, not by parts on a shelf or piles of almost-ready products. This means that unless all relevant information is exposed by the Mode 1 systems, the Mode 2 systems will have to adjust their pace.

There's no excuse to use Bimodal IT as an excuse

Bimodal IT is an excuse for those that just don't want to accept that we live in a world where no one size fits all, a world where we need to live together instead of separately. Thinking that Bimodal IT might work totally ignores the fact that the old and the new together result in synergy, that the short term and the long term go hand in hand, and that classic IT models like the ancient Change/Run division can co-exist with the brand new DevOps way of working. We have to realise that in IT, only the binary system is black and white; everywhere else in IT there are more than 50 shades of grey.

If you're wondering about my view on Gartner's Bimodal IT model, you should read my post on it. It's available here.

Why this post? Because in the last week, although I stayed home for a day and a half with the flu, I had several discussions with colleagues about Bimodal IT, and unfortunately they considered it an opportunity. Not an opportunity to bridge the gap between systems that have a short life cycle and those that have a long life cycle. Or rather, short and long delivery cycles. To be completely precise, it's about the time it takes to think of a new feature and deliver it to a user. One mode has long cycles, the other has short cycles. Or rather, one assumes many features to be released over time, the other just a few. Think about a core accounting system and its front end. Accounting doesn't change that much over time, apart from some compliance reporting; front ends change all the time. Browser changes, mobile devices, etc.

Back to my discussions. The most symptomatic one was about "Bimodal IT being used as an excuse not to comply with the ancient and really seriously outdated processes in IT that are (to be) followed within this organization, and I assume unwillingness to try to modernise the processes. By just simply referring to Gartner and their definition of Bimodal IT and stating that this is the exact situation at their organisation, some of my clients' architects and project managers are implying that the processes only apply to what's running in the back-end and not to what they're working on, which is the front-end." [from my previous post]

Understanding the frustration

It's the frustration about ancient processes and the requirement to comply with them that is causing this excuse to rear its ugly head. Fine, I can see this, and since the organisation doesn't want to look into this and adapt its processes to become a little bit more 21st century, I can only agree with looking for every possible reason not to stick with the old ways.

But here's the catch. Well, let's wait a second here. I had another discussion on the same topic. This discussion was about Mode 1 systems needing to be stable because they're the core systems, forming the backbone of the organisation. Keeping in mind the adage "Never change a working system", they should be kept stable at all times.
Unfortunately, these systems exist in an ever changing world. So maybe they don't change, but their context does. And because they are so important, after all they're the backbone of the organisation, when troubles arise they need to be fixed as soon as possible. High speed and high quality. In addition, you want to address issues as soon as they arise instead of queueing them up and applying them in large batches.
It's the general misconception of the dinosaur: when you need quality, you spend a lot of time testing. I'll blog on that some other time; for now suffice it to say that time and quality are hardly related.

The catch

Then there was the catch. The catch being that Mode 2 systems are those systems that change very frequently. Possibly because a first iteration is brought to the user as soon as possible and new functionality is added regularly. Possibly because a new idea is tested and dropped when it fails, or productised when it succeeds. What most of these systems have in common is that they rely on Mode 1 systems, since that's where the organisation's biggest asset resides: information. You don't want to change an organisation's information model too much, especially not when the information has been built up over years. Information from these systems is exposed, to be used by other systems, typically Mode 2 systems.
Mode 2 systems are new, by definition. They're implementations of never-thought-of ideas, and therefore it is very likely that the Mode 1 systems do not (yet) expose the relevant information. Hence, with the advent of the Mode 2 systems, the Mode 1 systems all of a sudden need to be changed: regularly, often, quickly, consistently. All the while maintaining a high level of quality.
There is no chance that the Mode 2 people will allow the Mode 1 people to move at their own speed, because that means they'll be moving just as slow. It's the 'slowest boy scout in the line' case.

In short, in a Bimodal IT environment both modes need to move at the same pace in order to optimise the productivity of the organisation. Value is created by products in the hands of the user, not by parts on a shelf or piles of almost-ready products. This means that unless all relevant information is exposed by the Mode 1 systems, the Mode 2 systems will have to adjust their pace.

Decoupling not to the rescue this time

Mind that decoupling techniques like API's, ESB's, etc. do not solve this problem. Interfaces are owned by the systems that implement them, no matter what method is used to define those interfaces. So thinking that an ESB, an API manager or some other decoupling technology will prevent the Mode 2 people from having to move as slow as the Mode 1 people is foolish.
Introducing a Canonical Data Model or some other data-defined insulation layer will not help you either. In fact, that might, or rather will, introduce more complexity and thus slow things down even more.
And let's be honest: why would we need to insulate ourselves from working together, from understanding each other's contexts and limitations? Agile and DevOps are all about breaking down silos, about understanding that collaboration gets us further. In the end we need to deliver products into the hands of the user. Having said that, there's no excuse for not changing archaic processes and applying some LEAN to them.


Thanks once again for reading my blog. Please don't be reluctant to Tweet about it, put a link on Facebook or recommend this blog to your network on LinkedIn. Heck, send the link to my blog to all your Whatsapp friends and everybody in your contact-list.
But if you really want to show your appreciation, drop a comment with your opinion on the topic, your experiences or anything else that is relevant.

Arc-E-Tect

API Management in Azure using Microsoft's solution - Resources and Templates [1/2]

This is a series of posts on the topic of API's and API Management on Microsoft Azure. Not all posts in this series are directly related to Microsoft Azure and its implementation of API Management; that is intentional. The series also explains what API's are, how to create them and what it takes to manage them, conceptually and in reality. The reason for this series is that over the past 12 months I've come across many articles on the web, have been in many discussions and have advised various clients of mine on this topic. Sometimes discussing with fellow architects, other times with vendors; still other discussions were with developers and managers. But the overall situation has always been that none of the people at the other side of the table had a full grasp of what developing API's in an organisation means, what it entails to manage them, or what to worry about when deciding to go API. I hope you enjoy reading these articles, and when you feel like it, comment on them. I always take serious comments seriously and will reply to the best of my ability.

Applications

Applications are pieces of software that, when combined, perform a business function. Typically, applications perform various business functions, but ideally each application provides access to a single business function. If you think of it this way, most of the software we traditionally know and see in our IT landscapes, what we call applications, actually comprises multiple applications. You can use this software in a variety of ways in a variety of situations to perform a variety of business functions. So the name 'application' is incorrect, but understandable.
As always, the cause of this misnomer can be found in history. Deploying software used to be a complex thing in the past, as was software development, and even more so developing software that communicates with other software. A ton of different concepts, protocols and technologies have emerged over time to facilitate the connectivity between different pieces of software. (Skip the next section if you're not interested in a short but incomplete overview of software interoperability.)

Software Interoperability

Microsoft released (D)COM in the past, followed by ActiveX. The rest of the world joined the fray with the incompatible CORBA standard(s); there was RMI for the Java world and RPC for the C(++) world. Then we had other solutions based on message exchange in the form of MOM (Message Oriented Middleware), which was more protocol independent. Although MOM as a concept was awesome, the big vendors succeeded in ensuring that their products were not interoperable. And because most of the vendors realised their customers didn't really appreciate this, the ESB was invented. Again, it was there to make sure that pieces of software could interoperate and to fix the complexity of our monoliths. ESB's, as we all know, are a big waste of time and money and by far fail to deliver on their promises. I did write a post on the topic, which you can read by clicking here, and I'll write a little update on it soon, as many ESB aficionados don't see this yet.
So after we started omitting ESB's from our IT landscapes but kept the ESB concept around, we finally found ourselves in a world that actually delivered on its promises and was low cost: the Internet.

Web-services

You can skip this part if you already know everything about web-services and are not interested in what I have to say about them. It's relevant, but not so much that you can't skip over this section.

So web-services, both SOAP and REST-based, are interesting little beasts in our IT landscape. Web-services were the first really useful concept for realising an SOA, a Service Oriented Architecture. They're based on Internet technology and therefore implicitly comply with all the requirements of an SOA.
And then there's the fact that web-services are accessed through a simple interface: the URL. So even when you don't think about it, you're forced to limit the scope of what a web-service can do, because you're limited to a URL. And although behind many URL's there can be a single monolithic structure, and you can use all kinds of different ways to encode many functions within one URL, there really is no point in doing so, because working with URL's makes it harder to do things the wrong way than the right way.
And the cool part is that this is true even for high level business functions that require a variety of (technical) low level web-services. On every level, working with web-services (almost) requires you to develop software that does only a single (business) function. As such, a web-service can be considered an application.

Web-services make it harder to do it wrong than to do it right.

If you look at it this way, a web-service being an application, you can see that a web-service is a small piece of software that provides just enough functionality for another piece of software, or a person, to consider it useful.
In effect, every web-service, by means of its interface exposed as a URL, exposes a (business) function. And as soon as you aggregate various web-services into a new web-service, meaning you call several web-services in a particular order to implement a higher level business function, your new web-service complies with the same rules.

When we look at API's, they're the interface that we put in front of some software to expose specific business functionality. Look at the previous post on the topic by clicking here.

API's are products. There's an intended full stop there, because in many ways people tend not to think that way. But really, an API is where the boundary of a system is. The rest of the world accesses that software system through the API. The rest of the world might be a user interface.
So if you think about API's being products, or possibly a set of API's being a product, you're on the right track, because then you understand that a web-service is not a product in itself; instead it is a piece in a product's puzzle.
Why is this important? Because the API developer has a responsibility towards the consumers of the API: ensuring that changes to the API do not impact existing consumers while luring in new consumers at the same time. A developer of a web-service does not have this responsibility, or rather, there's a way out.

Remember that in a previous post I mentioned that an API does not make any assumptions about its consumers. They're not known in advance, and any assumed consumer is likely not the biggest fan. Web-services on the other hand do know their consumers. Or at least, they can be developed such that assumptions are made regarding the consumer. This is also the reason why there's a need for API management in some cases and no need at all in others.




Thanks once again for reading my blog. Please don't be reluctant to Tweet about it, put a link on Facebook or recommend this blog to your network on LinkedIn. Heck, send the link to my blog to all your Whatsapp friends and everybody in your contact-list. But if you really want to show your appreciation, drop a comment with your opinion on the topic, your experiences or anything else that is relevant.


Arc-E-Tect

January 16, 2017

The Arc-E-Tect's Predictions for 2017 - API's and Webservices [2/10]

The Arc-E-Tect's Prediction on API's


It's 2017, meaning 2016 is a done deal, and most of my predictions for 2016 (I made them about a year ago, but never got around to documenting them in any form) have proven to be absolutely bogus. Which leads me to come up with my predictions for 2017 and properly document them. So continue reading this post and hopefully you'll enjoy it. Unfortunately we'll have to wait about a year to find out to what extent I got it all right, but for now, ... API's!

Why API's? Well, API's are all the rage, and everybody and their mother is working on platforms. And as we all know: without API's there is no platform.

API's in, Webservices out

Okay, in 2017 we'll feel ashamed when we talk about web-services and SOA. Instead we'll talk about API's. This is closely related to my first prediction on Microservices, which you can read here.

API's are basically another word for web-services to many people, so there's not much difference here. Just as with Microservices, we'll see API's mentioned more and more where in the past we talked about web-services. Until people start referring to platforms, something that is really picking up traction. Still, I'm not talking about platforms here, but about API's.

The reason for this is their strong relationship with Microservices. Delivery of a platform is a strategic decision that defines the direction in which an organisation is thinking about products. API's, which expose the functionality of a platform, are products. You can read all about this in my series on API Management on Azure, which you can find here. Unlike web-services, which are pieces of functionality and/or data exposed via a well-defined interface, API's are always targeted at an external consumer of the service. In other words, an API never knows who's calling, nor does it make any assumptions about who's calling. Web-services on the other hand might very well be limited to a known set of consumers and make assumptions about those consumers.

The promise of web-services, predominantly a decoupling between functionalities in an IT landscape, is limited to those cases where the web-service, or rather its interface, is treated as an API. API's, almost by definition, are bound to deliver on the promise of decoupling. When developed properly, taken care of, and fronting independent pieces of software, API's are the closest thing to silver bullets in software we've come across so far. And since we love silver bullets, API's are in; web-services turned out to be just plain old regular bullets, so they're out.


Thanks once again for reading my blog. Please don't be reluctant to Tweet about it, put a link on Facebook or recommend this blog to your network on LinkedIn. Heck, send the link to my blog to all your Whatsapp friends and everybody in your contact-list. But if you really want to show your appreciation, drop a comment with your opinion on the topic, your experiences or anything else that is relevant.

Arc-E-Tect

September 3, 2016

Architecture Principle - Service Interface Compatibility Principle

Architecture principles are ground rules by which we develop our software. They range from ways of working to styles of coding, from software decomposition to business compliance, from infrastructure choices to security implementations. As you can see, they cover pretty much everything related to the development of business value: online, offline and internal.

Description

The typical SOA application landscape is defined by services. Through service orchestration the behavior of (online) services is implemented and business value is created. A service consists of two parts:
  1. The implementation of the service, 
  2. The interface of the service. 
Both components have their own respective life-cycle, resulting in a situation where, over time, the implementation of a service evolves, typically to support evolving non-functional requirements. For example, to move from a relational database to a NoSQL database, or from home-grown development to the use of an online service. Analogously, the service interface can evolve over time as well, typically because the resource it represents evolves over time. For example, the amount of data captured about a customer can increase over time from just a name and number to also include an email address, date of birth and marital status.

Service implementations typically change for (more) technically driven reasons, while service interfaces typically change for (more) business driven reasons.
Whereas the service implementation is more of a service-internal discussion, i.e. how the service is implemented is irrelevant as long as the defined behavior is provided, the service interface is a service-external discussion, i.e. how the service can be consumed is relevant to whomever uses the service. Thus, the service interface must be kept as stable as possible, i.e. when the service changes, the interface should stay relevant for existing users and become relevant for new users.
Service interfaces can change for three reasons, which have to be factored in when complying with this principle (a sketch of the first two follows the list):
  1. The resource represented by the service becomes richer, i.e. fields are added. In this case there is no impact on existing consumers, provided that the consumer abides by the principle of 'Unknown means Ignored', in other words, the consumer will ignore any fields that it wasn't aware of. For example, with a Person resource that contains a Name and a Phone number, the new version also holds an email address.
  2. The resource represented by the service changes syntactically, i.e. existing fields in the resource are renamed, for example because of an enrichment of the resource data. For example, with a Person resource that contains a Name and a Phone number, the new version renames the Phone number to Business phone and adds a new field Mobile number.
  3. The semantics of a resource change although the data doesn't necessarily change. For example, a Person resource becomes a Law Enforcement Agent, as all captured information pertains to agents in law enforcement.
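To illustrate the first two reasons with the Person example from above (annotated JSON; the field names and values are illustrative):

```json
// Version 1 of the Person resource
{ "name": "John Doe", "phoneNumber": "+31 20 123 4567" }

// Reason 1: the resource becomes richer, a field is added.
// Consumers abiding by 'Unknown means Ignored' are unaffected.
{ "name": "John Doe", "phoneNumber": "+31 20 123 4567", "email": "john@example.com" }

// Reason 2: an existing field is renamed.
// Consumers still reading 'phoneNumber' will break on this version.
{ "name": "John Doe", "businessPhone": "+31 20 123 4567", "mobileNumber": "+31 6 1234 5678" }
```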



Keep in mind that the service interface is a contract. It is not just the technical procedure/function call or the URI to access the service, but also the description of the input and output values, their constraints, limitations, etc. Hence the reference to 'behavior' in the text above. For more on behavior, see my post Oh Behave! How JIRA turned into a Veggie Salad.

Rationale

Since a service doesn't know about its consumers (the Producers make no Assumptions about Consumers Principle), it is unknown what the consequences of changing the interface of the service are. It is therefore impossible to quantify the impact of such a change, and hence the viability of that change is extremely hard to determine. So the interface cannot change, or rather, changes to the interface may not impose an impact on any of the consumers.

Implication

The implication of this principle relates to both the service and the consumer of the service. In all cases, the old interface should stay available for those consumers that rely on it.

Producer implication

Service interfaces are to limit the impact of change, meaning that syntactic changes to the resource are to be implemented by extending the resource, i.e. adding fields, instead of changing the existing resource, i.e. changing fields. This caters for changes 1 and 2 as described above. Service interfaces that require a semantic change, i.e. change 3 as defined above, in fact result in a new resource, and hence the identifier of the resource should change.

Consumer implication

Consumers must abide by the principle of 'Unknown means Ignored': any information in a resource that is unknown should be ignored.

References

Your API versioning is wrong, which is why I decided to do it 3 different wrong ways
Versioning REST Web Services
Versioning RESTful Services

July 12, 2016

API Versioning, another religious war

Summary: When you ask your best friend, e.g. Google, for an answer as to what is really the best way to handle different versions of the same interface, you'll find out that there is no straight answer. Actually, it is a religious war, and many of the arguments given for the various ways of versioning interfaces are really just opinions on aesthetics. Below you'll read my reasoning for my views on the various options you have to version your service interfaces. You'll probably find other ways of skinning this cat as well.


One of the core aspects we need to decide upon when working on our web-services is how we will handle different versions of the services.
There are two dimensions to this:
  1. Production vs. Non-Production (we're not doing DTAP but PorN). See the relevant principle here: Everything is Production or Not Principle
  2. Current version of the web service and the next version of the web service.
Although these dimensions seem different, in fact we're talking about one and the same challenge to overcome.
Basically we have a version of a service in production, and one version in development (in case we need to change the service). Once that development version reaches production, we have two versions in production, according to the Service Interface Compatibility Principle. As long as no customer can access the version that is in development, this version is not in production: according to the RBAC is Everywhere principle, every service can only be used by a user with the right role, and somebody who is not a developer cannot use a service that is in development. This is an important notion because it removes the necessity of a separate infrastructure to separate production from non-production, although it still might be a good idea to have some separation.
As a consequence, we can deploy a service and limit access to it through Role Based Access Control, and since development of a service always results in a new version of that service, we can see the solution to dimension 2 as a solution for dimension 1, essentially allowing us to treat both as the same.
Looking at service versioning, we have to distinguish between the service interface and the service implementation. Since both have their own life-cycle, we can treat them separately. In addition, when we talk about service versioning we are mainly concerned with the impact of a new version of a service on the consumer of that service, so we're concerned with the service interface and not so much with the service implementation. Within the scope of this post, we're therefore discussing the version of the interface and not its implementation.
Furthermore, when we talk about services, we talk about RESTful services. And finally, we talk about the version of the published interface which is different from the version of the source code implementing that interface or the service attached to it.
When the interface of a service changes, we either deal with a change that breaks the consumer, or one that doesn't. At all times we strive for the latter, since not breaking the consumer means no impact on the consumer. Changes that extend the resource are changes that have no impact, since consumers of RESTful services are to ignore what they don't know. So any new fields in the resource representation will be ignored. Consumers that ignore these fields will not be able to benefit from the change. These are the syntactic changes we want; versioning has negligible impact on the consumer, and the consumer does not need to be aware of the change.
Changes to the syntax of the resource representation that we don't want, but that are inevitable, are those that change existing fields. For example, the type of a field changes from an integer to a float. In this case we need to make sure that the consumer is not confronted with the change without changing as well. Hence either the service should no longer be accessible, resulting in an error at the consumer and limiting damage, or the service should be backwards compatible and return the old representation to that consumer.
As a third kind of change, we have semantic changes, which are changes to the actual resources. For example, a generic resource turns out to have become more specific and we start limiting representations to just those specific resources. Say we have the concept of a Customer, which is a Company with which we have a Contract, and then we also get Customers that are not companies but Consumers; although still Customers, they're not the same. As the semantics of the resource have changed, we can't talk about the resource in relation to the changing interface as if it were the same resource. The actual resource has changed, and so has its representation; therefore it impacts the consumer, which will break. In this case, again, we need to make the old service unavailable or keep on supporting it.
There are in essence three major philosophies when it comes to API or service versioning:
  1. Include a version in the URI that identifies the resource, e.g. https://www.foobar.com/api/<version>/<resource>
  2. Include a version in the URI that identifies the resource, using a request parameter, e.g. https://www.foobar.com/api/<resource>?version=<version>
  3. Include a version in the Accept-header of the request, thus using http Content Negotiation, Accept: application/vnd.<resource>.<version>+json
Of the three kinds of interface changes described above, the last one, the semantic change, is the easiest to address. Since the change results in a new resource type, the identifier of the resource, the URI, must change. This results in a URI in accordance with http://www.foobar.com/api/<version>/<resource>. Although the location of the version indicator is arbitrary, best practice is to have it as close to the domain as possible, typically right before or right after the resource. The philosophy here is that we're talking about a new resource and not a new version of an existing resource, hence the version indicator in front of the resource. Alternatively, the version is omitted and a new resource name is used. This is preferred when the resource should have a new name. In the example above, the URI could change from http://www.foobar.com/api/customer to http://www.foobar.com/api/customer/consumer and http://www.foobar.com/api/customer/company.
Note that it remains a challenge to decide what the old URI should return.
The first and second change types are actually similar and should indicate a new version of an existing resource. When we encode the version in the URI, it makes most sense to do so after the resource indicator: http://www.foobar.com/api/<resource>/<version>. The problem here becomes immediately clear, as it very much resembles the version in the previous scenario, and keeping these the same will be confusing. Alternatively, this would be a good candidate for the Accept-header option, keeping the URI the same and only using the Accept-header for content negotiation. The resource stays exactly the same, and so does the identifier of the resource. The version of its representation changes, though, and the indicator in the request depicting what format is accepted by the consumer makes this explicit: Accept: application/vnd.<resource>.<version>+json.
There is a rather major drawback to this approach in that it becomes rather complicated to quickly test the solution, since from a browser the http-header is almost never open for manipulation. So this solution doesn't allow for usage from the browser address-bar. This is typically solved by supporting both versioning strategies simultaneously, as sketched after the list below. This allows for two major advantages, besides support for both programmable consumers and browsers:
  1. The URI approach to versioning will return the relevant documentation of the API for that particular version if that version exists, or an http error when it doesn't.
  2. The baseline URI can remain and stay usable. Access to the resource is through http://www.foobar.com/api/<resource>, and the accepted version by convention, when none is specified in the header, is the first version of the resource, or alternatively the oldest supported version in case versions are at some point dropped.
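As a sketch of how the two strategies line up, using the hypothetical foobar.com API from the examples above:

```
# Version encoded in the URI; usable from a browser address-bar
GET http://www.foobar.com/api/customer/2

# The same version through content negotiation; the URI stays stable
GET http://www.foobar.com/api/customer
Accept: application/vnd.customer.2+json

# No version specified; by convention the oldest supported version
# of the representation is returned
GET http://www.foobar.com/api/customer
```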
By following the above, the intentions of services, and therefore of API's, remain intact.

When you ask your best friend, e.g. Google, for an answer as to what is really the best way to handle different versions of the same interface, you'll find out that there is no straight answer. Actually, it is a religious war, and many of the arguments given for the various ways of versioning interfaces are really just opinions on aesthetics.
Above you've read my reasoning for my views on the various options you have to version your service interfaces. You'll probably find other ways of skinning this cat as well.
My advice to you is to closely look at your specific situation, what you want from your interfaces, the need for (backwards) compatibility and then choose wisely.

July 4, 2016

Why Uber is so applicable in the agile delivery of software

After a short week abroad and various discussions with some excellent developers over there, it is clear to me: Uber is not just very convenient when you want to go from A to B in Cairo, it is also extremely applicable when discussing the way we develop web-services.
Let me explain to you how Uber got into being. In fact, how Uber Black got into being. It all happened in New York City, a buzzing city where it pays not to have a car and have somebody else drive you. Reason being that parking in NYC is rather problematic if not costly. So when you walk around in the Big Apple, you're bound to see those yellow cabs, the iconic taxis in New York City. And while you're at it, you'll see those large limousines, Lincoln Towncars mainly. These are black, and these are very convenient, luxurious means of getting around in New York. The problem with these limo's is that you have to call them, they're not allowed to respond to the cab-hailing crowd on the sidewalks. The crowd is for the yellow cabs, the customers of the limo's are at home. You have to call them. They're somewhere out there and you have to wait for them to arrive. And that's where the problems start since neither driver nor drivee knows where the other person is. So when it takes too long for the limo to arrive, you're bound to become impatient and call another one, or just walk out the door and hail a yellow cab. They're in abundance. So eventually the limo driver shows up at your doorstep with you already gone. A situation that's not cool for the driver and so next time he hits a traffic jam, he'll just ignore you and wait for another ride, even though you're all patience and waiting for the limo to arrive... soon. It's a bad experience for everybody.
Uber solved this by providing both customer and driver with an app that shows exactly where the other person is. So you know where the heck the driver is, and the driver knows exactly where he has to pick up his customer. Furthermore, as everywhere in the world, customers in New York don't trust the cab-meters, so there's always a dispute about the cost of a ride. Another problem solved by Uber: they want you to state where you want to go, and by means of the GPS of your phone they know where you are. So you get a quote for the price and that's it. No more disputes about your fare, and since you've provided your credit card, no need for change. Oh, and those weird, rude and above all smelly drivers get a bad rating, so when a badly rated driver responds to your request for a car, you just cancel the ride and try again.
So now let me explain the way Uber works, from a consumer perspective, for those that are not (yet) familiar with the disruptive nature of Uber. First of all, you need to have an account with Uber that holds your name and your payment method, as well as your phone number. So basically Uber wants to be able to know you, wants to know how you'll be paying for the services and how to contact you. Once you've got your account all set up, you can use their service. Even in cities without limo's you can use Uber; it's called UberX and it's an alternative to other taxi services.
When you need a car, you typically start the App on your phone and it recognizes where you are, based on your GPS location and it asks you where you want to go. Then you can see the cost of the fare and when you're okay with this, you request a car. This is where the fun starts, because a driver accepts the ride and you can see who it is. And you can see his rating. You like it, you ride it.
As soon as you get to your destination, all is done. No need to pay, because you're already set up to pay by credit card. And you're asked to rate the driver and maybe put some explanation on why you rate him the way you rated him.
Basically this is exactly how we develop, or should develop, web-services. I'm sure I don't have to explain this uncanny analogy. But I will, while I'm at it. Why stop?
When we develop web-services, we know where we are and where we want to go next. Actually, it's so much like traveling by any means other than driving yourself. We always need to know where we want to go. It is the first thing you do when you take a bus, a train or a taxi: you decide where you want to end up. It's pointless to just start driving, moving around and around, and only stop when you've reached your destination. Fun when you're backpacking, but very inefficient when you're not.
So basically, when we develop web-services, we start with the finish, as we start with defining what we want. These are the requirements. Mind that the more specific the requirement, the more likely we get what we need. Next we check what it's going to cost to get there. Determining the cost of creating the service allows us to decide whether it's worth it. We already know how to pay for it; typically it's in terms of time. Then, if all checks out, we request an implementation. There are a lot of different ways to develop a service, and we have to decide whether we go for a quick online solution, a more solid standard product, or a custom made solution. It's basically all about the rating of the driver, nothing more. Then we develop. Once we get to our acceptance testing, we verify whether the destination is exactly where we wanted to be in the first place. The more stars, the closer to what we wanted.
So let's put a little agility into the mix. The Uber driver uses a navigation tool on his phone to determine where he needs to go, and when he strays from the route, the navigation tool will make sure to get him back on track. The awesome part here is that the navigation is turn by turn; in other words, only at specific points on the route will it provide input on how to adjust the course. This is what we do as well during the development of products: we have defined points in time to determine whether or not we are on the right track. And just as the customer can tell the Uber driver that she wants to go somewhere else, we can adjust our destination as well. Mind that the customer of the Uber driver can't tell him verbally, but instead has to provide a new destination in the Uber app. The reason for this is that both the customer and the driver then know exactly what the new destination is, because they use the exact same terminology. And the Uber driver adjusts his route to get her there. In effect, the customer tells the driver under what circumstances she's happy to accept the end of the ride, i.e. once she's driven to her (adjusted) destination. We do the same: we change our to-be-developed product by redefining the requirements, and we accept the product when the adjusted requirements are met.
Now, folding the Uber analogy and agile software delivery into a single, stronger strain of software engineering DNA, we have to look into some additional little tidbits.
For one, to really make software delivery in an agile way work, you need to make sure that you deliver features in a way that makes them useful, which means it is best not to focus on functionality but on behavior. This allows everybody to regard the software being developed as an actor in the business of the company. The difference is that you not only have the basic functionality of the Uber driver getting you to your destination, but you can also define how to get there by means of non-functionals: how fast to get you there, whether or not to stay on the highways as much as possible, etc. When describing behavior, the non-functionals are an integral part of the behavior instead of yet another set of requirements, typically the ones that are left out when discussing the system and only brought in at the end, right after we find out the system is not performing.
Just as we have, often untold, additional requirements for our driver that are typically related to his behavior, we have these for our services as well. And just as we first vet the driver proposed by Uber and cancel the trip when we don't want that driver, we need to do the same with our services. There are always many ways to solve the business service puzzle, but most will be rejected because they don't expose the right behavior.