I've been a developer practically ever since I was introduced to
computers. First I developed for my Sinclair ZX81, then I moved on to
MSX, made the transition to DOS in 1989 and moved along with Microsoft
to Windows. I've developed on OS/2 and various flavors of Unix, and
I've used a multitude of programming languages, ranging from BASIC to
Modula-2 and Pascal to C, C++ and Java.
I studied Computer Science at university in the 20th century, and back
then we were taught to design and develop for flexibility. The
reasoning has always been that you should be prepared for the fact
that once you're done and you've delivered, there will be new
releases, new requirements, changed requirements, and so on. Possibly
even while you were working on the system you would be confronted with
new insights, new ideas, new and more important requirements. Re-use
was all the rage: you should prepare yourself for changes by being
able to re-use 80% of the code, the features and the functionality
already created, and only add an additional 20% of design, code,
testing and documentation for the new stuff.
Even in the wonderful world of OO, CBD and modularity, the beat of the
drum was about being flexible. Create a flexible solution, one that can
bend and twist all over the place to accommodate every change in the requirements, be it functional or non-functional. The whole idea was, and with many of us still is, that when you design or develop something, it should be so flexible that a new requirement is already implemented by 80% of the existing code.
I remember reviewing a system that was almost entirely written in database tables. An engine would simply read a set of tables, and an object model was instantiated and executed. That in itself is not unfamiliar where, for example, workflow engines or rules engines are concerned. But this particular piece of software put an actual programming language in database tables: I could write a simple "Hello World" application in the database. The reasoning behind this piece of ingenuity was that, once the application was deployed, there wouldn't be any deployment stress, testing of the software and so on; just updating the tables would provide new functionality. That held, of course, until the first update of the tables wrecked the system, because there was no syntactic or semantic checking of the 'new program'.
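To make the anecdote concrete, here is a minimal sketch of such a "program in database tables": an invented `instructions` table holds opcodes that a tiny engine reads and executes in order. The table layout and opcode names are illustrative, not the actual system I reviewed.

```python
import sqlite3

# An in-memory database standing in for the production tables.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE instructions (line INTEGER, opcode TEXT, arg TEXT)")
conn.executemany(
    "INSERT INTO instructions VALUES (?, ?, ?)",
    [(10, "PRINT", "Hello World"), (20, "END", None)],
)

def run(conn):
    """Fetch instructions ordered by line number and interpret them."""
    for line, opcode, arg in conn.execute(
        "SELECT line, opcode, arg FROM instructions ORDER BY line"
    ):
        if opcode == "PRINT":
            print(arg)
        elif opcode == "END":
            break
        # Note: an unknown opcode is silently skipped here -- exactly the
        # missing syntax/semantic check that let a table update wreck
        # the real system.

run(conn)  # prints "Hello World"
```

Updating a row changes the program with no compile step and no validation, which is both the appeal and the danger the anecdote describes.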
Flexibility is something that needs to be provided by frameworks; this is why you create frameworks. But the thing with frameworks is that it is extremely hard to write a framework that is actually usable as a framework. They tend to be either too restricted to a specific purpose or too generic. Most of the popular frameworks started out as a specific solution that was implemented repeatedly, based on a well-known enterprise design pattern. After implementing the pattern for the third time, the designer or developer recognizes the parts that keep changing and the parts that remain, and he'll be able to start developing the framework. This even holds true for design-time frameworks.
With the advent of eXtreme Programming and Agile project management (any flavor), the world of software development also got an overhaul. No more lengthy analysis and design phases prior to development; instead, short cycles to get the product out. In the old days (which in IT means 'not that long ago'), analysis and design were considered extremely important, because one wouldn't want to miss a requirement, and one wanted to prepare for the next version and build in hooks to facilitate additions without corrupting the overall architecture. But the longer the time between conceiving an idea and realizing it, the further apart the idea and its realization end up. Hence the short cycles of today, which still have an analysis and design phase, but a shorter one. And no, one can't do without analysis, design or even architecture in agile, but more on that in another post.
Now, what is so dramatically different about flexibility in the 21st century compared to the previous millennium? Well, the difference is in the mindset of the developer. The developer, and even the designer, creates for replacement, not for change. By this I mean that when designing or developing nowadays, one needs to keep in mind that pieces can be ripped out and replaced by something else entirely. Without hard feelings. And yes, letting go of your baby while it is still in its infancy is a change in mindset. Today you create something that is not supposed to last forever, but something that lasts until the next big thing. No more stone-carved COBOL systems that will survive humanity.
The big difference here is that every application is a business functional framework: one that targets a specific area of business functionality but caters only for the functionality that is required. New functionality is added in a consistent manner; existing functionality is removed when no longer valid, or replaced by a new piece of code when it needs to change. Mind that the key here is 'replaced'. Code is not amended to implement the change. Because it is way too cumbersome to change the code, it is better, faster and more convenient to build the new functionality from scratch and add it to this business framework. This flexibility through agility allows IT to build business applications that are consistently 'correct' in design and easy to support and maintain.
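One way to picture this "replace, don't amend" style is a registry of independent handlers, one per requirement, where a changed requirement means registering a freshly written handler over the old one. This is only a sketch; the registry and the discount rule are invented for illustration.

```python
# Registry mapping a piece of business functionality to its handler.
handlers = {}

def register(name, handler):
    """Add new functionality, or replace an existing piece wholesale."""
    handlers[name] = handler

def handle(name, *args):
    """Dispatch a request to whatever handler is currently registered."""
    return handlers[name](*args)

# Iteration 1: a simple discount rule.
register("discount", lambda total: total * 0.95)

# Iteration 2: the requirement changed. The old rule is not patched;
# a new one is written from scratch and registered in its place.
register("discount", lambda total: total * 0.90 if total > 100 else total)

handle("discount", 200)  # returns 180.0
```

The framework stays consistent because every change goes through the same mechanism: swap the whole unit, never edit it in place.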
The whole premise of agile development is to scope an iteration purely to those requirements that are immediately necessary and can be developed within the constraints of the iteration (timelines, budget, resourcing, etc.), while not taking (too much) into consideration what might be required in the next iteration. This basically means that whatever you create may be thrown away in the next iteration because it conflicts with the requirements of that iteration.
The idea, embraced by many agile adherents, that in agile development there is no place for analysis and design, and certainly not for architecture, is of course a consequence of architects, designers and analysts not being able to analyze, design and architect while keeping in mind that every single component in the solution should be disposable. They quite often still create flexible systems instead of agile systems, hence the conflicts with the developers.
Agile systems are designed by keeping in mind that the components in the system are decoupled on a functional level, that is, on a requirement level. This boils down to the analyst defining the requirements, and their impact on the technology, such that the requirements themselves are decoupled from each other. Dependencies between requirements result in tightly coupled implementations; when two requirements depend on each other, they should be treated as one. Consequently, the designer should design the solution for each requirement independently of the others. Re-use of functionality already designed to meet one requirement should not be in the design... or at least not in the first iteration of the design. Preferably, re-use should only be taken into account in the technical design of the solution, if in the design at all. Why not leave this decision to the developers?
This will allow for the assumption that what is designed today will not be there tomorrow.
So where does this leave the architect? It is clear that every iteration will have its analysis and its design phase. You cannot get around it. It is the rule of problem solving: understand the problem (analysis), think of what the solution(s) might be (design), implement the solution that is most feasible, if feasible at all (development). You just cannot solve a problem you don't understand.
But where is the architect in all this? Surely you can't architect a system for which the requirements are not clear at all, since they're made up and prioritized as the project goes, or rather as time moves on. The architect defines the architectural principles that the solution needs to comply with in every iteration, to safeguard the consistency of the solution, the project's deliverables, in the grand scheme of all systems and solutions in the environment. The architect will define that at all times user interface and business logic need to be separated, in design and in implementation, to run on different (logical) servers. The architect will define that users of the system who are external to the company need to be explicitly authenticated using a secure token, whereas users accessing the system through the local LAN can be authenticated using userid/password only. The architect will define that the technologies allowed for implementing the solution should be platform agnostic, allowing the solution to be deployed on Windows, Unix, Linux and OSX systems. And the architect will verify compliance with the architectural principles prior to deployment in production.
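An architectural principle like the authentication rule above can even be written down as an executable compliance check that survives every iteration, however much the components themselves get replaced. The function and category names below are invented for illustration.

```python
# Sketch of one architectural principle as an automated compliance check:
# external users must present a secure token; users on the local LAN may
# also authenticate with userid/password.
def allowed_auth_methods(user_origin: str) -> set:
    """Return the authentication methods the principle permits."""
    if user_origin == "external":
        return {"secure_token"}
    if user_origin == "lan":
        return {"secure_token", "password"}
    raise ValueError("unknown origin: " + user_origin)

def complies(user_origin: str, auth_method: str) -> bool:
    """Verify a login mechanism against the principle."""
    return auth_method in allowed_auth_methods(user_origin)

complies("external", "password")  # returns False
complies("lan", "password")       # returns True
```

Checks like this are what the architect, as design authority, can run against each iteration's deliverables prior to deployment.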
The architect is or heads the design authority.
Without architectural principles that guarantee the integrity and consistency of the IT landscape, yet support, or at least allow, agility in that same landscape, agile methodologies will create chaos and therefore significantly increase the TCO of the landscape.
As always, I welcome any comments. Whether you agree or disagree or just find this post informative. Please let me know where you think I am wrong or right and moreover why you think this. It will be very educational for me as well as the other readers of this post.
Thank you for reading this far.