Sumo, Architecture, and Enterprise Agile
By Mikhail Ganchikov | January 29, 2013
Here is a riddle for you: who (or what) is both so strong and solid that it is very hard to move, and yet can move easily at the same time?
The answer: a champion sumo wrestler. The other answer: a well-thought-out application architecture.
Why are we talking about 400+ lb Japanese athletes in the context of Agile software development? Let me explain.
The opinion that Agile development projects do not require a thoroughly worked-out architecture in advance (the so-called Big Design Up Front, or BDUF) is both quite common and quite controversial. In waterfall, the design phase follows the development of functional specs and precedes implementation. In opposition to that approach, the Agile Manifesto implies that the best system designs emerge from the work of self-organizing teams. The architecture thus matures gradually throughout the development cycle, as the product evolves iteratively. The team’s knowledge of both the business and the technology grows over time, and the architecture is continuously reviewed and refactored. This process serves the goal of giving customers the software they really need, rather than a masterpiece of elegant upfront design that delivers little business value.
This approach makes a lot of sense to me: it just doesn’t seem right to invest in architecting for system behavior that can change significantly over the life of the application. Moreover, it is likely to give developers lots of grief as they try to keep the original design in place and cope with evolving requirements at the same time. In fact, it can result in so many complex workarounds that after a while the code may become too convoluted to understand even for the very developers who wrote it.
On the other hand, it is obvious that if no overall architectural decisions are made at the start, the code may soon become so complicated that it is almost impossible to maintain and extend. That approach is acceptable if we are willing to whip up something relatively simple and then throw it away completely (e.g., a proof of concept), but not for any enduring system.
Given that the no-architecture approach will not work for developing large enterprise systems, and that coming up with a fully worked-out architecture would be un-Agile, what is the right level of design effort in an enterprise Agile project?
According to Gartner, demand for maintainable code and for tools to evaluate its maintainability is an important trend today in the world of custom application development (AD). The successful delivery of a custom application by a vendor has tended to be contractually defined as the satisfactory completion of functional user acceptance tests. As a result, applications coded to poor design standards with too much code complexity, even though able to pass UAT, can be (and often are) later found to be very expensive to support and maintain, and often too costly and slow to modify as business requirements evolve.
However, the way companies calculate the costs of outsourced software development is now changing. While 10-15 years ago the costs of a solution were understood to consist mainly of the cost of actually coding the desired functionality, plus other expenses such as software licenses, hardware, and training, today buyers of outsourced AD services increasingly take lifetime operational expenses into account; maintenance ends up being responsible for a large share of the total cost of ownership. One reason for this change is the rapid pace of evolution in the IT world (new platforms, frameworks, and tools), which requires the software to be continually adapted to remain valuable to the business; the growing demand for cloud accessibility and cross-platform support is a vivid example. It is the industry norm that system development only stops at the end of a product’s life.
To anticipate the future costs of supporting and enhancing a system after the initial release, companies are now increasingly using metrics known as nonfunctional requirements. In cases where a third-party application development firm is hired to deliver a system, metrics like test coverage, code complexity, component coupling, response time, HTML page size, and the maximum number of simultaneous users supported may even be written into contracts and become binding on the vendor. On the other hand, if these parameters of the code and of system performance are carefully considered by the development team before implementation starts, they should lead to proper architectural decisions and the selection of the right patterns, which will likely remain in place until the end of the product’s life.
Consider the following questions as pointers in making the right design decisions up front:
- How easy is it to change the application’s business logic? The domain model will most likely continue to evolve throughout the life of the system. Thus, when designing it, dependencies between objects should be reduced as much as possible (for example, with the help of approaches like dependency injection), and similar features should be grouped within the same components to make testing easier.
- Are we locking ourselves into a specific data source? There is always a chance that the DBMS originally chosen will not remain in place for the life of the system; for example, MS SQL may be replaced with MySQL, or a requirement might come up to fetch data from external web services or XML files. If your domain objects know how to read and write data from a specific database, adding another data source will require a developer to update all of them. To avoid that, a data access layer should separate the business model from data sources. Data mappers (like EF or NHibernate) or adapters may be used to populate entities.
- How will data be presented to end users? Apart from human beings, end users can also include other tools that consume your system’s web services, so we are talking about data presentation in general. Since the trend today clearly favors web- and mobile-oriented applications that support various browsers and platforms, presentation logic should be isolated from the business model. It may make sense to use an intermediate service layer defining the system’s common operations, to be consumed by particular interface implementations. The MVC pattern is an example of a commonly used approach.
- How complicated is it to create tests for new or existing components? Complexity usually results from spreading business logic across all tiers of the application, as opposed to concentrating it in certain components of the system. If the application implements complex workflows, or if there are a lot of dependencies on data that is difficult to mock (and thus write unit tests for), then specialized integration testing tools may come in handy (BDD tools like Cucumber or SpecFlow could be a good fit).
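To make the first two questions concrete, here is a minimal Java sketch of the idea. All class and method names (`Order`, `OrderRepository`, `OrderTotalService`) are illustrative, not taken from any real system: the business logic receives its data source through the constructor (dependency injection) and depends only on an interface, so swapping MS SQL for MySQL, a web service, or an in-memory fake for unit tests touches nothing in the service itself.

```java
import java.util.List;
import java.util.Map;

// Hypothetical domain entity.
record Order(String customerId, double amount) {}

// The data access layer hides behind an interface, so business logic
// never knows whether orders come from a database, a web service, or a file.
interface OrderRepository {
    List<Order> findByCustomer(String customerId);
}

// Business logic gets its dependency injected via the constructor,
// which also makes it trivial to test with a fake repository.
class OrderTotalService {
    private final OrderRepository repository;

    OrderTotalService(OrderRepository repository) {
        this.repository = repository;
    }

    double totalFor(String customerId) {
        return repository.findByCustomer(customerId).stream()
                .mapToDouble(Order::amount)
                .sum();
    }
}

// In-memory implementation; in production a data mapper
// (Entity Framework, NHibernate, etc.) would fill this role.
class InMemoryOrderRepository implements OrderRepository {
    private final Map<String, List<Order>> data;

    InMemoryOrderRepository(Map<String, List<Order>> data) {
        this.data = data;
    }

    public List<Order> findByCustomer(String customerId) {
        return data.getOrDefault(customerId, List.of());
    }
}

public class Demo {
    public static void main(String[] args) {
        OrderRepository repo = new InMemoryOrderRepository(Map.of(
                "acme", List.of(new Order("acme", 100.0), new Order("acme", 50.0))));
        OrderTotalService service = new OrderTotalService(repo);
        System.out.println(service.totalFor("acme"));
    }
}
```

The same constructor would later accept a SQL-backed or web-service-backed repository without a single change to `OrderTotalService`, which is exactly the kind of robustness these questions are meant to surface.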
In general, there is no doubt in my mind that the Agile principle of YAGNI (“You Aren’t Gonna Need It”) is valid. Keeping system design as simple as possible is a good idea, as is deferring design decisions until they actually need to be made. So, for example, XML parsing logic for fetching data should not be implemented until that requirement actually appears in the sprint backlog. And when it is time to code it, the right architecture will let you simply add a connector for the new data source, without modifying the application’s behavior.
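A sketch of what adding that deferred connector might look like, under the same assumptions as before (names like `PriceSource` and `XmlPriceSource` are invented for illustration, and a real implementation would use a proper XML parser rather than a regex; the hard-coded feed just keeps the example self-contained). The point is that the new class plugs into the existing interface while the business logic stays untouched:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// The abstraction that already existed before the XML story was scheduled.
interface PriceSource {
    double priceOf(String sku);
}

// The new connector, written only once the requirement reaches the
// sprint backlog. It parses a tiny XML feed (regex used purely to keep
// this sketch dependency-free).
class XmlPriceSource implements PriceSource {
    private final Map<String, Double> prices = new LinkedHashMap<>();

    XmlPriceSource(String xml) {
        Matcher m = Pattern
                .compile("<item sku=\"([^\"]+)\" price=\"([^\"]+)\"/>")
                .matcher(xml);
        while (m.find()) {
            prices.put(m.group(1), Double.parseDouble(m.group(2)));
        }
    }

    public double priceOf(String sku) {
        return prices.getOrDefault(sku, 0.0);
    }
}

// Existing business logic: not modified when the XML connector arrives.
class QuoteService {
    private final PriceSource source;

    QuoteService(PriceSource source) { this.source = source; }

    double quote(String sku, int quantity) {
        return source.priceOf(sku) * quantity;
    }
}

public class XmlConnectorDemo {
    public static void main(String[] args) {
        String feed = "<prices><item sku=\"A1\" price=\"9.50\"/></prices>";
        QuoteService service = new QuoteService(new XmlPriceSource(feed));
        System.out.println(service.quote("A1", 4));
    }
}
```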
So yes, you can (and should) do some architecture before starting development. And no, you should not attempt to predict all possible use cases. Rather, the objective of the design effort should be to build enough robustness into the system to reduce the complexity and costs of future changes required to keep the software valuable to customers with constantly evolving needs. In other words, have enough structure to be strong – and yet remain agile. Like a true yokozuna. (And you win extra points if you didn’t have to Google “yokozuna.”)