Scaling BDD To Large Solutions
You may be wondering why and how I arrived at most of my recommendations. The short story is that I needed an approach that scaled up and worked for large, complex, enterprise-scale solutions and delivery teams.
Many teams struggle to apply BDD effectively, mostly because of a lack of knowledge and understanding. It is easy to find examples of behaviours, especially on the reference documentation sites of the various BDD libraries and tools. Those examples may be fine for a tool's documentation, demonstrations, presentations or small applications; however, the style of many of them does not necessarily scale up and work well in more complicated software solutions.
An Enterprise-Scale Example
Have you ever worked in an enterprise solution delivery team where there are multiple systems, applications and components involved in a single User Story?
If you are not familiar with User Stories or working in an Agile fashion, then just think about working to deliver a thin, vertical, end-to-end slice of functionality that can be deployed into production to produce actual business value and thus appease the stakeholders.
For example, say that there is the following User Story.
As an online customer,
I want to see a summary of all my incomplete orders,
So that I can effectively track my orders.
In true Agile style, this User Story would also be accompanied by a set of associated Acceptance Criteria, but they are unimportant for the moment.
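To make the idea of an automatable Behaviour Specification concrete, here is a sketch of how one such criterion might look when expressed in the classic Given/When/Then style as a plain unit-level test. The function, data shape and statuses below are hypothetical illustrations, not part of the story itself.

```python
# A hypothetical behaviour specification for one acceptance criterion:
# only the customer's incomplete orders appear in the summary.

def incomplete_orders(orders):
    """Return the orders that are not yet complete (hypothetical domain logic)."""
    return [order for order in orders if order["status"] != "COMPLETE"]

def test_summary_shows_only_incomplete_orders():
    # Given an online customer with a mix of complete and incomplete orders
    orders = [
        {"id": 1, "status": "COMPLETE"},
        {"id": 2, "status": "SHIPPED"},
        {"id": 3, "status": "PENDING"},
    ]
    # When the summary of incomplete orders is produced
    summary = incomplete_orders(orders)
    # Then only the incomplete orders are included
    assert [order["id"] for order in summary] == [2, 3]

test_summary_shows_only_incomplete_orders()
```

The Given/When/Then comments mirror the structure that BDD tools make explicit; the point is simply that each criterion becomes a small, readable, executable specification.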
This story makes the business's customers happy, resulting in higher retention rates and more business transactions. In turn, that results in measurable value to the business.
Hopefully the User Story makes sense to you; it sounds simple, right? (Admittedly, it is a little contrived!) Unfortunately, software development in an enterprise is rarely as simple as one would like!
In this example, the technical solution that is required by an unspecified enterprise just happens to involve many different layers and technologies:
- The RESTful API Controller layer of the Web Application;
- A Service-Oriented Architecture (SOA) Business Service that integrates with a legacy Order Management System; and
- A Web Service that is exposed by the legacy Order Management System.
This is not an uncommon technical landscape!
From a behaviour perspective, there are many potential behaviours in the above example:
- The visual behaviours of the User Interface — such as hiding and showing visual elements and displaying pretty animations;
- The logic behaviours of the User Interface — such as performing screen navigation, calling the API Controller layer and handling errors;
- The behaviours of the RESTful API Controller that calls the Business Service;
- The behaviours of the Business Service that invokes the legacy Order Management System's Web Service; and
- The behaviours of the legacy Order Management System's Web Service.
Before we go any further, let me ask you a quick question. When you thought of BDD in the context of this "simple" example of showing a customer their incomplete orders, which set of behaviours from the list above came to mind first?
In my experience, many teams tend to focus just on the behaviours of the User Interface — after all, that is the focus of the User Story. When that happens, many of the benefits of BDD are not realised, and one or both of the following issues typically occur.
The first issue is that the team may think it necessary to write Behaviour Specifications only for the User Interface component. The behaviours in that case are generally written in the style of End-to-End tests, which typically focus on exercising multiple variations, combinations and boundaries of data. Unfortunately, that approach creates an inverted Testing Pyramid, with far too many expensive End-to-End style tests compared with Unit Tests.
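One way to keep the Testing Pyramid the right way up is to push those data variations, combinations and boundaries down to cheap unit-level checks against the domain logic, reserving End-to-End tests for a few happy paths. The sketch below assumes a hypothetical is_incomplete() rule and set of order statuses purely for illustration.

```python
# Table-driven data variations exercised at the base of the Testing Pyramid,
# rather than as slow End-to-End tests driven through the User Interface.
# The statuses and the is_incomplete() rule are assumptions for illustration.

COMPLETE_STATUSES = {"COMPLETE", "CANCELLED"}

def is_incomplete(status):
    """An order is incomplete unless it has reached a terminal status."""
    return status not in COMPLETE_STATUSES

# Each variation is a millisecond-cheap unit check.
CASES = [
    ("PENDING", True),
    ("SHIPPED", True),
    ("COMPLETE", False),
    ("CANCELLED", False),
]

for status, expected in CASES:
    assert is_incomplete(status) == expected, status
```

Adding another data combination here costs one line in the table, whereas the equivalent End-to-End test would cost a full browser-driven round trip through every layer of the solution.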
The second issue is a lack of quality assurance and test coverage across the entire solution landscape. If, for example, the team is only writing Behaviour Specifications for the User Interface component, a few quality assurance and maintainability questions arise:
- How are the other systems and components in the solution being tested?
- What tests are being written for those other components?
- Is the Testing Pyramid being adhered to, and is testing being performed at the most appropriate level?
- Are all of those test cases valuable, valid and correct?
- Who knows about and understands the intended purpose of the other tests?
- If they exist, are these tests duplicating testing effort elsewhere?
- Who is implementing and maintaining those other tests?
- How are the other tests being managed, communicated, executed and reported on?
Perhaps some developers are being professional and are implementing automated tests for the code they have written. Perhaps their tests are valuable, or perhaps a developer has gone off on a tangent and is wasting time and money (and nobody knows!). Part of the problem is that the team as a whole does not have enough understanding and visibility over the tests (especially Unit Tests) that developers write, and that can lead to wasted and duplicated effort, as well as negatively affecting team cohesion.
To best realise the benefits of BDD, all the behaviours applicable to a User Story should be specified — not just, for instance, the User Interface behaviours.
About Story Splitting
One way to reduce complexity and help to manage the solution delivery of a project is to ensure that User Stories are sized optimally for the team and the sprint. In some cases, User Stories can be vertically split into smaller, narrower stories. We need to remember though that a User Story is typically a thin, vertical slice of functionality with its own separate and measurable business value. This particular User Story unfortunately cannot be squeezed thinner and there is no satisfactory way to split it into any smaller vertical slices.
One could, of course, split it horizontally into tasks, application layers or components, but that is not a recommended approach because each split item would not, by itself, deliver measurable business value.
Manipulating the size of the User Story may change the current amount of work in progress, but in any case there will still be many behaviours involved in the solution, and these will still need to be managed effectively.
About Managing Behaviours
The idea of splitting horizontally is not appropriate for User Stories, but it is an appropriate approach for logically grouping and managing Behaviour Specifications.
Decomposing a large solution or problem domain into a number of more manageable chunks is an age-old approach that scales up and works. The approach that I recommend is to decompose a solution into a number of significant software components, and then work on the Behaviour Specifications of each component, one-by-one.
To prevent confusion, each component should be considered individually, and the behaviours of each component should remain separated from those of other components. The alternative, for example, would be to mix the Behaviour Specifications of the User Interface with those of the Business Service, and that would just create confusion.
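A minimal sketch of what that separation might look like in practice: the API Controller is specified on its own with the Business Service stubbed out, and the Business Service is specified on its own with the legacy Web Service stubbed out. All class and field names here are hypothetical, invented purely to illustrate the technique.

```python
# Each component gets its own Behaviour Specifications, with its downstream
# collaborator replaced by a test double. All names are hypothetical.

class OrderController:
    """RESTful API Controller: delegates to the Business Service."""
    def __init__(self, business_service):
        self.business_service = business_service

    def get_incomplete_orders(self, customer_id):
        orders = self.business_service.incomplete_orders_for(customer_id)
        return {"status": 200, "body": orders}

class OrderBusinessService:
    """Business Service: adapts the legacy Web Service's response shape."""
    def __init__(self, legacy_web_service):
        self.legacy_web_service = legacy_web_service

    def incomplete_orders_for(self, customer_id):
        raw = self.legacy_web_service.fetch_orders(customer_id)
        return [o["orderId"] for o in raw if not o["done"]]

# Behaviour Specification for the Controller, with the service stubbed out.
class StubService:
    def incomplete_orders_for(self, customer_id):
        return [42]

assert OrderController(StubService()).get_incomplete_orders("c1") == {
    "status": 200, "body": [42]}

# Behaviour Specification for the Business Service, legacy system stubbed out.
class StubLegacy:
    def fetch_orders(self, customer_id):
        return [{"orderId": 7, "done": False}, {"orderId": 8, "done": True}]

assert OrderBusinessService(StubLegacy()).incomplete_orders_for("c1") == [7]
```

Because each specification exercises exactly one component, a failure points directly at the component responsible, and each component's suite can be run and reported on independently.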
This suggested approach scales, is manageable, reduces confusion and has repeatedly been shown to work! In addition, each software component can be individually tested and reported on, and the Testing Pyramid can be maintained.