My previous findings on testing public versus private methods helped me see that the mindset of a developer performing unit "testing" differs from that of a developer "specifying program behaviour through the mechanism of automated test methods". The difference is subtle but important.
In Behaviour-Driven Development (and the related design, analysis and programming practices), the resulting program code is simply an implementation of previously specified behaviours. Those behaviours are initially defined at the business level through requirements gathering and specification, and can be expressed in the business's language as a set of scenarios and stories. These requirements can then be analysed further and broken down into logical and physical program specifications.
Regardless of whether a test-first approach to development is taken, by the end of the construction task there should be a set of test cases (automated and/or manual) that were successfully executed and that cover all of the logic in the program specification. In other words, by the end of the construction task, the developer has verified that the program behaves as specified.
Typically a program's specification is written in a document somewhere, divorced from the actual code, so great discipline and effort are required to keep the specification and code in sync over time. Depending on the company and the project's financial constraints, this may not be feasible.
However, what if we could remove this boundary between the document and the code?
Automated test cases serve a number of useful purposes, including:
- Behaviour verification;
- Acting as a "safety net" for refactoring code;
- Regression testing;
- Professional evidence that one's own code actually does what one has said it does; and
- Acting as sample code for other developers so they can see how to use one's code.
What if the automated test assets were the actual living program specification that is verified through the mechanism of test methods?
This concept has been recognised before, resulting in libraries such as JBehave, RSpec, NBehave and NSpec. While I honour the effort and ground-breaking concepts behind those libraries, I do not particularly like how they are used.
In fact, all I want right now is to be able to express my specifications in my MSTest classes without the need for a different test runner or the need for another testing framework. In addition, I want other team members to easily grasp the concept and quickly up-skill without too much disruption.
While a DSL might be better suited to this purpose, I write my automated test assets in C#. Working within that constraint, I want to express the program specification in test code. Following the sensible BDD guideline that "test method names should be sentences", I devised the following structure for expressing program specifications as automated test methods.
```csharp
[TestClass]
public class [MyPublicMethodUnderSpecification]Spec
{
    [TestClass]
    public class Scenario[WithDescriptiveName]
    {
        [TestMethod]
        public void When[MoreScenarioDetails]Should[Blah]()
        {
            // Test code here
        }

        [TestMethod]
        public void When[AnotherSubScenario]Should[Blah]()
        {
            // Test code here
        }
    }
}
```
This structure names the public method that is under specification (the parent class name) and a general scenario describing the context or given setup required (the nested MSTest class name), followed by the test method names, which further narrow the scenario and describe the expected behaviour.
The test names look neat in the Test View in Visual Studio (when the Full Classname column is visible), and are readable just like their corresponding stories.
The full class name in this instance would look like:
[MyAssembly].BehaviourTests.[MySystemUnderSpecification]Spec.[MyPublicMethodUnderSpecification]Spec+Scenario[WithDescriptiveName]
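To make the template concrete, here is a minimal sketch of what a filled-in spec class might look like. The system under specification (a stack), the namespace, and all of the scenario and method names below are hypothetical examples of my own, not taken from any real project:

```csharp
using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace MyAssembly.BehaviourTests.StackSpec
{
    // Parent class: names the public method under specification.
    [TestClass]
    public class PushSpec
    {
        // Nested class: names the scenario (the given/context part of the story).
        [TestClass]
        public class ScenarioStartingWithAnEmptyStack
        {
            [TestMethod]
            public void WhenOneItemIsPushedShouldHaveACountOfOne()
            {
                var stack = new Stack<int>();
                stack.Push(42);
                Assert.AreEqual(1, stack.Count);
            }

            [TestMethod]
            public void WhenOneItemIsPushedShouldExposeThatItemAtTheTop()
            {
                var stack = new Stack<int>();
                stack.Push(42);
                Assert.AreEqual(42, stack.Peek());
            }
        }
    }
}
```

The resulting full class name, MyAssembly.BehaviourTests.StackSpec.PushSpec+ScenarioStartingWithAnEmptyStack, combined with each test method name, reads much like the corresponding story sentence.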
With automated spec/test classes structured in this fashion, developers can move from performing unit "testing" to "specifying program behaviour through the mechanism of automated test methods". That shift should keep developers more focused on their tasks and lead to higher-quality code bases.
Oh, and I haven't even mentioned how much easier it would be for a developer new to a project to learn and understand the code base by looking at the test view and reading all the scenarios that have been specified and coded...
If you wanted an even better way to define your program specifications and test assets, then I highly recommend SpecFlow.