(c) 2024 To Be Agile
I see continuous integration as a “gateway practice” because it will lead to other Agile practices.
Continuous integration and automating the process of validating release candidates represent a large part of the Agile vision realized. We spend nearly half of our time and effort integrating and testing code. Many companies do this manually, which makes integration and last-minute changes exorbitantly expensive.
Automating the verification of release candidates offers a way to not only drop the cost of last-minute changes but also significantly boost our confidence in the software we build. A fast, reliable continuous integration server is at the heart of every successful Agile software development implementation that I know of.
But automation and continuous integration depend upon having the right tests. You can't just write any tests to make continuous integration work, and the kind of tests that quality assurance writes won't give you a good automated regression suite; that comes from doing test-first development.
Quality assurance is still an absolutely critical component of development, and it sits on top of the suite of unit tests that developers build when writing their systems using test-first techniques. When we do QA, we think about what might go wrong. When we do test-first development, we have a completely different mindset: we focus on creating tests that elicit the behavior we want to build. This is a very different way of thinking about tests and testing than the QA mindset.
Testing seems like an easy and obvious thing to do, but what we do when we're doing test-first development is not testing. We're defining behaviors using tests, and that is a very different way of thinking about them.
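To make the distinction concrete, here is a minimal sketch of defining a behavior with a test. The `ShoppingCart` class and its methods are hypothetical names chosen for illustration; the point is that the test is written first, stating the behavior we intend to build, and the code exists only to satisfy it.

```python
class ShoppingCart:
    """Minimal implementation written just to make the test below pass."""

    def __init__(self):
        self._prices = []

    def add(self, price):
        self._prices.append(price)

    def total(self):
        return sum(self._prices)


def test_total_is_sum_of_item_prices():
    # This test defines the behavior we want; it is a specification,
    # not a probe for defects after the fact.
    cart = ShoppingCart()
    cart.add(3)
    cart.add(4)
    assert cart.total() == 7


test_total_is_sum_of_item_prices()
```

Read this way, the test is a statement of intent that happens to be executable.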
So what do I mean by the "right tests"? First of all, tests must be unique. If you were writing a specification, you wouldn't repeat the same sentence over and over in the document, so why repeat the same tests over and over in your code? If our tests are to have value and let us refactor our code safely, they must be unique, so that when something in the code changes we don't have to change multiple tests.
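A small sketch of what duplication costs us, using a hypothetical `apply_discount` function. Both tests pin the exact same behavior, so a single change to the discount rule breaks both and forces us to edit both.

```python
def apply_discount(price, rate):
    """Hypothetical pricing rule used only to illustrate duplicated tests."""
    return price * (1 - rate)


def test_discount_applied():
    assert apply_discount(100, 0.1) == 90


def test_discount_applied_elsewhere():
    # Redundant: same inputs, same assertion as the test above.
    # If the discount rule changes, two tests break for one reason.
    assert apply_discount(100, 0.1) == 90


test_discount_applied()
test_discount_applied_elsewhere()
```

Keeping one test per behavior means a behavior change touches one test, just as a specification change touches one sentence.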
Unit tests should fail for a single reason. When I see a test fail in my unit test runner, I want it to tell me exactly where the failure happened. If my test covers too many things, then when it fails I won't know why it failed without further investigation. When something breaks, I want only one or a few tests to fail rather than a whole slew of them, because that helps me identify the problem more quickly.
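One way to sketch this idea, with a hypothetical `parse_point` function: a test that bundles several behaviors can fail for any of them, while focused tests each fail for exactly one.

```python
def parse_point(text):
    """Hypothetical parser: turns "3,4" into the tuple (3, 4)."""
    x, y = text.split(",")
    return int(x), int(y)


def test_parse_point_everything():
    # Checks too much: a failure here could mean positive parsing broke,
    # or negative parsing broke -- we can't tell without digging in.
    assert parse_point("3,4") == (3, 4)
    assert parse_point("-1,0") == (-1, 0)


def test_parse_positive_coordinates():
    assert parse_point("3,4") == (3, 4)


def test_parse_negative_coordinate():
    # Focused: if only this fails, we know negative numbers are the problem.
    assert parse_point("-1,0") == (-1, 0)


test_parse_positive_coordinates()
test_parse_negative_coordinate()
```

The focused versions cost a few more lines but pay for themselves the first time a failure points straight at the broken behavior.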
Making only one test fail for a single issue can be challenging, because sometimes issues are interdependent. For example, if I can't get access to data I need in a database, my data-access query fails, but so do all the other tests that depend on that data. Several tests are failing, yet there is only one point of failure: the first test, which can't retrieve the data. The other failures don't reveal additional problems in the system; they stem from a true dependency on the first failure. So when we analyze these failures, we look at the first one and recognize that the rest follow from natural dependencies on it.
Most importantly, unit tests are implementation-independent. By that I mean we can refactor our code, and as long as we don't change its behavior, none of our unit tests should break. I'll talk more about implementation independence in my next post.
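A brief sketch of what implementation independence looks like in practice, using a hypothetical `unique_in_order` function: the test asserts only on observable behavior (inputs and return value), never on internals, so the internals are free to change.

```python
def unique_in_order(items):
    """Remove duplicates while preserving first-seen order."""
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result


def test_duplicates_removed_order_preserved():
    # Asserts on behavior only: no mention of the set, the loop,
    # or any other implementation detail.
    assert unique_in_order([3, 1, 3, 2, 1]) == [3, 1, 2]


test_duplicates_removed_order_preserved()


# A refactored implementation with the same behavior passes the same
# assertion unchanged -- the test never knew about the old internals.
def unique_in_order_refactored(items):
    return list(dict.fromkeys(items))


assert unique_in_order_refactored([3, 1, 3, 2, 1]) == [3, 1, 2]
```

Because the test couples only to behavior, the refactoring from an explicit loop to `dict.fromkeys` required no test changes at all.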