Introduction
I read and retweeted the post by Chris Fox titled “Test-Driven Development is Fundamentally Wrong” on Hacker Noon. In the past few months, I have been reading multiple articles a day about TDD. It seems to me that most people (i.e., software engineers and software managers) have heard about the benefits of using TDD, but it all collapses when it comes to understanding and applying the proper concepts or the fundamental nature of the practice.
A couple of weeks ago I was talking with a software engineer friend of mine. I am not sure if he, his team or his company at large practices TDD. He read my Re-Space posts and sent me a message (few readers leave comments on my blog; most prefer to send a message via e-mail) mentioning that I was not using TDD because my code did not include unit tests. I replied that he was correct about the missing unit tests, but my point was that TDD does not require unit testing.
To me, unit testing in TDD, stand-ups and many other rituals practiced by the different incarnations of Agile make no sense and, more important, were not part of the essence of TDD or Agile. Someone attempting to sell a group or a company on practicing Agile came up with them and they stuck. I am not going to get into Agile at this time. I will write a different post and hopefully a white paper regarding Agile in the near future.
My Background
After school, I started working at a Fortune 100 company. Initially I worked in the development of plain paper copiers and fax machines. Then I moved on to develop commercial systems that capture, store, query and retrieve objects for different vertical markets (e.g., graphics, insurance and medical, among others). For a few years I developed software for embedded systems using assembly language and C. After that I moved into developing commercial software in Pascal, C++, C# and Java on different operating systems (e.g., Apple, Linux, UNIX and Windows).
No matter what the task at hand was, I always tried to start with a first pass of the Architecture, Requirements and Design documents, no matter how short they were. I strongly believe in multiple and complete iterations when it comes to software development. Some time ago I created a software development methodology which I named CDP (short for Cyclic Development Process). CDP was started in the early 1980s. I wrote several papers on it and worked on a book, but due to different life challenges and choices I kept postponing the book.
For many reasons (e.g., polishing and changing requirements, business changes and technology advances, just to name a few) which fall outside the scope of this post, software needs to be developed in an iterative fashion.
Hardware and software delivery platforms change, programming languages improve and new ones keep arriving, new development platforms and toolkits seem to appear every month, and libraries keep growing and becoming more powerful. That said, software being released today seems, in general, to be getting more and more complex and tends to have many issues, some of them serious.
I believe that most of these issues could be addressed if software developers used the proper methodology based on the task at hand and spent more time thinking about it. I am not referring to “analysis paralysis”. Short, simple and useful documentation kept up to date tends to help. I do not believe in spending weeks learning a code base before being able to start contributing to it. Always aim to develop the MVP. Make sure it works as far as you can tell, and then pass it to a different developer to create tests and verify the software.
TDD in Action
This is how I develop software using TDD. I document the requirement(s) in a short document which in most cases contains one diagram (today I use Microsoft Visio; when I started I used MacDraw). Today my documents are written using Microsoft Word (years ago I used MacWrite).
As an example, let’s say that the requirement is to implement an API to retrieve the number of objects locally stored in an instance of the storage server (an existing product). As a side note, the storage server is able to go to peers and request data. In our case let’s assume the requirements only call for retrieving information about local objects.
Once I have thought about the requirements to the point that I can articulate them, I start with the design. For that I add a page or two to the SDD (short for Software Design Description) document. If needed, I refer to an existing diagram in the document or add a new one. I try not to reinvent the wheel when possible.
OK, now I start with some code. The server in question supports the same API in two different versions, one via sockets and the other via HTTPS. Let’s go with the socket implementation.
I would create a command line interface (CLI) or console application which accepts arguments and returns the requested information. In this example, we need the number of objects currently stored in the instance being queried. After a few minutes we could have a shell that does nothing. At this point I would write some comments in the CLI covering parameter validation (e.g., count and format). We could then leave a comment for the actual call, followed by a check on the returned value.
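A minimal sketch of where such a shell ends up, written here in C#; the program name, arguments and status codes are illustrative and not the actual product interface:

using System;

internal static class Program
{
    // Hypothetical status codes; the actual product defines its own.
    private const int StatusOk = 0;
    private const int StatusBadArguments = 1;
    private const int StatusServerError = 2;

    private static int Main(string[] args)
    {
        // Validate parameter count and format before doing any work.
        if (args.Length != 2)
        {
            Console.Error.WriteLine("usage: objcount <host> <port>");
            return StatusBadArguments;
        }
        if (!int.TryParse(args[1], out int port) || port < 1 || port > 65535)
        {
            Console.Error.WriteLine("error: port must be an integer in 1..65535");
            return StatusBadArguments;
        }

        // Placeholder for the actual call to the storage server at args[0]:port.
        long count = -1;   // sentinel: no valid value yet

        // Check the returned value before trusting it.
        if (count < 0)
        {
            Console.Error.WriteLine("error: server did not return a valid count");
            return StatusServerError;
        }

        Console.WriteLine($"local object count: {count}");
        return StatusOk;
    }
}

Note that on the first cycle the shell fails on purpose: the placeholder never produces a valid count, so the error path is exercised before any real work is done.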
I would then parse the returned value to verify that it makes sense. If it does not, I would return an error (e.g., a status code or an exception). The function that makes the call could return a status code. I would set it to an error, then a warning and then a valid value.
Now it is time to make the socket call. I would use the socket library that encapsulates the calls we use. I would pass correct, incorrect and missing parameters to make sure the code behaves as expected. Note that at this time the storage server does not know about the command we are issuing.
I would then implement the call which passes the request to the server. Of course the server would not know what to do with this new request. I would expect it to return the proper response indicating that the command I issued is not known. The server should continue to work (it should not crash or stop receiving and processing other valid requests). I understand that the server, at this moment, is not my immediate concern. If I observe unexpected behavior from it, I must file a bug report.
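A sketch of that call follows, using plain .NET sockets in place of our in-house socket library; the command string, port and timeout are assumptions for illustration:

using System;
using System.IO;
using System.Net.Sockets;
using System.Text;

internal static class ObjectCountClient
{
    // Sends the (still unknown to the server) request and returns the raw
    // response, or null on timeout or connection failure.
    public static string SendRequest(string host, int port, int timeoutMs = 5000)
    {
        try
        {
            using var client = new TcpClient();
            client.Connect(host, port);
            client.ReceiveTimeout = timeoutMs;   // do not hang if the server is down
            client.SendTimeout = timeoutMs;

            NetworkStream stream = client.GetStream();
            byte[] request = Encoding.ASCII.GetBytes("GET_LOCAL_OBJECT_COUNT\n");
            stream.Write(request, 0, request.Length);

            byte[] buffer = new byte[256];
            int read = stream.Read(buffer, 0, buffer.Length);
            return read > 0 ? Encoding.ASCII.GetString(buffer, 0, read) : null;
        }
        catch (SocketException)
        {
            return null;   // connection refused or host unreachable
        }
        catch (IOException)
        {
            return null;   // read timed out or the connection dropped
        }
    }
}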
I would then move to the server software and start modifying the code from the point where the request type is checked. I would make sure that it starts processing and failing as I go.
In the next couple of hours I should have progressed to the point of having a shell function/method that responds to the request. At that time I would return incorrect values and an incorrect status. This way I continue to test the path from the CLI all the way up to the server. This is when iterations are important. They make you think about what can happen and how issues can be handled. Note that you should follow the architecture for the product. If the architecture calls for making all checks in the CLI, or in the server, or in both, you must adhere to it.
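A sketch of the server-side dispatch at that stage might look like this; the command names and response strings are hypothetical:

using System;

// Hypothetical server-side dispatcher; commands and responses are illustrative.
internal static class RequestDispatcher
{
    public static string ProcessRequest(string request)
    {
        switch (request.Trim())
        {
            case "PING":
                return "OK";

            case "GET_LOCAL_OBJECT_COUNT":
                // Shell handler: deliberately return an invalid value and status
                // so the CLI-to-server plumbing can be exercised end to end
                // before the real implementation exists.
                return "ERROR -1";

            default:
                // Unknown commands must not crash the server; report and move on.
                return "ERROR unknown command";
        }
    }
}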
I would then move to the implementation of, or updates to, the method(s) in the storage server. I could check the permanent storage using information from both the database and the file systems, from the database only, or from the file systems only. Given that I have been working with file systems and databases, I tend to understand the limitations of each approach. I could use the database, query the proper table(s) and return the number of local objects. That would require a query which, after looking in the documentation, does not seem to already be available.
The second approach could be to use the file system. I can get the amount of space used by the directories holding the local copies. Whoops, the requirements do not ask for the space used but for the count of files. I should design and implement this feature keeping in mind that future calls might want to get the used disk capacity. In actuality, we have a library that I developed to get the count of files starting from any directory/folder in a file system. Finally, I could use a combination of database and file system support to collect and perhaps verify the required information before packaging and sending it back to the requester (the CLI).
I pause to think and decide that the most efficient and fastest approach would be to use a database query (think about the MVP). I can write a SQL query like:
SELECT COUNT(*) FROM [sdmsql].[dbo].[BITFILE_TBL] WHERE state IN ('I', 'R'); -- returns 365
Now I have to write a function/method for the DBHigh.dll library and one for the DBLow.dll library. The DBHigh library contains high level database methods. The DBLow DLL contains matching methods for the high-level versions, but they are optimized for specific database engines (e.g., SQL Server, MariaDB or MySQL).
I would start by developing the DBHigh entry using a similar approach as we did for the CLI: build a shell, check arguments and return errors; then implement the call in DBLow, which will initially fail. The process repeats in DBLow until it returns what we believe are proper values.
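Assuming the two libraries are C# assemblies, that SQL Server is the engine in use and that the Microsoft.Data.SqlClient package is available, the pair of entries might be sketched as follows; the method names and argument checks are illustrative, not the actual DBHigh/DBLow interfaces:

using System;
using Microsoft.Data.SqlClient;

// Hypothetical DBHigh entry: validates arguments, delegates to DBLow,
// and sanity-checks the value before handing it to the caller.
public static class DBHigh
{
    public static long GetLocalObjectCount(string connectionString)
    {
        if (string.IsNullOrWhiteSpace(connectionString))
            throw new ArgumentException("connection string is required");

        long count = DBLow.GetLocalObjectCount(connectionString);
        if (count < 0)
            throw new InvalidOperationException("database returned an invalid count");
        return count;
    }
}

// Hypothetical DBLow entry, optimized in this case for SQL Server.
public static class DBLow
{
    public static long GetLocalObjectCount(string connectionString)
    {
        const string sql =
            "SELECT COUNT(*) FROM [sdmsql].[dbo].[BITFILE_TBL] WHERE state IN ('I', 'R')";

        using var connection = new SqlConnection(connectionString);
        using var command = new SqlCommand(sql, connection);
        connection.Open();
        return Convert.ToInt64(command.ExecuteScalar());
    }
}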
It is my experience that designing, implementing and verifying the operation of the database calls would take me to the end of my work day. When done I will head home or, if things move faster than expected, will continue on this or a different task for the remainder of the work day.
At this point we seem to have a CLI and the mechanism in the server to get proper values back. We need to feed some objects to the server, making sure they match the count in the database. As data is migrated to other instances of the archive, the stored objects may or may not be deleted from the server instance we are querying.
While I am feeding objects to the archive, I would invoke the CLI and get the count. I would also have SQL Query Analyzer running to compare results. While running tests with the CLI, I would figure out if the code is clean, lean and properly documented. I could spend from two to four hours checking and creating problems (e.g., killing the database server, deleting all objects, etc.) to make sure that both server and client do not lose data and return descriptive errors or the correct information.
At some point, while cycling between testing and updating the code, everything will start to work fine. I would update documentation as needed. Then I would move to the HTTPS implementation.
When that is completed I would pass the documentation and code to a software engineer to design and implement some unit tests. I expect that the engineer will create unit tests for the mechanism that receives and processes the request, for the database high library and for the database low library. If issues are encountered I would address them and let the other engineer repeat/enhance the unit tests until we are both satisfied.
In my experience, unit tests will in most cases pass because of the way the code was developed. Now and then something might have escaped my mind. This is why you always need a second pair of eyes. If you are developing software using Pair Programming, then a third person should implement the unit tests. It is a bad idea to test your own code past the design and implementation phase.
What TDD is NOT!
TDD is not writing unit tests for code that you have not written, hoping that you will write it. In most cases testing is quite simple, and the actual production code should check arguments and log findings to permanent media. When developing production code we always include a mechanism to turn on debug statements in any module. This allows us to debug unforeseen issues.
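The actual mechanism is not shown in this post, but a minimal per-module debug switch could look like this sketch:

using System;
using System.Collections.Generic;

// Illustrative per-module debug switch; the product's real mechanism differs.
public static class DebugLog
{
    private static readonly HashSet<string> Enabled =
        new HashSet<string>(StringComparer.OrdinalIgnoreCase);

    public static void Enable(string module) => Enabled.Add(module);
    public static void Disable(string module) => Enabled.Remove(module);

    public static void Write(string module, string message)
    {
        // Messages are emitted only for modules that were explicitly enabled.
        if (Enabled.Contains(module))
            Console.Error.WriteLine($"[{DateTime.Now:HH:mm:ss}] {module}: {message}");
    }
}

// Usage: DebugLog.Enable("DBLow"); DebugLog.Write("DBLow", "query took 12 ms");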
The idea behind TDD is to think about what can go wrong during development and to make sure that the code is resilient to such issues. To get more time for thinking and implementing, develop your code in complete cycles. I recall a QA engineer who once wanted to see what would happen if the computer running the storage server was powered off. Our CLIs time out if the socket does not receive a response within a specified default time. We are also able to ping the storage server to make sure the computer is alive. In addition, the storage server has the API CASPing() that checks if the actual service is up and processing requests.
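As an illustration, the machine-level liveness check (the step before calling CASPing(), whose actual signature is not shown here) could be sketched as:

using System;
using System.Net.NetworkInformation;

// Illustrative health check: verify the machine is alive via ICMP ping
// before asking whether the service itself is processing requests.
public static class HealthCheck
{
    public static bool HostIsAlive(string host, int timeoutMs = 2000)
    {
        using var ping = new Ping();
        try
        {
            return ping.Send(host, timeoutMs).Status == IPStatus.Success;
        }
        catch (PingException)
        {
            return false;   // name resolution failed or network unreachable
        }
    }
}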
When debugging a distributed system it is also good to be able to check software versions. We have implemented API calls to get the software versions, including the build date, of the different modules. We added the date because in some cases a piece of software may be built (and patched) with the same version on the same day. We can also get the software signature of the installed version in case there are further doubts about what software is executing on a machine. You never know what a clever customer or system administrator might do in the field.
Agile Software Development
This section deals with the book of the same name by Robert C. Martin. Lately I have read many articles about TDD, and a few of them mention Robert Martin. I have a copy of his book, which I typically use as a reference for design patterns, and it does have a few short references to TDD. I went to the index and found test-driven development with six references and test-first design with two. I would not consider the book a definitive guide for TDD; there is one by Kent Beck which I do not own but read many years ago. The copyright notice of Martin's book is dated 2003.
Martin does mention that by starting with a blank slate and adding unit tests you will produce better software. Such a claim needs to be backed up by large amounts of data collected from many different organizations around the world. I disagree with the assertion. Unit tests are just an artifact that, used in moderation, will help test code and in most cases will produce a better product. When you refactor and are lucky enough that you have not changed the expected results of all the unit tests, then you could assume all is well. If one or more unit tests fail, then in most cases the code has problems, but not always. Allow me to explain.
For starters, it is not a good idea for developers to write their own tests. The reason is that one will probably make the same mistakes in the unit tests as in the actual code. It is a much more reliable process to have a different software engineer work on the unit tests without looking at the source code. The best tests should always be done against requirements. That way one can find issues that someone looking only at the code would probably miss.
In most cases unit tests take time and resources and need to adapt to the changing software. Of course, if you are writing software that is mostly made of one-liner functions/methods, testing each one is in general a waste of time. Testing should be done at some higher level to make the effort reasonable and effective. I have seen many times unit tests that are longer and more complicated than the function/method being tested. In my book that represents a waste of time and resources and does not guarantee improvements in the quality of the product.
Proper Implementation of TDD
The essence of TDD as a methodology came up around 2000, out of a software development methodology called Extreme Programming (XP for short). The idea was to iterate through the different steps found in the waterfall model in order to be able to address changes in requirements.
Different diagrams have been developed and edited in order to refine the approach, to the point that it looks like Agile. In my opinion XP had good concepts but arrived somewhat too early to be accepted by software developers and managers. The issue with Agile and XP is that they are full methodologies. TDD is just an approach to developing code. It does not address the validity of requirements or the design and associated documentation.
CDP predates XP by at least five years. CDP was a full software development methodology. It covered all aspects, from the inception of the idea(s) for the product through architecture, design, implementation and testing. At the core of the methodology was the concept of constant change in requirements. Requirements need to be refined during the development process. The following is an edited version of a diagram from the CDP book:
In the diagram, the yellow circles represent the goals for the project as interpreted by the customer. The red circles represent the goals as interpreted by the development team. At the end of the first cycle, for many reasons, the goals are quite distant. At the end of the second cycle the goals are much closer, which represents progress. At the end of the third and final cycle the goals or expectations of both customers and developers have converged in the delivered product.
In a typical project a cycle is completed in no more than three to four months. Projects should not exceed a year in development. Business and technology change rapidly, and opportunities, in most cases, will not be met if the project lasts several years. It is better to introduce the software as soon as the first minimum viable product (MVP) is operational. Further projects can improve the product until it becomes obsolete.
This post is not about CDP, so I am going to switch gears. I will generate a post on CDP as soon as I complete the one on Agile. The takeaway is that requirements change and the software development methodology needs to manage change. The best way is to cycle in short periods of time (one week) and deliver the software to the product expert in order to make sure it meets expectations. If not, cycle from the requirements and repeat, keeping in mind that time and money are limited by the project schedule document.
TDD does not deal with change. It deals with having the developer think about, understand and evolve the specific requirement in order to implement its essence in the simplest, most performant and elegant way. A process that helps thinking and validating assumptions is to develop the code using a top-down approach in a cyclical way. Have one part (the test) make a request to another (the actual function/method) and initially fail (there is nothing on the other side to respond to the request). Iterate, implementing the actual code. As one develops the software, both the server and the client will improve. These cycles can take from several minutes to several hours. At the end of the week the developer will have one or more completed targets. You can call this micro Agile, micro CDP or TDD. The essence is to start with a blank slate and, by iterating on both the client and the server modules in minutes, hours or days, produce the best possible implementation of the requirement at hand given the available resources.
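The following toy sketch captures the first of these cycles: the caller exercises the receiver before the receiver does anything useful, so the first run fails by design (all names are illustrative):

using System;

// Micro-cycle illustration: iteration 0 fails on purpose; later iterations
// replace the throwing body with the real implementation and run again.
public static class MicroCycle
{
    // Iteration 0: blank slate, nothing to respond to the request yet.
    public static long GetLocalObjectCount() =>
        throw new NotImplementedException("iteration 0: nothing to respond yet");

    public static void Main()
    {
        try
        {
            long count = GetLocalObjectCount();
            Console.WriteLine($"count = {count}");   // reached in a later iteration
        }
        catch (NotImplementedException e)
        {
            // Expected on the first cycle; implement, then run again.
            Console.Error.WriteLine($"as expected, not implemented: {e.Message}");
        }
    }
}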
Unit, Module and System Testing
As soon as the expected results are reliably obtained, the developer should put the target function/method in the hands of a different developer to take care of designing and implementing reasonable unit tests.
Why should the tests be generated by a different engineer? As we previously mentioned, if the initial implementation approaches the requirements incorrectly, the same will happen with the tests. They will verify that incorrect software is properly tested and works. The time spent has doubled and it is a total waste, because the same or a different engineer will then have to design and implement the proper code that matches the actual requirement.
Have you ever thought about why there is a QA (short for Quality Assurance) team? It is not to offload the perhaps boring testing process to a less experienced developer in order to save money. The reason is to have someone else look at the requirements and decide on a short set of tests that verify the requirement has been implemented properly. If the engineer who developed the code is the one who tests it, incorrect implementations will eventually reach the hands of the customer.
Conclusion
TDD is not starting with unit tests and then implementing code so the associated function/method, which at that point does not exist, can pass them. It is about starting with blank client/caller and server/receiver implementations and cycling through until both reliably return the expected results. In the process, the format of the calls and responses may, and on most occasions will, change for the better. What is important are the iterations, in order to gain experience with what is going on and determine if the requirements are met and perhaps exceeded.
Care should be taken not to spend time exceeding the requirements, because that is just a waste of time. What needs to be addressed is that both client and server are solid and reliable. For example, if all tests are done with positive values, perhaps zero or negative values should be tested. What if, instead of the sets of just a few objects used by the client, the caller also uses sets with hundreds, thousands or millions of objects? Perhaps things become unresponsive when processing hundreds of items. That would signal the need for a better algorithm. Before proceeding one should look for the bottleneck in the software.
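As an illustration, a small probe could exercise a stand-in for the code under test with boundary values and increasingly large set sizes, timing each call to spot the point where things become unresponsive:

using System;
using System.Diagnostics;

// Illustrative boundary and load probe; ProcessObjects is a stand-in
// for the actual code under test, not a product API.
public static class BoundaryProbe
{
    public static void Main()
    {
        foreach (int size in new[] { -1, 0, 1, 100, 10_000, 1_000_000 })
        {
            var watch = Stopwatch.StartNew();
            try
            {
                ProcessObjects(size);
                Console.WriteLine($"size {size,9}: ok in {watch.ElapsedMilliseconds} ms");
            }
            catch (ArgumentOutOfRangeException)
            {
                Console.WriteLine($"size {size,9}: rejected (expected for negative sizes)");
            }
        }
    }

    // Stand-in for the code under test.
    private static void ProcessObjects(int count)
    {
        if (count < 0) throw new ArgumentOutOfRangeException(nameof(count));
        for (int i = 0; i < count; i++) { /* simulate per-object work */ }
    }
}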
These and many other scenarios should come to mind and be addressed when using TDD. Use the essence of TDD; do not just check that you have an associated unit test developed prior to the implementation of the function/method under development. That is not what TDD is about.
One more thing: most people believe that TDD is a silver bullet and that if you practice it, the resulting code will be perfect. Remember that there is no silver bullet. Different tasks may require different approaches and tools. Make sure you think about the requirement at hand and utilize the best possible approach and tools to implement it.
As Usual
If you have comments or questions regarding this, or any other post in this blog, or if you would like for me to be of assistance with any phase in the SDLC (Software Development Life Cycle) of a project associated with a product or service, please do not hesitate to leave me a note below. If you prefer, send me a message using the following address: john.canessa@gmail.com. All messages will remain private.
Keep on reading and experimenting. It is the best way to learn, refresh your knowledge and increase your development toolset!
John
Follow me on Twitter: @john_canessa