Monday, February 6, 2012

Students, Take Note

After graduating college and being exposed to the large array of ideas in industry, I started to reflect on what it would have meant to encounter those ideas while still in school. I wonder how much time and frustration I could have avoided with three specific disciplines I have since only begun to understand. In fairness, I could not have avoided that frustration; I was not actively seeking these disciplines out, and I suspect most Computer Science majors are in the same position. These ideas are not necessary to graduate. They are not even necessary to do well; school is not designed that way. They are, however, necessary for good design and programmer sanity. What ideas am I speaking of? Acceptance Testing, Test-Driven Development, and the Single Responsibility Principle.

I put acceptance testing first in the list for a reason. Acceptance testing is a little harder to grasp than test-driven development, probably because it operates at a higher level. Acceptance tests "act to verify that the system is behaving as the customers have specified" (1). What does this mean to a student, who has no customers? Think of it this way: as a student, you must approach assignments as if they were your job, as if your financial stability and your reputation were at stake. Your teacher is your customer. You want to make sure your customer is happy, and for that reason you want acceptance tests to assure their happiness. In fact, how can you assure yourself the highest grade possible other than by proving that you deserve it? How can you assure your customer's happiness other than by proving that their desires are fulfilled?

Of course you are saying, 'Well, duh, I want to get the highest grade possible.' Great. Pick an acceptance testing framework suitable to your situation. Then begin to translate the assignment's requirements into high-level system requirements. Think to yourself: how will the user use the system? Let's look at a quick example from when I was in school. One of my projects involved writing a chained hash map implementation. We needed to be able to add, delete, and look up entries within the hash map. Excellent. Here is an example of the high-level design.
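The scenario examples themselves did not survive in this archived copy of the post, so here is a hypothetical reconstruction of what they might have looked like, phrased as executable black-box checks rather than any particular framework's scenario syntax. `ChainedHashMap` is an assumed name, backed here by a plain `dict` only so the scenarios can run:

```python
# Hypothetical reconstruction -- the original examples are missing from this
# archived post. Each scenario checks a feature the customer asked for
# (add, delete, lookup), never how the map is implemented.

class ChainedHashMap:
    def __init__(self):
        self._data = {}          # stand-in for the real chained buckets
    def add(self, key, value):
        self._data[key] = value
    def delete(self, key):
        self._data.pop(key, None)
    def lookup(self, key):
        return self._data.get(key)

# Scenario: adding a key makes it retrievable
m = ChainedHashMap()
m.add("alpha", 1)
assert m.lookup("alpha") == 1

# Scenario: deleting a key makes lookup miss
m.delete("alpha")
assert m.lookup("alpha") is None
```

Note how each scenario reads in terms of what the customer asked for, not in terms of buckets, hashing, or chains.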



I write these first, since these are the features I need to implement. As I develop, I am actively working toward completing these high-level goals one step at a time. In fact, since this is a chained hash map, let's write one more test, because we thought ahead.
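The original follow-up example is also missing from the archive, so here is a sketch of the kind of forward-thinking scenario a chained map calls for: two keys that land in the same bucket must both remain retrievable. The class below is only a minimal stand-in written to make the scenario runnable, not the assignment's actual code:

```python
class ChainedHashMap:
    """Minimal chained implementation, only to make the scenario run."""
    def __init__(self, buckets=4):
        self._buckets = [[] for _ in range(buckets)]
    def add(self, key, value):
        bucket = self._buckets[hash(key) % len(self._buckets)]
        for pair in bucket:
            if pair[0] == key:
                pair[1] = value      # overwrite an existing key
                return
        bucket.append([key, value])  # chain a new entry onto the bucket
    def lookup(self, key):
        bucket = self._buckets[hash(key) % len(self._buckets)]
        for k, v in bucket:
            if k == key:
                return v
        return None

# Scenario: two keys that collide both survive in the map
m = ChainedHashMap(buckets=1)  # a single bucket forces every key to collide
m.add("alpha", 1)
m.add("beta", 2)
assert m.lookup("alpha") == 1
assert m.lookup("beta") == 2
```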



I start at the top and work my way down. Notice how these steps are agnostic to implementation. They point only toward satisfying the client, not toward low-level implementation details. As I progress, I write the step implementations that fulfill the scenarios I have laid out. Best of all, as I move from one feature to the next, I can be sure that I have not broken previously working functionality. When I'm done, I have assured my project's completion and my professor's (customer's) happiness. Be sure to read up on your framework, because each one specifies its scenarios differently, and each glues its scenarios to its test code differently as well.
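To make that gluing concrete: frameworks such as Cucumber and FitNesse each have their own mechanism, but the core idea is a mapping from scenario lines to test code. Here is a toy sketch of that idea; the step patterns and the `run` helper are invented purely for illustration and resemble no real framework's API:

```python
import re

# Toy "glue" layer: scenario lines are matched against registered patterns,
# and each match dispatches to a plain Python function.

steps = []

def step(pattern):
    """Register a function to handle scenario lines matching pattern."""
    regex = re.compile("^" + re.sub(r"\{(\w+)\}", r"(\\S+)", pattern) + "$")
    def register(fn):
        steps.append((regex, fn))
        return fn
    return register

@step("I add {key} with value {value}")
def add_step(world, key, value):
    world[key] = value

@step("looking up {key} yields {value}")
def check_step(world, key, value):
    assert world.get(key) == value

def run(scenario):
    """Execute each scenario line by dispatching to its glued step."""
    world = {}
    for line in scenario:
        for regex, fn in steps:
            match = regex.match(line)
            if match:
                fn(world, *match.groups())
                break
        else:
            raise ValueError("no step matches: " + line)

run(["I add alpha with value 1",
     "looking up alpha yields 1"])
```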

Throughout school I can remember the feeling of being finished with an assignment, only to begin my manual testing before submission. I would click around, type some input, read back the output, and confirm that it was correct. Then I would get that feeling of horror in my stomach when a previously working feature failed. My stress level would instantly rise, and I would spend an inordinate amount of time in a loop: fix one feature, then discover, through yet more manual testing, that I had broken the program somewhere else. Once one gets into the habit of acceptance testing, that loop changes drastically, and for the better! The stress of breaking some far-off part of the system is mitigated by instantaneous feedback when it happens. We know immediately if a feature was implemented incorrectly, because our tests tell us.

The next idea I wish I had known about in college is Test-Driven Development (TDD). This is testing at a lower level than acceptance testing: TDD is the process of testing each individual, modular piece of code in the system. In fact, it is not just testing the code, but testing the code before it is written. Do not worry; it's not as bizarre as it sounds. When I begin to write a new class or a new method, I first write a failing unit test. I specify, somewhat like an acceptance test's scenario, what the module should do. The unit test is agnostic to implementation; it just checks that the implementation works. I watch the test fail. Then I implement. If my first try does not work exactly as I intended, I immediately receive feedback on what went wrong. Why is this a good idea? Where acceptance testing ensures the happiness of your client, test-driven development ensures the happiness of yourself and your group. If my test passes, I am assured that this modular piece of code conforms to my automated test's standards. Writing a test first forces the implementation to be flexible enough to run independently in a series of unit tests. This flexibility goes a long way. In fact, when I write unit tests I don't expect the flexibility to pay off initially; I expect it to pay off over time, and in ways I cannot yet imagine.
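A minimal sketch of that rhythm, using a made-up `slugify` helper that is not from the original post: write the test first, watch it fail because the function does not exist yet, then write the simplest code that passes.

```python
import unittest

# Step 1: the test comes first. Run it before slugify exists and it fails
# with a NameError -- that failure is the specification doing its job.
class SlugifyTest(unittest.TestCase):
    def test_lowercases_and_joins_with_dashes(self):
        self.assertEqual(slugify("Students Take Note"), "students-take-note")

# Step 2: now write the simplest implementation that makes the test pass.
def slugify(title):
    return "-".join(title.lower().split())

suite = unittest.defaultTestLoader.loadTestsFromTestCase(SlugifyTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```

Notice that the test never mentions how `slugify` works internally; any implementation that lowercases and joins with dashes would pass it.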

I might be speaking only for myself, but I can remember writing monster functions as a student. When I changed one little line in a twenty-line function, it would break one case while leaving the others intact. The changes always seemed simple enough, but they never were. If I had written tests, I would have known immediately what had broken and where; a suite of unit tests is excellent at pinpointing where a code change went wrong. Couple unit tests with acceptance tests and your software's reliability and flexibility increase, and as they do, your stress level goes down, because the development process contains fewer bugs and fewer surprises.

The Single Responsibility Principle (SRP) should be thought of as going hand in hand with TDD. SRP is one of the SOLID principles, but I want to talk about it in isolation. One problem with my code as a student, which I can see only in hindsight, was large, unwieldy methods and classes with far too many responsibilities. It is important to let TDD drive SRP in your code. When you write a new test, ask yourself: is this testing one responsibility? If not, how can I rethink my design so that it does? The flexibility afforded by TDD is multiplied by SRP. When we divide responsibilities fairly, one per module, we do not hide side effects in our code; we handle our use cases explicitly and purposefully. When we do not hide side effects, we do not have surprises lying in wait. And with TDD, we can point to the single responsibility that has broken down in our system, confident that the broken test exercises only one thing, because it covers only one responsibility.
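A small sketch of what that question forces, with hypothetical function names not taken from the original post: a routine that both formats a grade report and writes it to disk has two responsibilities, so any test of it must touch the filesystem. Split the responsibilities, and each one gets a fast, isolated test.

```python
def format_report(scores):
    """Responsibility 1: turn scores into text. No I/O here."""
    return "\n".join(f"{name}: {score}"
                     for name, score in sorted(scores.items()))

def save_report(text, path):
    """Responsibility 2: persist text. No formatting decisions here."""
    with open(path, "w") as f:
        f.write(text)

# The formatting test now needs no files, no setup, and no cleanup:
assert format_report({"bob": 90, "ann": 85}) == "ann: 85\nbob: 90"
```

When a report ever comes out wrong, the failing test names exactly which responsibility broke: the formatting or the saving, never a tangle of both.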

SRP is another tenet of flexible design. When we break apart our responsibilities, we allow ourselves to change any one use case rather easily. We isolate the responsibility we intend to change. We find the test for that responsibility and change its assertions to match the new use case. We watch the test fail. We find the corresponding module to change, knowing the use case is safe to change because it is isolated. We make the test pass. We run our unit test suite and watch it pass. We run our acceptance test suite and watch it pass. All the while, we can think about the surprises and stress we have avoided.

1) Martin, Robert C. Agile Software Development: Principles, Patterns, and Practices. New Jersey: Pearson Education, 2003. p. 13.
