Thursday, February 9, 2012

Day In Review

I feel as though the design of my server is starting to pay off. After some very useful refactorings, the server started to feel flexible and easily extensible. I was able to plug in a DB Persistor fairly easily. Implementing the DB Persistor itself was another story.

I first thought that I wanted to serialize my objects into strings to avoid any type problems in the database. There is an awesome library named Gson which serializes Java objects into JSON. The user specifies the type expected upon deserialization and it works like a charm. There was one rough spot, however, and that is with interfaces. If you serialize an object that holds references to interfaces, then deserialization becomes hell. The deserializer does not know how to recreate the concrete implementation behind each interface, since the type it is asked to rebuild only names the nested interfaces.
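To make that rough spot concrete, here is a small sketch of the round trip. Gson itself is real; the Game class is just a stand-in for illustration.

```java
import com.google.gson.Gson;

// Hypothetical game class used only for illustration.
class Game {
    String playerName = "X";
    int[] board = {0, 0, 0, 0, 0, 0, 0, 0, 0};
}

public class GsonRoundTrip {
    public static void main(String[] args) {
        Gson gson = new Gson();

        // Serialize a concrete object into a JSON string.
        String json = gson.toJson(new Game());

        // Deserialize by telling Gson which concrete type to rebuild.
        Game restored = gson.fromJson(json, Game.class);
        System.out.println(restored.playerName);

        // The rough spot: if Game held a field declared as an interface
        // (say `Player player;` where Player is an interface), Gson has no
        // way to know which concrete class to instantiate on the way back,
        // and fromJson blows up unless you register a custom adapter.
    }
}
```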

In the end, I avoided that mess by making my database hold references to Object. I then cast the references back to their concrete types when pulling them out, and it worked with no problem.
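The workaround looks roughly like this; the names are made up for the sketch.

```java
import java.util.HashMap;
import java.util.Map;

// A minimal sketch: the store holds plain Objects, callers cast on the way out.
public class ObjectStore {
    private final Map<String, Object> records = new HashMap<String, Object>();

    public void put(String key, Object value) {
        records.put(key, value);
    }

    public <T> T get(String key, Class<T> type) {
        // The caller names the concrete type; a wrong guess surfaces
        // immediately as a ClassCastException.
        return type.cast(records.get(key));
    }
}
```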

Tuesday, February 7, 2012

Day In Review

In the morning I continued working on my HTTP server, working towards finishing the large story of my iteration. I had everything working except for one pesky little bug, which in turn stopped me from completing my iteration. Of course, after the iteration I found the problem and had everything working. When I was reading the input stream of an incoming HTTP packet, I had an if statement that asked whether the input stream was ready to be read. This check would fail roughly 1 in 100 times, which allowed it to go unnoticed for a long period of time (and allowed me to pass my acceptance test). In the end I added a while loop that sits and waits until the input stream is ready to be read.
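The fix is roughly the sketch below, written against a BufferedReader. A busy-wait like this is blunt, but it matches the change described; a read timeout on the socket would be a reasonable refinement later.

```java
import java.io.BufferedReader;
import java.io.IOException;

public class RequestReader {
    // Instead of checking ready() once and occasionally reading nothing,
    // spin until the stream actually has data.
    public String readRequestLine(BufferedReader in) throws IOException {
        while (!in.ready()) {
            // The socket has been accepted but the bytes haven't arrived yet;
            // wait for them rather than giving up after a single check.
        }
        return in.readLine();
    }
}
```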

In the afternoon I had my IPM with Paul. I made my stories smaller in scope and in points in order to avoid missing an iteration again. I will also be narrowing the scope of my recent blog post 'Students, Take Notice.' The new post will focus on acceptance testing for small school projects.

Monday, February 6, 2012

(Retro) Day In Review

On Friday I spent the morning working on my HTTP Server. After 8th Light University, we had a discussion about what the University events should look like moving forward. It felt like a productive meeting about the changing style of the event (as attendance keeps growing). We also had a quick meeting for Artisan.

Afterwards, I took a look at the Jukebox code base in an attempt to implement a new feature. On the to-do list was a feature request for a hammertime to pause the system upon completion. The idea is that if a standup hammertime plays, the users will not want the jukebox to continue afterwards, so it can stay quiet for standup. I plan to revisit the little bit of work I got done on this when I have some free time.

Students, Take Note

After graduating college and becoming exposed to the large array of ideas in industry, I started to reflect on what it would have meant to have had exposure to these ideas while in school. I wonder how much time and frustration I could have avoided with three specific disciplines I have since only begun to understand. In all fairness, I did not avoid that time and frustration, because I was not actively seeking these disciplines out. I suspect most Computer Science related majors are in the same position. These ideas are not necessary to graduate. They are not even necessary to do well; school is not designed that way. They are, however, necessary for good design and programmer sanity. What ideas am I speaking of? Acceptance Testing, Test-Driven Development, and the Single Responsibility Principle.

I put acceptance testing first in the list for a reason. Acceptance testing is a little bit harder to grasp than test-driven development, probably because it operates at a higher level. Acceptance tests "act to verify that the system is behaving as the customers have specified" (1). What does this mean to a student who has no customers? Think of it this way: as a student, you must approach assignments as if they were your job, as if your financial stability and your reputation were at stake. Your teacher is your customer. You want to make sure your customer is happy, and for that reason you want acceptance tests to assure their happiness. In fact, how can you assure yourself the highest grade possible other than by proving that you deserve it? How can you assure your customer's happiness other than by proving that their desires are fulfilled?

Of course you are saying, 'well duh, I want to get the highest grade possible.' Great. Pick an acceptance testing framework suitable to your situation. Then begin to translate the requirements into high-level descriptions of how the system should behave. Think to yourself: how will the user use the system? Let's look at a quick example from when I was in school. One of my projects involved writing a Chained Hash Map implementation. We needed to be able to add, delete, and look up entries in the Hash Map. Excellent. Here is an example of that high-level design.
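The sketch below uses plain JUnit against a hypothetical ChainedHashMap with put, get, and delete; a scenario-based framework like Cucumber would say the same things in plain-language scenarios, but the shape is the same: describe the features only through the public interface.

```java
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNull;

import org.junit.Test;

// High-level checks for a hypothetical ChainedHashMap.
// They describe what the assignment asks for, not how it is built.
public class ChainedHashMapAcceptanceTest {

    @Test
    public void addsAndLooksUpAValue() {
        ChainedHashMap<String, String> map = new ChainedHashMap<String, String>();
        map.put("name", "Alice");
        assertEquals("Alice", map.get("name"));
    }

    @Test
    public void deletesAValue() {
        ChainedHashMap<String, String> map = new ChainedHashMap<String, String>();
        map.put("name", "Alice");
        map.delete("name");
        assertNull(map.get("name"));
    }

    @Test
    public void lookingUpAMissingKeyReturnsNothing() {
        ChainedHashMap<String, String> map = new ChainedHashMap<String, String>();
        assertNull(map.get("never-added"));
    }
}
```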



I write these first since these are the features I need to have implemented. As I develop, I am actively working towards completing these high-level goals one step at a time. In fact, since this is a chained hash map, let's write one more test because we thought ahead:
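Sketched the same way, a forward-looking test pins down collision behavior before any buckets exist. The single-bucket constructor here is made up purely to force the chaining.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class ChainedHashMapCollisionTest {

    @Test
    public void twoKeysInTheSameBucketBothSurvive() {
        // Assume a constructor that forces collisions by using one bucket;
        // every key then chains into the same slot.
        ChainedHashMap<String, String> map = new ChainedHashMap<String, String>(1);
        map.put("first", "one");
        map.put("second", "two");
        assertEquals("one", map.get("first"));
        assertEquals("two", map.get("second"));
    }
}
```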



I start at the top and work my way down. Notice how these steps are agnostic to implementation; they point only towards satisfying the client, not towards low-level implementation details. As I progress, I write the steps that fulfill the scenarios I have laid out. Best of all, as I move from one feature to the next I can be sure that I have not broken previously working functionality. When I'm done I have assured my project's completion and my professor's (customer's) happiness. Be sure to read up on your framework, because each one specifies its scenarios, and glues those scenarios to its test code, a little differently.

Throughout school I can remember having the feeling of being finished with an assignment, only to begin my manual testing before submission. I would click around, type some input, read back the output, and make sure it was correct. Then I would get that feeling of horror in my stomach when a previous feature now failed. My stress level would instantly rise and I would spend an inordinate amount of time in a loop: fix one feature, only to find out, through more manual testing, that I had broken my program somewhere else. Once you get into the habit of acceptance testing, that loop changes drastically, and for the better! The stress of breaking some far-off part of the system is mitigated by instantaneous feedback when it happens. We instantly know if some feature was implemented incorrectly because our tests will tell us.

The next idea I wish I had known about in college was Test-Driven Development (TDD). This is testing at a lower level than acceptance testing: TDD is the process of testing each individual modular piece of code in the system. In fact, it's not just testing the code, but testing the code before it is written. Do not worry, it's not as bizarre as it sounds. When I begin to write a new class or a new method, I first write a failing unit test. I specify, somewhat like an acceptance test's scenario, what the module should do. The unit test is agnostic to implementation; it just checks that the implementation works. I watch the test fail. Then I implement. If my first try does not work exactly as I intended, I immediately receive feedback on what went wrong. Why is this a good idea? Where acceptance testing ensures the happiness of your client, test-driven development ensures the happiness of yourself and your group. If my test passes, I am assured that this modular piece of code conforms to my automated test's standards. Writing the test first forces the implementation that passes it to be flexible enough to run independently in a suite of unit tests. This flexibility goes a long way. In fact, when I write unit tests I don't expect the flexibility to pay off initially. I expect it to pay off over time, and in ways I cannot yet imagine.
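To make the rhythm concrete, here is a tiny sketch against the same hypothetical ChainedHashMap: the test for a containsKey method is written, and watched to fail, before the method exists.

```java
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

// Written first; it fails because containsKey does not exist yet.
public class ChainedHashMapTest {

    @Test
    public void knowsWhichKeysItHolds() {
        ChainedHashMap<String, Integer> map = new ChainedHashMap<String, Integer>();
        map.put("answer", 42);
        assertTrue(map.containsKey("answer"));
        assertFalse(map.containsKey("question"));
    }
}
```

Only then is containsKey written, perhaps as nothing more than a one-liner that checks whether get(key) returns something, and the test is run again to watch it pass.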

I might only be speaking for myself, but I can remember writing monster functions as a student. When I changed one little line in a twenty-line function, it would break for one case but not for all the others. The changes always seemed simple enough, but they never were. If I had written tests I would have immediately known what had broken and where. A suite of unit tests is excellent at pinpointing where a code change went wrong. Couple unit tests with acceptance tests and your software's reliability and flexibility increase; and as reliability and flexibility increase, your stress level goes down, since the development process starts to contain fewer bugs and fewer surprises.

The Single Responsibility Principle (SRP) should be thought of as going hand in hand with TDD. SRP is part of the SOLID principles, but I want to talk about this one principle in isolation. Reflecting back, one problem I had with my code as a student was large, unwieldy methods and classes with far too many responsibilities. It is important to let TDD drive SRP in your code. When you write a new test, ask yourself: is this testing one responsibility? If not, how can I rethink my design so that it does? The flexibility afforded by TDD is multiplied by SRP. When we divide responsibilities fairly, one per module, we do not hide side effects in our code. We handle our use cases explicitly and purposefully. When we do not hide side effects, we do not have surprises lying in wait. And with TDD in place, we can point to the single responsibility that has broken down in our system and know with confidence that the broken test is only doing one thing; it has only one responsibility.
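Here is a rough sketch of what that question looks like in code. The names are hypothetical; the idea is that parsing and rendering once lived in one class, which meant two reasons to change and no way to test one without dragging the other along.

```java
// Split apart, each class has one responsibility and one focused test.

class RequestParser {
    // Knows only how to pull pieces out of a raw request line.
    public String methodOf(String requestLine) {
        return requestLine.split(" ")[0];
    }
}

class ResponseRenderer {
    // Knows only how to turn a body into an outgoing HTTP response string.
    public String render(String body) {
        return "HTTP/1.1 200 OK\r\n\r\n" + body;
    }
}
```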

SRP is another tenet of flexible design. When we break apart our responsibilities, we allow ourselves to change any one use case rather easily. We isolate the responsibility we need to change. We find the test for that responsibility and change its assertions to match our new use case. We watch the test fail. We find the corresponding module to change. We know the use case is safe to change because it is isolated. We make the test pass. We run our unit test suite and watch it pass. We run our acceptance test suite and watch it pass. All the while we can think about the surprises and stress we have avoided.

1) Martin, Robert C. Agile Software Development: Principles, Patterns, and Practices. New Jersey: Pearson Education, 2003. p. 13.

Day In Review

The day started with the bi-weekly Hosemonster IPM. Thankfully, the iteration is slow and laid back enough for us apprentices to take part and have the opportunity to learn. It's nice to be able to take time to learn about process and good code while simultaneously dealing with a live client. I enjoy the feedback of an IPM because the direction of the application comes into focus while the client and team work out the details of the big picture. It's nice to see both of those perspectives come together at one time.

After the IPM I attacked a bug in the PDF generation for Work Orders. When the app split into two apps (Hosemonster and Limelight-Hosemonster-UI) there was a weird break in the templating for the PDF generator. I felt accomplished when I tracked the bug down because I was able to restore the previous functionality and remove a superfluous function from the namespace. The solution felt cleaner and I was proud of that.

I also paired with Wei Lee on a bit of refactoring in preparation for plugging in the graphs we had been working on for quite some time.

Friday, February 3, 2012

Day In Review

My HTTP Server's refactoring has been going fairly well. Currently, the Packet and PacketParser classes are messy, and I am slowly refactoring their behavior. I'm also pulling the responsibility of generating a packet's return string out of the Packet class. I plan to let the Packet be only a hash of HTTP packet attributes. I will then use the presenter pattern and make a Packet Presenter to generate the outgoing packet.
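Roughly what I have in mind, with the caveat that the names and shapes below are still just a sketch:

```java
import java.util.HashMap;
import java.util.Map;

// The Packet is nothing but a hash of HTTP attributes.
class Packet {
    final Map<String, String> attributes = new HashMap<String, String>();
}

// The presenter owns the one job of turning a Packet into outgoing text.
class PacketPresenter {
    public String present(Packet packet) {
        return "HTTP/1.1 "
            + packet.attributes.get("status")
            + "\r\n\r\n"
            + packet.attributes.get("body");
    }
}
```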

I've also run into some trouble. Before, I kept my in-memory data store at the top level of the server. Now I need to find a different way to persist the state of Tic-Tac-Toe games, since the top level is agnostic to the implementation and can no longer hold the data store. I've been thinking over the possibilities, but it's hard to choose something to refactor towards. Once I have my Packet workflow refactored I will tackle this decision.

Wednesday, February 1, 2012

Day In Review

I completed my second apprenticeship iteration today and started a new one. This week I will be contributing three points to Hosemonster, and the remaining time will be spent refactoring my HTTP server from last summer. Currently, the server is integrated with my Tic-Tac-Toe game. I had thought the two were heavily intertwined, but I'm finding that not to be the case. I'm missing a few key server abstractions, but the refactoring is going fairly smoothly after one day.

One atrocity of the code base was the state of my test suite. I ran the tests when I first pulled the project and they froze. Why did they freeze? They were waiting for STDIN input to move forward with specific tests. This was no good, so I started hand-rolling mocks and fixing the state of my tests. Once I felt comfortable with the tests in place, I moved on to refactoring.
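For the curious, hand-rolling a fake for STDIN can be as simple as the sketch below; the Scanner stands in for the real input-reading code just to keep the example self-contained.

```java
import static org.junit.Assert.assertEquals;

import java.io.ByteArrayInputStream;
import java.io.InputStream;

import org.junit.After;
import org.junit.Test;

public class ConsoleInputTest {
    private final InputStream originalIn = System.in;

    @After
    public void restoreStdin() {
        System.setIn(originalIn);
    }

    @Test
    public void readsAMoveWithoutWaitingOnAKeyboard() {
        // Feed the "user input" from memory instead of a real console,
        // so the test never hangs waiting for a keyboard.
        System.setIn(new ByteArrayInputStream("5\n".getBytes()));

        java.util.Scanner scanner = new java.util.Scanner(System.in);
        assertEquals("5", scanner.nextLine());
    }
}
```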

I've been thinking about a way of metaphorically naming my server components at a high level. One idea I like is to use business metaphors, as if the server were an office. The forward-facing ServerSocket piece would be the 'Receptionist.' The Receptionist would pass the socket off to a 'Middleman,' which I was originally calling the Dispatcher. The Middleman would then pass off to the 'CEO.' The CEO metaphor probably won't stick; I don't like it. The CEO is currently a placeholder for the interface to the business logic, the part that is ignorant of the framework. The naming scheme is a work in progress.
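If the metaphor sticks, the seams might look something like this sketch; the names and signatures are only my current thinking, nothing final.

```java
import java.net.Socket;

// Greets connections at the door and hands each one inward.
interface Receptionist {
    void greet(Socket client);
}

// Carries a request from the door to whoever can act on it.
interface Middleman {
    Response deliver(Request request);
}

// Stand-in for the framework-ignorant business logic behind everything.
interface CEO {
    Response decide(Request request);
}

// Hypothetical request/response values, only here so the seams compile.
class Request { }
class Response { }
```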