Wednesday, February 29, 2012

Day In Review

The Iteration Planning Meeting for the project I am joining happened this morning. It served as a good introduction for me to the app that I will be working on shortly. I was able to see many of the workflows available and familiarize myself with the features that are in the pipeline.

After the iteration meeting I continued working on my payment API wrapper. I finished the basic functionality needed to plug into the app; that is to say, it accepts check payments. I plan to start integrating the wrapper into the app tomorrow.

Tuesday, February 28, 2012

Day In Review

I continued wrapping the payment processing API today. I ran into some issues with my integration test post-refactoring, which led me down a small rabbit hole. Thankfully, I was able to pull myself back out before I lost too much sanity. Afterwards, it felt good to end the day totally green (specifically with unit and integration tests). Tomorrow there is an iteration meeting, and I am excited to introduce myself to the team by showing some of the functionality I have been working on.

Since the team uses JRuby, I've been running my test suite against JRuby before commits. I would develop in JRuby, but it feels too slow for quick feedback loops (MRI, however, is blazingly fast). I feel as though Clojure can evaluate at a quicker pace than JRuby, and I'm curious to benchmark this to see if my feeling is true. If it is, it might be worthwhile to investigate the differences and see how each utilizes the JVM (I've been interested in the JVM lately for a variety of reasons).

Monday, February 27, 2012

Ruby's Functional Programming

Ruby is a fully object-oriented language and is, in fact, so dedicated to objects that every entity in the language is an object. After hearing that statement it might seem a little weird that Ruby also has functional support. Ruby's functional aspects are powerful and complete, which makes them worthwhile to learn. We can invoke operations such as reduce and filter with the use of code blocks. We can also pass functions around directly with the use of lambda and Proc objects. In this post I'll show you some of the more powerful tricks, point out their potential pitfalls, and document the experience with code to play with.

Every method call can be given an additional code block. If you don't believe me, put a block on the end of every method call that currently does not have one and watch as (almost) everything still works as intended (please don't really do that). I'll introduce code to show this, but first let's see a straightforward imaginary workflow:
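
Something along these lines, where the Employee class and the little lookup table are just stand-ins I'm using for illustration:

    class Employee
      attr_reader :name

      def initialize(name)
        @name = name
      end
    end

    # A stand-in data store so the example runs on its own.
    EMPLOYEES = { 1 => Employee.new("Ada") }

    def get_employee(id)
      EMPLOYEES[id]
    end

    def get_name(employee)
      employee.name
    end

    employee = get_employee(1)
    get_name(employee)  # => "Ada"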

Now get_name just returns the name attribute of an object, as you might guess. If we wanted to use this to get the name of an employee we would first find the employee and then pass it in here as a parameter, as we have done above. Let's show the same workflow, but this time let's allow the code block to give us the name:
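
Reusing the Employee stand-ins from the sketch above, get_employee might hand control to the block like so:

    def get_employee(id)
      employee = EMPLOYEES[id]
      # If a block was supplied, let it decide what to return.
      block_given? ? yield(employee) : employee
    end

    get_employee(1) { |employee| employee.name }         # => "Ada"
    get_employee(1) { |employee| employee.name.upcase }  # => "ADA"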

Cool, right? We can build any employee interaction we want off of get_employee using code blocks. This is an imaginary case, however, and code blocks aren't always the best option, so use them wisely.

Many of the standard library's methods accept code blocks, which lets us make use of functional ideas in Ruby code. For example, let's look at an inject that sums all of the given elements multiplied by two.
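
With a throwaway array of numbers, it looks like this:

    [1, 2, 3, 4].inject(0) { |sum, n| sum + (n * 2) }  # => 20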

These are powerful expressions because the intermediate steps (i.e., the summations and a single element's multiplication by two) are stateless. We simply put data into the inject and a single answer comes out with no side effects. Other such functional methods include (but are not limited to) reduce (an alias for inject), collect, and reject. Ruby's Enumerable has lots of functional methods.
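
For a quick taste of a couple of the others, again with made-up data:

    names = ["Alice", "Bob", "Carol"]

    names.collect { |name| name.upcase }           # => ["ALICE", "BOB", "CAROL"]
    names.reject  { |name| name.start_with?("B") } # => ["Alice", "Carol"]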

The last functional item I want to share is the use of closures. In Ruby we can make use of lambda, Proc, and method(:name) to create closures. They all appear to be very similar, but have subtle differences. For the sake of learning we will ignore the subtle differences and use Proc to explain the concept. Procs are objects that encapsulate executable code. With a Proc we can bundle up some code and pass a Proc object around until we are ready to call it. For a simple example let's look at the following:
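
Here the greeting is just filler; the point is that the code is bundled up and only runs when we call it:

    greet = Proc.new { |name| "Hello, #{name}!" }

    # Nothing runs until we explicitly call the Proc.
    greet.call("world")  # => "Hello, world!"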

This should feel very similar to the code blocks we discussed earlier. That is because code blocks are a type of closure! Think of Procs as code blocks that can be held for later use. Let's explore closures a little more:
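
Here's a sketch; the module and class names are made up, but they line up with the explanation that follows:

    module Foo
      class Bar; end

      # The Proc is declared inside Foo, so the bare constant Bar
      # inside it resolves to Foo::Bar.
      GEN_BAR = Proc.new { Bar.new }
    end

    module Example
      class Bar
        def gen_bar
          Foo::GEN_BAR.call
        end
      end
    end

    Example::Bar.new.gen_bar.class  # => Foo::Bar, not Example::Bar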

What happened here? We invoked gen_bar in the Example::Bar object, and therefore Bar.new should give us a new Example::Bar, right? Wrong! Procs are evaluated in the scope in which they were declared. That means that in this case the Proc was executed in the context of module Foo even though it was called from module Example. This is something to keep in mind as closures are passed between classes and modules.

Functional concepts in Ruby can make coding easier, cleaner, and more expressive. It's important to understand the concepts in order to use them correctly when a problem calls for a functional solution.

Day In Review

I began implementing an API wrapper around a payment processing and verification service. The workflow involves building up the required XML, shipping it off, and then making assertions based upon the return values. The code can get overly complicated if one is not careful to separate concerns and build out modular pieces. The wrapper itself is meant to abstract the API away from the main app in order to ensure loose coupling.
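
The rough shape I'm working towards looks something like this; the class name, fields, and endpoint below are placeholders rather than the real service's API:

    require "net/http"
    require "uri"
    require "rexml/document"

    # A rough sketch of the wrapper's shape, not the real integration.
    # Building the XML, shipping it, and interpreting the response are
    # kept in separate methods so each piece stays small and testable.
    class CheckPaymentGateway
      def initialize(endpoint)
        @endpoint = URI(endpoint)
      end

      def submit(payment)
        http = Net::HTTP.new(@endpoint.host, @endpoint.port)
        request = Net::HTTP::Post.new(@endpoint.path, "Content-Type" => "text/xml")
        request.body = build_xml(payment)
        parse_response(http.request(request).body)
      end

      private

      def build_xml(payment)
        doc = REXML::Document.new
        check = doc.add_element("check")
        check.add_element("routing").text = payment[:routing]
        check.add_element("account").text = payment[:account]
        check.add_element("amount").text  = payment[:amount].to_s
        doc.to_s
      end

      def parse_response(body)
        doc = REXML::Document.new(body)
        status = REXML::XPath.first(doc, "//status")
        { status: status && status.text }
      end
    end

Keeping the XML building, the HTTP call, and the response parsing apart is also what makes it easy to point the wrapper at a sandbox endpoint for testing.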

I generated integration tests as I built. Thankfully, the service offers a sandbox server to test against. This meant that as I built the functionality I could fill in integration-test steps up to and including the interaction with the sandbox server.

Sunday, February 26, 2012

(Retro) Day In Review

I've been without internet this weekend since I moved to a new apartment and have yet to have the cable guy visit. Before I start I need to give a quick shoutout to the Starbucks on Broadway and Sheridan for their wifi :) .

Friday morning I put some more time into the new internal 8th Light application I have started. The start of this project is exciting because there are many new ideas that run counter to canonical Rails development. I've enjoyed playing with some ideas and letting them fall into place through a red/green/refactor cycle. Some of the abstractions took two or three attempts to feel right, but I am starting to finish up some of the beginning features and pass acceptance tests. Moving forward I will be finishing the first iteration's features and taking a little bit of time to put a polish on the UI so it is not bland HTML.

Friday evening I had the opportunity to visit the University of Chicago for a hack night. It was a great experience because two other 8th Lighters and I were able to share ideas with the students. I enjoy any opportunity where both parties are excited to share ideas both ways, and this was one such opportunity.

Thursday, February 23, 2012

Day In Review

I continued working on the interactions between columns on the story board. There are many server- and client-side modules that all serve these interactions, so the change I want to make, alongside the decoupling I am doing as I go, is getting tough. Fortunately, I was able to leverage the power of prototypes to add metadata to objects that are subclasses of Backbone objects. This means that I can retain the inheritance and its benefits while attaching metadata that lives outside of that child-parent object structure.

I also paired with Paul on an autocomplete feature. We ran into some walls in development, but I believe we were on the right track. One takeaway I had was to select your dependencies carefully. If you include a dependency that is itself a collection of dependencies, don't then include a single dependency that clobbers what exists in the collection. It's important to be cognizant of the ramifications of included dependencies.

Wednesday, February 22, 2012

Day In Review

Interaction between story board columns is rough, and I am beginning to think that the architecture is going to need to change for the story board Backbone views. Instead of having five story collections, everything will be condensed into one collection that each of the five column views shares. Then we can attach some metadata (that will not synchronize with the server) to each story and let the column views use the metadata to decide which stories to accept and reject when rendering. Although I liked the five ignorant collections, one per view, that approach is not useful for multi-view interaction, and every solution I have spiked so far has felt very hacky.

I also planned the beginning iteration for Abbacus today. I spoke with craftsmen and apprentices about the architecture of the application and what they thought would be good practice when starting.

Monday, February 20, 2012

Decision Deferment

We have all seen the carnival ride where the riders are in a circle and the ride spins them around incessantly. The riders see the same distorted, blurred images flash past them over and over again. By the end of the ride the images are familiar, even in their blurred state. Eventually, the experience takes its toll and the riders begin to feel nauseous. Some riders feel this discomfort more than others, but all riders are aware of the feeling. Code can feel this way as well. We pass one test only to see another test fail. We type quickly and powerfully until the test is green. We run the suite and something else has broken. It begins to feel like a maddening experience where the test passes and failures blur together yet we can clearly distinguish a circular pattern. This is a smell and the smell should make us nauseous. It smells of needless complexity. We are hacking our way blindly through the code trying to pin down the one correct answer to the problem while leaving a tightly-coupled, complex, and unreadable wake. Code does not have to be this way. When we build our application out from the top down and defer hard decisions we can let the complexity come to us in small, manageable waves.

The top down development approach tells us that we can, and should, start at the top level of our application. We should express our intent in high level algorithms and relationships that are simple and easy to understand. If there is a hard high level decision to be made, we ought to make it now independent of the lower levels of implementation. We simply write low level interactions as if they existed in the system so that we are uninterrupted in the task at hand. When we feel comfortable with our design we move one level deeper. We view the current state and make decisions, once again, as if lower level implementation were available. Eventually, we reach the bottom and the implementation is agnostic to the higher levels. It simply does what it is told and we have avoided complex coupling between layers of abstraction.

Part of getting the top down approach right is to simply write what we say. If we have employees in our system and we want to aggregate the names of all of our employees we simply do that, exactly as if we were stating it verbally.
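
In Ruby that might look something like the following (the method names are mine, and all_employees is deliberately left undefined for now):

    # Exactly what we said, and nothing more. all_employees is intentionally
    # not written yet -- that is the next level down.
    def employee_names
      all_employees.collect { |employee| employee.name }
    end
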
What is all_employees? It might not even be in the system! Right, it might not be, but we need it to get the names of all employees. Getting all the employees from some data store or central location is a detail that is outside of this algorithm's scope. Once we have finished with this algorithm we can move downward and collect all employees, again just translating spoken features into code. Coupling the idea of doing what you say with top down development is a powerful remedy to the typical complexity of systems.

If we have taken these two approaches to system building we have set ourselves up for the benefits of decision deferment. Why decide something before it is absolutely essential? After all, if we make the decision later we will be better informed about our system since more of the system will be built out. As the system falls into place we begin to find patterns. These patterns drive abstractions. As our system takes shape these abstractions allow additions to become simpler and better managed. Deferring decisions allows them to seemingly fall into place or, at the very least, helps pin down a solution set for our problem. For example, I was recently pairing on a 'like' algorithm for a music playing application. If users liked a song, it would play more often. My pair and I spent a long time figuring out important ratios, playing with variables to establish good weights, etc. In the end, we sat down to code and realized that what we really needed was a workflow for the user to like a song and for the system to record and aggregate the user's likes. Thankfully, we did not dive straight into hacking up some weighting algorithm for an attribute we were not yet familiar with. Instead, we built the like aggregation first and moved on from there.

When we start to fall into the trap of chasing a solution we leave a trail of nested, nasty garbage in our code. This garbage is a smell, and we should avoid the chore of cleaning it up later if at all possible. Top down development and writing what we say allow us to realize the opportunities of decision deferment. Remember, next time you feel as if you are on a spinning ride of repeated failures, to step back and avoid needless complexity.

Day In Review

I was able to get the Backbone storyboard into place today. It was a lengthy process of trial and error and tuning what was already in existence. I plan on refactoring some of the rougher edges in order to make the code more robust and readable. I also plan on taking on the merge to get this branch back into master fairly soon. From what I understand there are many conflicts to fix since this branch is fairly old and ~50 commits behind master. I've also been rough-drafting and thinking about my upcoming technical blog post.

Saturday, February 18, 2012

Day In Review

I spent the morning moving portions of the storyboard over to Backbone. Some of the behavioral responsibilities will be staying in their original storyboard classes for the time being since they are scattered responsibilities without a home in the new structure. Moving forward I hope to find the right way to express these behaviors within the confines of the Backbone architecture.

In the afternoon I was able to pair on the jukebox project during my waza time. I'm hoping to implement a 'like' feature (that term may change). When users like a song it will change the weight used when selecting songs. Currently, there are some fairness rules about picking songs between different users, but I plan on disregarding those in favor of democratic song selection. The feature has two tasks. First, we need to set up the app to collect likes per song, limiting likes to one per user per song. Second, we need to develop an algorithm for weighting the random song selection.

When thinking about the weighting algorithm it became apparent that the ratio of the number of users in the system to the number of songs should drive some kind of sensitivity in the weights. I also liked the idea of using that ratio in tandem with a logarithmic selection curve, meaning that a song's frequency rises quickly when it first receives likes but then levels off as likes accumulate. Fortunately, I've deferred the algorithm's implementation for the time being because the system first needs to have a like aggregation workflow in place.
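
Just to capture the shape of the idea for later, here is a napkin sketch, not anything implemented in the jukebox; the function name and constants are made up:

    # A song's weight rises quickly with its first few likes, then levels off,
    # and the users-to-songs ratio scales how sensitive the weight is to likes.
    def song_weight(likes, user_count, song_count)
      sensitivity = user_count.to_f / song_count
      1 + sensitivity * Math.log(1 + likes)
    end

    song_weight(0, 10, 100)   # => 1.0 (no likes, baseline weight)
    song_weight(1, 10, 100)   # ~> 1.07
    song_weight(10, 10, 100)  # ~> 1.24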

Friday, February 17, 2012

Day In Review

The Backbonification (the term of choice at 8th Light) of the Artisan Story Board has been interesting so far. I'm enjoying the opportunity to approach the work I have to do in an iterative, modular fashion. As I finish tasks I can start to put them into place independently of the work that is still yet to be done. This is afforded to me by the structure of Backbone.js, which I'm finding to be a really nice library to work with.

I'm also enjoying the opportunity to demystify the storyboard, to an extent. Backbone allows the responsibilities to fall into place naturally and it works out nicely. On one occasion I had a little trouble finding the correct way to move forward. I had lots of redundancy between two views, namely the view of a story in the backlog and the view of a story on the story board. I first approached the problem with inheritance, but I didn't like my solution. The base class was useless; it was a view without a render method. I then moved away from inheritance and tried to use the Builder pattern to construct the common view and let each individual view work from there. I backed away from this idea because it didn't feel natural. Ultimately, I went back to inheritance, but this time the base class implements the render algorithm and defers certain data plugins to the child classes. I also forced the base class to be abstract (not truly possible in JavaScript) by throwing an exception when the base class is instantiated directly. I like the new design and I feel as though it's a good pattern moving forward.

Wednesday, February 15, 2012

Day In Review

I completed the iteration dedicated to my HTTP Server. During my iteration meeting Paul and I discussed the one logical error in the architecture. Instead of relying on one Client Implementation to interface with an application, we should interface with one to many Clients and allow the routes file to specify which client a verb + path route will hit. This made sense because Rails works in very much the same way (Rails, however, has a little more leeway since duck typing allows routes to hit an arbitrary method on a controller, whereas I am using the Command Pattern to interact with a Client interface).

This makes the wiring a little trickier because instead of the user specifying the single Client to use, the server must now find all Client Implementations within the Application's jar file. This is, however, possible since Java reflection offers the ability to ask a class about its base classes and interfaces. I plan on revisiting this in my free time, hopefully this weekend.

Moving forward I am going to be working on moving the Artisan Story Board to Backbone.

Monday, February 13, 2012

Day In Review

I have a 1.0 version of my HTTP server available for use. You can find it and download the .jar on my github. I feel fairly pleased with how the server has turned out. I was able to put together a simple Client app to pass all of the cob_spec acceptance tests. I also benchmarked my server, and it was able to handle 10,000 requests in ~16 seconds. I hope to improve on this performance moving forward by tweaking a couple of implementations. I also hope to introduce relative paths for the command line arguments because the absolute paths are a bit ridiculous to pass in to start the server.

I was able to use reflection to reference the Client Implementation. This turned out to be simpler than I had imagined; it involved loading a jar and then referencing the class in its specific package. I ran into one minor bug with the data passed in PUT and POST requests. I fixed my 'Analyst' to handle the edge cases and now feel confident after passing the cob_spec.

Sunday, February 12, 2012

Acceptance Testing For Student Projects

As a college student majoring in Computer Science one can expect multiple short to medium length projects per class. Almost all (almost being 99.99%) students will, at some point, feel as though they have finished an assignment only to find that during their last sprint towards completion they had broken a feature. We all know the drill: we think we have finished and start checking our input and output against what is required. Then we notice that some tiny detail is now broken. When this would happen to me, and it almost always did, I would have a feeling of despair sink into my stomach. If it was the night before the project was due I would put on a pot of coffee and prepare for the proverbial all-nighter. This does not have to be the way college projects go. In fact it should not be, and that is because all programmers have acceptance testing at their disposal.

Acceptance tests "act to verify that the system is behaving as the customers have specified" (1). What does this mean to a student, since they have no customers? Think of it this way: as a student you must approach assignments as if they were your job, as if your financial stability and your reputation were at stake. Your teacher is your customer. You want to make sure that your customer is happy, and for that reason you want acceptance tests to assure their happiness. In fact, how can you assure yourself the highest grade possible other than by proving that you deserve the highest grade possible? How can you assure your customers' happiness other than by proving that their desires are fulfilled?

Of course you want the highest grade possible, and of course you want to avoid all the terrible emotions and time that go into fixing a project that had previously been working. The first step towards school project zen is to pick an acceptance testing framework. Then begin to translate the requirements into high level system scenarios. Think to yourself: how will the user use the system? Let's look at a quick example from when I was in school. One of my projects involved writing a Chained Hash Map implementation. We needed to be able to add, delete, and lookup within the Hash Map. Excellent. Here is an example of high level design.
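
Here's the flavor of it, written as plain Ruby checks rather than in any particular framework's scenario syntax; ChainedHashMap is whatever class the assignment calls for:

    map = ChainedHashMap.new

    # Scenario: adding a key makes it retrievable.
    map.add("name", "Kelly")
    raise "add failed" unless map.lookup("name") == "Kelly"

    # Scenario: deleting a key removes it.
    map.delete("name")
    raise "delete failed" unless map.lookup("name").nil?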


I write these first since these are the features I need to have implemented. As I code I am actively working towards completing these high level goals one step at a time. In fact, since this is a chained hash map, let's write one more test because we thought ahead.
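
Continuing with the same map, something along these lines; in practice you would pick two keys you know hash to the same bucket:

    # Scenario: two keys that collide into the same bucket both survive.
    map.add("key_a", 1)
    map.add("key_b", 2)
    raise "collision failed" unless map.lookup("key_a") == 1
    raise "collision failed" unless map.lookup("key_b") == 2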


I start at the top and work my way down. Notice how these steps are agnostic to implementation. They point only towards satisfying the client, not towards low level implementation. As I progress I write in the steps to fulfill the scenarios I have laid out. Best of all, as I move from one feature to the next I can be sure that I have not broken previously working functionality. When I'm done I have assured my project's completion and my professor's (customer's) happiness. Be sure to read up on your framework because each specifies its scenarios differently. Also, each framework glues its scenarios to its test code differently.

The beauty of acceptance testing is the quick warnings it can provide. When you are working and come to a stopping point, just run your acceptance tests. If you have a failure, see if it can be resolved quickly. If it cannot, at least you now know (ahead of time) that the path you were heading down is the wrong path for completing your project. Acceptance tests offer a red light to bad coding paths and a green light to good paths and, ultimately, project completion. Next time you receive a programming project, take the time to write up acceptance tests; your future self will be thankful that you did.

1) Martin, Robert C. Agile Software Development. New Jersey: Pearson Education, 2003. Pg 13.

Thursday, February 9, 2012

Day In Review

Today, my server and tic-tac-toe game went their separate ways. My Server received the name HT(TPS) Report, which is a reference to the most excellent movie Office Space. My tic-tac-toe game is now a single jar, and the TicTacToeClient, which implements the Client interface of my server, now lives inside that jar. That means that applications are able to use my server as a dependency and register themselves with the server.

I also have text file routes now. The user can specify a routes file and pass that in on server initialization. The end of the day involved me getting my head around the next feature. I will be allowing the user to specify a jar with a package that contains their Client Implementations and pass that, as a command line parameter, to the Server on initialization. I will then be using reflection to get the Class and make instances of that Class when necessary.

I also added command line arguments with JArgs. I highly recommend JArgs, it's lightweight and very effective.

Day In Review

I feel as though the design of my server is starting to pay off. After I did some very useful refactorings to my server it started to feel flexible and easily extensible. I was able to add a DB Persistor fairly easily. Implementing the DB Persistor was another story.

I first thought that I wanted to serialize my objects into strings to avoid any type problems in the database. There is an awesome library named Gson which serializes Java objects into JSON. The user specifies the types expected upon deserialization and it works like a charm. There was one rough spot, however, and that is with interfaces. If you serialize an object nested with references to interfaces then deserialization becomes hell. The deserializer does not know how to recreate the concrete implementation of the interface since the deserialization type only contains the nested interfaces.

In the end, I avoided that mess by making my database hold references to Object. I then type-casted when pulling references back out and it worked no problem.

Tuesday, February 7, 2012

Day In Review

In the morning I continued working on my HTTP server, working towards finishing the large story of my iteration. I had everything working except for one pesky little bug, which in turn stopped me from completing my iteration. Of course, after my iteration I found the problem and had everything working. When I was reading the input stream of an incoming HTTP packet I had an if statement that asked if the input stream was ready to be read. This would fail roughly 1 in 100 times, which allowed it to go unnoticed for a long period of time (and still let me pass my acceptance test). In the end I added a while loop that sits and waits until the input stream is ready to be read.

In the afternoon I had my IPM with Paul. I made my stories smaller in scope and in points in order to avoid missing an iteration again. I will also be narrowing the scope of my recent blog post 'Students, Take Note.' The new post will focus on acceptance testing for small school projects.

Monday, February 6, 2012

(Retro) Day In Review

On Friday I spent the morning working on my HTTP Server. After 8th Light University, we had a discussion about what the University events should look like moving forward. It felt like a productive meeting about the changing style of the event as more and more attendees show up. We also had a quick meeting for Artisan.

Afterwards, I took a look at the Jukebox code base in an attempt to implement a new feature. On the to-do list was a feature request for a hammertime to pause the system upon completion. The idea is that if a standup hammertime plays, the users will not want the jukebox to continue afterwards so it can stay quiet for standup. I plan to revisit the little work I did on this when I have some free time.

Students, Take Note

After graduating college and becoming exposed to the large array of ideas in industry, I started to reflect on what it would have meant to have had exposure to these ideas while in school. I wonder how much time and frustration I could have avoided with three specific disciplines I have since only begun to understand. In all fairness, I did not deliberately avoid them; I simply was not actively seeking them out. I suspect most Computer Science related majors are in the same position. These ideas are not necessary to graduate. They are not even necessary to do well; school is not designed in this way. They are, however, necessary for good design and programmer sanity. What ideas am I speaking of? Acceptance Testing, Test-Driven Development, and the Single Responsibility Principle.

I put acceptance testing first in the list for a reason. Acceptance testing is a little bit harder to grasp than test-driven development, and this is probably due to acceptance testing being at a higher level. Acceptance tests "act to verify that the system is behaving as the customers have specified" (1). What does this mean to a student, since they have no customers? Think of it this way: as a student, you must approach assignments as if they were your job, as if your financial stability and your reputation were at stake. Your teacher is your customer. You want to make sure your customer is happy, and for that reason you want acceptance tests to assure their happiness. In fact, how can you assure yourself the highest grade possible other than by proving that you deserve the highest grade possible? How can you assure your customers' happiness other than by proving that their desires are fulfilled?

Of course you are saying, 'well duh, I want to get the highest grade possible.' Great. Pick an acceptance testing framework suitable to your situation. Then begin to translate the requirements into high level system scenarios. Think to yourself: how will the user use the system? Let's look at a quick example from when I was in school. One of my projects involved writing a Chained Hash Map implementation. We needed to be able to add, delete, and lookup within the Hash Map. Excellent. Here is an example of high level design.
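
For instance, as plain Ruby checks (rather than any one framework's scenario format), with ChainedHashMap standing in for whatever the assignment asks you to build:

    hash_map = ChainedHashMap.new

    # Adding a key makes it retrievable.
    hash_map.add("course", "CS 101")
    raise "add is broken" unless hash_map.lookup("course") == "CS 101"

    # Deleting a key removes it.
    hash_map.delete("course")
    raise "delete is broken" unless hash_map.lookup("course").nil?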



I write these first since these are the features I need to have implemented. As I develop I am actively working towards completing these high level goals one step at a time. In fact, since this is a chained hash map, let's write one more test because we thought ahead.
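
For example, using the same hash_map and picking two keys that you expect to land in the same bucket:

    # Colliding keys should both survive -- that's the 'chained' part.
    hash_map.add("key_a", 1)
    hash_map.add("key_b", 2)
    raise "collisions are broken" unless hash_map.lookup("key_a") == 1
    raise "collisions are broken" unless hash_map.lookup("key_b") == 2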



I start at the top and work my way down. Notice how these steps are agnostic to implementation. They point only towards satisfying the client, not towards low level implementation. As I progress I write in the steps to fulfill the scenarios I have laid out. Best of all, as I move from one feature to the next I can be sure that I have not broken previously working functionality. When I'm done I have assured my project's completion and my professor's (customer's) happiness. Be sure to read up on your framework because each specifies its scenarios differently. Also, each framework glues its scenarios to its test code differently.

Throughout school I can remember having the feeling of being finished with an assignment only to begin my manual testing before submission. I would click around, type some input, read back the output, and assure myself that it was correct. Then I would have this feeling of horror in my stomach when a previous feature now failed. My stress level would instantly rise and I would spend an inordinate amount of time in a loop of fixing a feature only to find out, with manual testing again, that I had broken my program somewhere else. Once one gets into the habit of acceptance testing that loop changes drastically, and for the better! The stress of breaking some far-off part of the system is mitigated by instantaneous feedback when it happens. We instantly know if some implementation of a feature was done incorrectly because our test will tell us.

The next idea I wish I had known about in college was Test-Driven Development (TDD). This is testing at a lower level than acceptance testing. TDD is the process of testing each individual modular piece of code in the system. In fact, it's not just testing the code, but testing the code before it is written. Do not worry, it's not as bizarre as it sounds. When I begin to write a new class or a new method, I first write a failing unit test. I specify, somewhat like an acceptance test's scenario, what the module should do. The unit test is agnostic to implementation; it just checks to make sure that the implementation works. I watch the test fail. Then I implement. If my first try does not work exactly as I intended, I immediately receive feedback on what went wrong. Why is this a good idea? Where acceptance testing ensures the happiness of your client, test-driven development ensures the happiness of yourself and your group. If my test passes then I am assured that this modular piece of code conforms to my automated test's standards. Writing a test first forces the implementation that passes the test to be flexible enough to run independently in a series of unit tests. This flexibility goes a long way. In fact, when I write unit tests I don't expect the flexibility to pay off initially. I expect it to pay off over time and in ways I cannot yet imagine.
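
As a tiny illustration of the rhythm in Ruby (the Stack class here is made up just for the example):

    require "minitest/autorun"

    # Step one: write the test for behavior that does not exist yet and
    # watch it fail.
    class StackTest < Minitest::Test
      def test_pop_returns_the_last_pushed_value
        stack = Stack.new
        stack.push(5)
        assert_equal 5, stack.pop
      end
    end

    # Step two: write just enough implementation to make the test pass.
    class Stack
      def initialize
        @items = []
      end

      def push(item)
        @items.push(item)
      end

      def pop
        @items.pop
      end
    end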

I might be speaking for myself, but I can remember writing monster functions as a student. When I changed one little line in a twenty line function, it would break one case while leaving all of the others working. The changes always seemed simple enough, but they never were. If I had written tests I would immediately have known what had broken and where. A suite of unit tests is excellent at pinpointing where the code changes went wrong. Couple unit tests with acceptance tests and the software's reliability and flexibility increase, and as reliability and flexibility increase, your stress level goes down since the development process starts to contain fewer bugs and fewer surprises.

The Single Responsibility Principle (SRP) should be thought of as going hand in hand with TDD. SRP is part of the SOLID principles, but I want to talk about this one principle in isolation. As a student, one problem I had with my code, which I can only now reflect on, was large, unwieldy methods and classes with far too many responsibilities. It is important to let TDD drive SRP in your code. When you write a new test, ask yourself: is this testing one responsibility? If not, how can I rethink my design to do so? The flexibility afforded by TDD is multiplied by SRP. When we divide responsibilities fairly, one per module, we do not hide side effects in our code. We handle our use cases explicitly and purposefully. When we do not hide side effects we do not have surprises lying in wait. When we use TDD we can point to a single responsibility that has broken down in our system and know with confidence that the broken test is only doing one thing; it only has one responsibility.

SRP is another tenet of flexible design. When we break apart our responsibilities we allow ourselves to change any one use case rather easily. We isolate the responsibility to change. We find our test for the responsibility and change the assertions to match our new use case. We then watch as the test fails. We find the corresponding module to change. We know that the use case is safe to change because it is isolated. We pass our test. We run our unit test suite and watch it pass. We run our acceptance test suite and watch it pass. All the while we can think about the surprises and stress we have avoided.

1) Martin, Robert C. Agile Software Development. New Jersey: Pearson Education, 2003. Pg 13.

Day In Review

The day started with the bi-weekly Hosemonster IPM. Thankfully, the iteration is slow enough and laid back enough for us apprentices to take part and have the opportunity to learn. It's nice to take time to learn about process and good code while simultaneously dealing with a live client. I enjoy the feedback of an IPM because the direction of the application comes forward while, at the same time, the client and team work out the details of the big picture. It's nice to see both of those perspectives come together at one time.

After the IPM I attacked a bug in the PDF generation for Work Orders. When the app split into two apps (Hosemonster and Limelight-Hosemonster-UI) there was a weird break in the templating for the PDF generator. I felt accomplished when I tracked the bug down because I was able to restore the previous functionality and remove a superfluous function from the namespace. The solution felt cleaner and I was proud of that.

I also paired with Wei Lee on a bit of refactoring in preparation for plugging in the graphs we had been working on for quite some time.

Friday, February 3, 2012

Day In Review

My HTTP Server's refactoring has been going fairly well. Currently, the Packet and PacketParser classes are messy and I am slowly refactoring their behavior. I'm also pulling the responsibility of generating the return string of a packet out from the Packet class. I plan to let the Packet be only a hash of HTTP packet attributes. I will then use the presenter pattern and make a Packet Presenter to generate the outgoing packet.

I've also run into some trouble. Before, I kept my in memory data store on the top level of the server. Now, I need to find a different way to persist the state of Tic-Tac-Toe games since the top level is agnostic to the implementation and can no longer hold the data store. I've been thinking over the possibilities, but it's hard to choose something to refactor towards. Once I have my Packet workflow refactored I will be tackling this decision.

Wednesday, February 1, 2012

Day In Review

I completed my second apprenticeship iteration today and started a new one. This week I will be contributing three points to Hosemonster, and the remaining time will be spent refactoring my HTTP server from last summer. Currently, the server is integrated with my Tic-Tac-Toe game. I had thought that the two were heavily intertwined, but I'm finding that not to be the case. I'm missing a few key server abstractions, but the refactoring is going fairly smoothly after one day.

One atrocity of the code base was the state of my tests. I ran the tests when I first pulled the project and they froze. Why did they freeze? They were waiting for STDIN input to move forward with specific tests. This was no good, so I started hand-rolling mocks and fixing the state of my tests. After I felt comfortable with the tests in place I moved towards refactoring.

I've been thinking about a way of metaphorically naming my server components at a high level. One idea I like is to use business metaphors, as if the server were an office. The forward-facing ServerSocket piece would be the 'Receptionist.' The receptionist would pass the socket off to a 'Middleman,' which I was originally calling the Dispatcher. The middleman would then pass off to the 'CEO.' The CEO metaphor probably won't stick; I don't like it. The CEO is currently my working name for the high level interface to the business logic that is ignorant of the framework. The naming scheme is a work in progress.