Monday, March 19, 2012

The Interface Segregation Principle in Dynamically Typed Languages

When I first heard that 'duck-typing is the interface' it only meant one thing to me; it meant that I could not explicitly use an interface. The ramifications of this were not immediately apparent, but what did that matter? I had duck-typing! Later on I started to understand what it meant to have implicit interfaces. Only after I had a few 'fix this forward-facing class and watch everything break' refactorings did it start to sink in. I needed to treat certain classes and modules differently than the rest of my code. I needed to create pieces that I could depend on. I needed pieces that were static because they encapsulated ideas that should not change often. I wanted to switch the implementation at will, and the only way I was going to accomplish this was to really understand that duck-typing is the interface.
When I say we create a class to depend on, I mean a couple of things. First, it means that the class is for the client and not for the implementer. This is often said of interfaces in a statically typed language, so of course it holds true for an implicit interface. An example would be any gem worth using (pick your favorite). They provide a forward-facing interface that is not meant to change rapidly and does not change for weak reasons. Gems are to be used by clients and are meant to be predictable with each release. Imagine if you had to rewrite your code with each gem version because the implementer had a new idea for code arrangement. You would not do it; you would stick with the version you were previously using, missing out on the new tweaks and features. For this reason the interface we provide in dynamically typed languages needs to be static and only change for a very well thought out reason. This way the client can have high confidence in their expectations of the code. Remember that pulling a method out from under a client in a dynamically typed language causes runtime exceptions (although they ought to be testing).

What if the implicit interface being provided is starting to feel bloated and too big? It started out as one coherent idea but is now many fragmented ideas. Then it is time to break it up into smaller interfaces. We do not want to sacrifice the dependability, but we do want to bring our classes and modules back to a coherent state. It is then time to refactor to smaller classes; however, it is important to keep the client of the interfaces in mind. It is entirely possible for classes to 'implement' two or more interfaces, keeping in mind that our implicit interfaces are forward-facing for the client's use. Moving forward, however, it is important to remember that we have two or more ideas being implemented in this module and that these ideas do not always need to be implemented together.

When we provide a public API to our code we allow ourselves to change the implementation at will. What if, for example, we provide a class that talks to an external service? Behind this interface we manipulate the data we receive from the service and return only what is necessary to the client. We expose this to our client through an implicit interface. Then, one day, the backend service changes completely. We can still receive and use the same data, but how we retrieve it is completely different. So, of course, the implementation changes. But the client of our interface never has to know about the implementation changing. We can pass a completely different class around, but since it conforms to the interface the client never knows, and frankly, never needs to. The client never notices a disruption in service because the client's expectations of the interface were always met.
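
An illustration of that idea might look like the following. All of the names here are hypothetical, and both service classes conform to the same implicit interface: they respond to current_temperature. The client neither knows nor cares which one it holds.

```ruby
# Two implementations of the same implicit interface. Each would, in a real
# system, talk to a different backend service; the hard-coded values below
# stand in for parsed service responses.
class LegacyWeatherService
  def current_temperature
    # imagine parsing the old service's XML payload here...
    72
  end
end

class NewWeatherService
  def current_temperature
    # imagine parsing the new service's JSON payload here...
    72
  end
end

# The client depends only on the implicit interface, never the implementation.
class Forecast
  def initialize(service)
    @service = service
  end

  def report
    "It is currently #{@service.current_temperature} degrees"
  end
end
```

Swapping LegacyWeatherService for NewWeatherService leaves Forecast untouched; its expectations of the interface are still met.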

Uncle Bob writes about a copy program when discussing the Dependency Inversion Principle. I think it is a great read and it really helped me understand what it meant to invert dependencies when I was still struggling with the idea. What I want to highlight is not really the dependency inversion, but the interface segregation that also takes place. I'll rewrite the system in Ruby to show what I mean.
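
A minimal sketch of the system, with class names of my own choosing. The copy method knows only the implicit interfaces: readers respond to read, writers respond to write.

```ruby
# Reads lines from standard input.
class KeyboardReader
  def read
    gets
  end
end

# Writes lines to standard output.
class PrinterWriter
  def write(line)
    puts line
  end
end

# The client: copies everything the reader produces to the writer.
# It depends only on the implicit Reader and Writer interfaces.
def copy(reader, writer)
  while (line = reader.read)
    writer.write(line)
  end
end

# copy(KeyboardReader.new, PrinterWriter.new)
```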


When we look at this it looks trivial. Why not just call gets and puts directly instead of putting them in a class? Well, let's see what this looks like when the implementations change.
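
A sketch of the file-backed implementations (the file-handling details are my own):

```ruby
# Reads lines from a file instead of standard input. Same implicit interface:
# it responds to read.
class FileReader
  def initialize(path)
    @file = File.open(path, "r")
  end

  def read
    @file.gets
  end

  def close
    @file.close
  end
end

# Writes lines to a file instead of standard output. Same implicit interface:
# it responds to write.
class FileWriter
  def initialize(path)
    @file = File.open(path, "w")
  end

  def write(line)
    @file.puts(line)
  end

  def close
    @file.close
  end
end

# copy(FileReader.new("source.txt"), FileWriter.new("destination.txt"))
```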


We want the client to use the file read and write instead of the standard IO read and write that we were using before. Notice that the client's implementation (the copy method) did not have to change at all. This is only possible because we hid both implementations behind the Reader and Writer interfaces respectively. This is the power of thinking with interfaces in a dynamically typed language.

Although it is not explicitly declared, it is important to remember the Interface Segregation Principle. With ISP we can create dependable interfaces for our clients. We can also switch implementations at will without having to wait and see what breaks. These are powerful tools and we ought to use them in our dynamic languages. Even if interfaces are not explicit we still have access to the idea.

Wednesday, March 7, 2012

Day In Review

We started the morning off with an iteration planning meeting. I was able to demo the nearly complete workflow which was good. After the meeting I was able to pin down the workflow, with all acceptance tests, integration tests, regression tests, and unit tests for the feature around the payments. I was able to delineate between system errors (exceptions) and user input errors (sad paths that lead to error messages) which is good because it allows the user interface to provide a feedback loop for the user while blowing up loudly if the system does not work as expected.

I was also able to pair with Brian today and do some really nice refactorings. The controller action for payment went from 19 lines to 7 lines, which was great. We were also able to build a validations module for the input in order to move some of that responsibility away from the controller. I started working with the UI and plan on continuing UI work; hopefully I will be finished before lunch tomorrow.

Tuesday, March 6, 2012

Day In Review

Today I put a workflow in place to go end to end and back with a payment. It felt good since this story was so large and took so long to accomplish. I am not, however, finished and I still have a good amount of refactoring to complete. There are a few abstractions left to make this code feel nice and clean. Specifically, my controller action feels very procedural and it should be delegating a lot of what it does to other modules. Once I refactor and tighten up the workflow (i.e., make the UI feel less clunky) then I will be able to call this done.

I feel happy with my code so far since it puts in place modules that will be able to support all kinds of future (and unknown) transactions. That means that the development time for future types of payments should be shortened. We have our IPM tomorrow morning and I plan on demoing what I have available at this time.

Monday, March 5, 2012

Day In Review

This morning was spent pulling a gem from a git repository and getting it to work within the current project. Specifically, the gem was the payment API wrapper that I had been working on recently. We need to pull this gem from a private git repository since the code is not intended to be open sourced. Once I had the gem pulled in, its dependencies resolved, and everything working in the current environment, I started writing a workflow to accommodate the gem.

When a user submits their banking information it will hit a payments controller. This controller will pass off to a payments interactor that will send a call to the payment API. Once it receives a response it will create a Transaction object to encapsulate all of the important payment data. This data will then be persisted for future use and reference.

Sunday, March 4, 2012

Hot Swapping In Java

When one thinks of Java they might think of monolithic systems with components that are compiled together and linked against each other. A typical class might import some element from other packages (including packages in jars), but all of those files, packages, and jars are known at compilation time. This is largely true; however, it is not set in stone. The Java ClassLoader is the magic responsible for pulling .class files into the current virtual machine's environment for use. By overriding the ClassLoader it is possible to change the available .class files at run time.

Overriding the ClassLoader is non-trivial, but fear not because there are open source libraries to accomplish this hotness for us. The one I will be using today can be found here. I'll present the code first and then we will go over a few points.


The interface Message and its two implementers are simple enough. It allows us to call a single method on each instance we load dynamically.

The main method will sit in a loop and wait for stdin input. When the main method is first invoked it takes the base path containing .java and .class files. This base path can even be dynamically changed with the library we are using, but I will avoid that for now. The main method will take input, inspect it, and call the corresponding method in Example.

Example is where we use the library to hot swap our Message implementation. We hold one Class object named impl. Java Class objects hold information on a compiled class and allow us to make new instances of that compiled class. The Class object can hold any type of compiled Java class, which is useful for the type of reflection we are going to do. The method changeImpl will search the basepath for a match and will then change impl to hold the Class data for what it has found. The method speak uses the Class object to make a new instance and call the method speak, which the class has by implementing Message.

When we run this application we can change between HelloWorld and HelloHotswap and call speak accordingly. The code provided has no knowledge of HelloWorld and HelloHotswap, yet it can make use of them! In fact, if we wrote another Message implementation, compiled it, and put it in the basepath's directory, then we could load our new implementation without restarting the application. When we must have zero downtime this is a very powerful tool in the Java tool belt.

Many open source libraries for dynamic loading exist and they can be useful for situations where we want to add functionality without shutting down our application. The ClassLoader and its underlying connection to the virtual machine allow these libraries to exist. By overriding the ClassLoader we can control the virtual machine's access to .class files. With this type of control we open the door to hot swapping at run time, which is a useful and great dynamic-oriented addition to Java.

(Retro) Day In Review

Friday morning I continued working on ACH payments. The gem is a separate entity, so at this point it's a matter of setting up the workflow that will lead up to the use of the gem. I plan on continuing this into next week and then adding functionality to the gem as the customer requests it.

For waza in the afternoon I worked on jukebox. I started the like aggregation needed for the song like weighting algorithm that I hope to implement sometime in the future. Unfortunately, I stalled on jukebox work because I ran into problems with protocols and defrecords.

Wednesday, February 29, 2012

Day In Review

The Iteration Planning Meeting for the project I am joining happened this morning. It served as a good introduction for me to the app that I will be working on shortly. I was able to see many of the workflows available and familiarize myself with the features that are in the pipeline.

After the iteration meeting I continued working on my payment API wrapper. I finished the basic functionality needed to plug into the app, that is to say it accepts check payments. I plan on starting to include the API wrapper tomorrow.

Tuesday, February 28, 2012

Day In Review

I continued wrapping the payment processing API today. I ran into some issues with my integration tests post-refactoring, which led me down a small rabbit hole. Thankfully, I was able to pull myself back out before I lost too much sanity. Afterwards, it felt good to end the day totally green (specifically with unit and integration tests). Tomorrow, there is an iteration meeting and I am excited to introduce myself to the team by showing some of the functionality I have been working on.

Since the team uses JRuby, I've been running my test suite against JRuby before commits. I would develop in JRuby but it feels too slow for quick feedback loops (MRI, however, is blazingly fast). I feel as though Clojure can evaluate at a quicker pace than JRuby and I'm curious to benchmark this to see if my feeling is true. If it is, it might be worthwhile to investigate the differences and see how each utilizes the JVM (I've been interested in the JVM lately for a variety of reasons).

Monday, February 27, 2012

Ruby's Functional Programming

Ruby is a fully object oriented language and is, in fact, so dedicated to objects that every entity in the language is an object. After hearing that statement it might seem a little weird that Ruby also has functional support. Ruby's functional aspects are powerful and complete, which makes them worthwhile to learn. We can invoke operations such as reduce and filter with the use of code blocks. We can also pass functions around directly with the use of lambda and Proc objects. In this post I'll show you some of the more powerful tricks, point out their potential pitfalls, and document the experience with code to play with.

Each method call can be given an additional code block. If you don't believe me, put a block on the end of every method call that currently does not have one and watch as (almost) everything still works as intended (please don't really do that). I'll introduce code to show this, but first let's see a straightforward imaginary workflow:
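
A sketch of that workflow (Employee and the data set are stand-ins of my own):

```ruby
# Stand-in data: a tiny in-memory "database" of employees.
Employee = Struct.new(:id, :name)

EMPLOYEES = [Employee.new(1, "bob"), Employee.new(2, "alice")]

# Finds an employee by id.
def get_employee(id)
  EMPLOYEES.find { |employee| employee.id == id }
end

# Returns the name attribute of whatever object it is given.
def get_name(employee)
  employee.name
end

get_name(get_employee(1))  # => "bob"
```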

Now get_name just returns the name attribute of an object, as you might guess. If we wanted to use this to get the name of an employee we would first find the employee and then pass it in here as a parameter, as we have done above. Let's show the same workflow, but this time let's allow the code block to give us the name:
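
The same sketch, rewritten so that get_employee yields to a code block and the caller's block pulls out the name:

```ruby
# Stand-in data, as before.
Employee = Struct.new(:id, :name)

EMPLOYEES = [Employee.new(1, "bob"), Employee.new(2, "alice")]

# Finds an employee by id and hands it to the caller's code block.
def get_employee(id)
  employee = EMPLOYEES.find { |e| e.id == id }
  yield employee
end

get_employee(1) { |employee| employee.name }  # => "bob"
```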

Cool right? We can build any employee interaction we want off of get_employee using code blocks. This is an imaginary case, however, and code blocks aren't always the best option so use them wisely.

Code blocks are a part of some of the standard library's methods, which allows us to make use of functional ideas in Ruby code. For example, let's look at an inject that sums all of the elements given, multiplied by two.
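
A one-line sketch of that inject:

```ruby
# Sum the elements, multiplying each by two along the way. No state outside
# the block is read or written.
[1, 2, 3, 4].inject(0) { |sum, element| sum + (element * 2) }  # => 20
```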

These are powerful expressions because the intermediary steps (i.e. the summations and a single element's multiplication by two) are stateless. We simply put data into the inject and a single answer comes out with no side effects. Other such functional actions include (but are not limited to) reduce, collect, and reject. Ruby's Enumerable has lots of functional methods.

The last functional item I want to share is the use of closures. In Ruby we can make use of lambda, Proc, and method(:name) to create closures. They all appear to be very similar, but have subtle differences. For the sake of learning we will ignore the subtle differences and use Proc to explain the concept. Procs are objects that encapsulate executable code. With a Proc we can bundle up some code and pass a Proc object around until we are ready to call it. For a simple example let's look at the following:
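
A tiny sketch of the idea (method names are my own):

```ruby
# Bundle up some code now...
double = Proc.new { |n| n * 2 }

# ...pass the Proc around...
def apply_to_five(callable)
  # ...and call it whenever we are ready.
  callable.call(5)
end

apply_to_five(double)  # => 10
```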

This should feel very similar to the code blocks we had discussed earlier. This is because code blocks are a type of closure! Think of Procs as code blocks that can be held for later use. Let's explore closures a little more:
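
A sketch of the kind of code in question (module and class names are my own):

```ruby
module Foo
  class Bar
  end

  # Returns a Proc that builds a Bar. The Proc is *declared* here, in Foo.
  def self.gen_bar
    Proc.new { Bar.new }
  end
end

module Example
  class Bar
  end

  # Calls whatever Proc it is handed, here inside module Example.
  def self.make_bar(bar_proc)
    bar_proc.call
  end
end

Example.make_bar(Foo.gen_bar)  # builds a Foo::Bar, not an Example::Bar
```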

What happened here? We called the Proc from gen_bar inside module Example, and therefore Bar.new should create a new Example::Bar, right? Wrong! Procs are always evaluated in their declared scope. That means that in this case the Proc was executed in the context of module Foo even though it was called in module Example. This is something to keep in mind as closures are passed between classes and modules.

Functional concepts in Ruby can make coding easier, cleaner, and more expressive. It's important to understand the concepts in order to use them correctly when a problem being faced could use a functional solution.

Day In Review

I began implementing an API wrapper around a payment processing and verification service. The workflow involved building up the required XML, shipping it off, and then making assertions based upon the return values. The code can get overly complicated if one is not careful to separate concerns and build out modular pieces. The code itself is meant to abstract the API away from the main app in order to ensure loose coupling.

I wrote integration tests as I built. Thankfully, the service offers a sandbox server to test against. This meant that as I built the functionality I could fill in steps for the integration test to follow, up to and including the interaction with the sandbox server.

Sunday, February 26, 2012

(Retro) Day In Review

I've been without internet this weekend since I moved to a new apartment and have yet to have the cable guy visit. Before I start I need to give a quick shoutout to the Starbucks on Broadway and Sheridan for their wifi :).

Friday morning I put some more time into the new internal 8th Light application I have started. The start of this project is exciting because there are many new ideas that run counter to canonical Rails development. I've enjoyed playing with some ideas and letting them fall into place with a red/green/refactor cycle. Some of the abstractions took two or three attempts to feel right, but I am starting to finish up some of the beginning features and pass acceptance tests. Moving forward I will be finishing the first iteration's features and taking a little bit of time to put a polish on the UI so it is not bland HTML.

Friday evening I had the opportunity to visit the University of Chicago for a hack night. It was a great experience because two other 8th Lighters and I were able to share ideas with the students. I enjoy any opportunity where both parties are excited to share ideas in both directions, and this was one such opportunity.

Thursday, February 23, 2012

Day In Review

I continued working on the interactions between columns on the story board. There are many server and client side modules that all serve these interactions, so the change I want to make, alongside the decoupling I am doing as I go, is getting tough. Fortunately, I was able to leverage the power of prototypes to add metadata to objects that were subclasses of Backbone objects. This means that I can retain the inheritance and its benefits with attached metadata that lives outside of that child-parent object structure.

I also paired with Paul on an autocomplete feature. We ran into some walls in development, but I believe we were on the right track. One takeaway I had was to select your dependencies carefully. If you include a dependency that is itself a collection of dependencies, don't then include a single dependency that clobbers what exists in the collection. It's important to be cognizant of the ramifications of included dependencies.

Wednesday, February 22, 2012

Day In Review

Interactions between story board columns are rough and I am beginning to think that the architecture is going to need to change for the story board Backbone views. Instead of having five story collections, everything will be condensed into one collection that each of the five column views shares. Then we can attach some metadata (that will not synchronize with the server) to each story and let the column views use the metadata to decide which stories to accept and reject when rendering. Although I liked the five ignorant collections with one per view, it is not useful for multi-view interaction, and every spiked solution I have attempted so far has felt very hacky.

I also planned the beginning iteration for Abbacus today. I spoke with craftsmen and apprentices about the architecture of the application and what they thought would be good practice when starting.

Monday, February 20, 2012

Decision Deferment

We have all seen the carnival ride where the riders are in a circle and the ride spins them around incessantly. The riders see the same distorted, blurred images flash past them over and over again. By the end of the ride the images are familiar, even in their blurred state. Eventually, the experience takes its toll and the riders begin to feel nauseous. Some riders feel this discomfort more than others, but all riders are aware of the feeling. Code can feel this way as well. We pass one test only to see another test fail. We type quickly and powerfully until the test is green. We run the suite and something else has broken. It begins to feel like a maddening experience where the passing and failing tests blur together, yet we can clearly distinguish a circular pattern. This is a smell and the smell should make us nauseous. It smells of needless complexity. We are hacking our way blindly through the code trying to pin down the one correct answer to the problem while leaving a tightly-coupled, complex, and unreadable wake. Code does not have to be this way. When we build our application out from the top down and defer hard decisions we can let the complexity come to us in small, manageable waves.

The top down development approach tells us that we can, and should, start at the top level of our application. We should express our intent in high level algorithms and relationships that are simple and easy to understand. If there is a hard high level decision to be made, we ought to make it now independent of the lower levels of implementation. We simply write low level interactions as if they existed in the system so that we are uninterrupted in the task at hand. When we feel comfortable with our design we move one level deeper. We view the current state and make decisions, once again, as if lower level implementation were available. Eventually, we reach the bottom and the implementation is agnostic to the higher levels. It simply does what it is told and we have avoided complex coupling between layers of abstraction.

Part of getting the top down approach right is to simply write what we say. If we have employees in our system and we want to aggregate the names of all of our employees we simply do that, exactly as if we were stating it verbally.
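
A sketch of saying it in code. Here all_employees is written as if it already existed; the stub below stands in for the lower level we have not yet built.

```ruby
# The top level algorithm: exactly what we would say aloud.
def all_employee_names
  all_employees.map(&:name)
end

# Stand-in lower level. In a real system this would come from a data store
# or some central location, a detail outside the algorithm's scope.
Employee = Struct.new(:name)

def all_employees
  [Employee.new("Ada"), Employee.new("Grace")]
end

all_employee_names  # => ["Ada", "Grace"]
```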
What is all_employees? It might not even be in the system! Right, it might not be, but we need it to get the names of all employees. Getting all the employees from some data store or central location is a detail that is outside of this algorithm's scope. Once we have finished with this algorithm we can move downward and collect all employees, again just translating spoken features into code. Coupling the idea of doing what you say with top down development is a powerful remedy to the typical complexity of systems.

If we have taken these two approaches to system building we have set ourselves up for the benefits of decision deferment. Why decide something before it is absolutely essential? After all, if we make the decision later we will be better informed about our system since more of the system will be built out. As the system falls into place we begin to find patterns. These patterns drive abstractions. As our system takes shape these abstractions allow additions to become simpler and better managed. Deferring decisions allows decisions to seemingly fall into place or, at the very least, helps pin down a solution set to solve our problem. For example, I was recently pairing on a 'like' algorithm for a music playing application. If users like a song it should play more often. My pair and I spent a long time figuring out important ratios, playing with variables to establish good weights, etc. In the end, we sat down to code and realized that what we really needed was a workflow for the user to like a song and for the system to record and aggregate the user's likes. Thankfully, we did not dive straight into hacking up some weighting algorithm for an attribute that we had no familiarity with. Instead, we built the like aggregation first and moved on from there.

When we start to fall into the trap of chasing a solution we leave a trail of nested, nasty garbage in our code. This garbage is a smell and we should avoid the chore of cleaning it up later if at all possible. Top down development and writing what we say allow us to realize the opportunities of decision deferment. Next time you feel as if you are on a spinning ride of failure repetition, remember to step back and avoid needless complexity.

Day In Review

I was able to get the Backbone storyboard into place today. It was a lengthy process of trial and error and tuning what was already in existence. I plan on refactoring some of the rougher edges in order to make the code more robust and readable. I also plan on taking on the merge to get this branch back into master fairly soon. From what I understand there are many conflicts to fix since this branch is fairly old and ~50 commits behind master. I've also been rough drafting and thinking about my upcoming technical blog post.

Saturday, February 18, 2012

Day In Review

I spent the morning moving portions of the storyboard over to Backbone. Some of the behavioral responsibilities will be staying in their original storyboard classes for the time being since they are scattered responsibilities without a home in the new structure. Moving forward I hope to find the right way to express these behaviors within the confines of the Backbone architecture.

In the afternoon I was able to pair on the jukebox project with my waza time. I'm hoping to implement a 'like' (that term may change) feature. When users like a song it will change the weight used when selecting songs. Currently, there are some fairness rules about picking songs between different users, but I plan on disregarding that in favor of democratic song selection. The feature has two tasks. First, we need to set up the app to collect likes per song, limiting a single song like to one per user. Second, we need to develop an algorithm for weighting the random song selection.

When thinking about the weighting algorithm it became apparent that the ratio of number of users in the system to number of songs in the system should invoke some kind of sensitivity on the weights. I also liked the idea of using that ratio in tandem with a logarithmic choice selection, meaning that there is a quick rise in song frequency when first receiving likes, but then the frequency starts to level off with the accumulation of likes. Fortunately, I've deferred the algorithm's implementation for the time being because the system first needs to have a like aggregation workflow in place.

Friday, February 17, 2012

Day In Review

The Backbonification (the term of choice at 8th Light) of the Artisan Story Board has been interesting so far. I'm enjoying the opportunity to approach the work I have to do in an iterative, modular fashion. As I finish tasks I can start to put them into place independently of the work that is still yet to be done. This is afforded to me by the structure of Backbone.js, which I'm finding to be a really nice library to work with.

I'm also enjoying the opportunity to demystify the storyboard, to an extent. Backbone allows the responsibilities to fall into place naturally and it works out nicely. On one occasion I had a little trouble finding the correct way to move forward. I have lots of redundancy in two views, namely a view of a story in the backlog and a view of a story on the story board. I first approached the problem with inheritance, but I didn't like my solution. The base class was useless; it was a view without a render method. I then moved away from inheritance and tried to use the Builder pattern to construct the common view and allow each individual view to work from there. I then backed away from this idea because it didn't feel natural. Ultimately, I went back to inheritance but had the base class implement the render algorithm while deferring certain data plugins to the child classes. I then forced the base class to be abstract (not truly possible in JavaScript) by throwing an exception when the base class is instantiated directly. I like the new design and I feel as though it's a good pattern moving forward.

Wednesday, February 15, 2012

Day In Review

I completed the iteration dedicated to my HTTP server. During my iteration meeting Paul and I discussed the one logical error in the architecture. Instead of relying on one Client implementation to interface with an application, we should interface with one to many Clients and allow the routes file to specify which client a verb + path route will hit. This made sense because Rails works in very much the same way (Rails, however, has a little more leeway since duck typing allows routes to hit an arbitrary method on a controller, whereas I am using the Command pattern to interact with a Client interface).

This makes the wiring a little trickier because instead of the user specifying the single Client to use, the server must now find all Client implementations within the application's jar file. This is, however, possible since Java reflection offers the ability to ask for base classes. I plan on revisiting this in my free time, hopefully this weekend.

Moving forward I am going to be working on moving the Artisan Story Board to Backbone.

Monday, February 13, 2012

Day In Review

I have a 1.0 version of my HTTP server available for use. You can find it and download the .jar on my GitHub. I feel fairly pleased with how the server has turned out. I was able to put together a simple Client app to pass all of the cob_spec acceptance tests. I also benchmarked my server, which handled 10,000 requests in ~16 seconds. I hope to improve on this performance moving forward by tweaking a couple of implementations. I also hope to introduce relative paths for the command line arguments because the absolute paths are a bit ridiculous to pass in to start the server.

I was able to use reflection to reference the Client implementation. This turned out to be simpler than I had imagined; it involved loading a jar and then referencing the class in its specific package. I ran into one minor bug with the data passed in the PUT and POST requests. I fixed my 'Analyst' to handle the edge cases and now feel confident after passing the cob_spec.

Sunday, February 12, 2012

Acceptance Testing For Student Projects

As a college student majoring in Computer Science one can expect multiple short to medium length projects per class. Almost all (almost being 99.99%) students will, at some point, feel as though they have finished an assignment only to find that during their last sprint towards completion they had broken a feature. We all know the drill: we think we have finished and start checking our input and output against what is required. Then we notice that some tiny detail is now broken. When this would happen to me, and it almost always did, a feeling of despair would sink into my stomach. If it was the night before the project was due I would put on a pot of coffee and prepare for the proverbial all-nighter. This does not have to be the way college projects go. In fact it should not be, because all programmers have acceptance testing at their disposal.

Acceptance tests, "act to verify that the system is behaving as the customers have specified" (1). What does this mean to a student, since they have no customers? Think of it this way: as a student you must approach assignments as if it were your job, as if your financial stability and your reputation were at stake. Your teacher is your customer. You want to make sure that your customer is happy and for that reason you want acceptance tests to assure their happiness. In fact, how can you assure yourself the highest grade possible other than by proving that you deserve the highest grade possible? How can you assure your customers' happiness other than proving that their desires are fulfilled?

Of course you want the highest grade possible and of course you want to avoid all the terrible emotions and time that go into fixing a project that had previously been working. The first step towards school project zen is to pick an acceptance testing framework. Then begin to translate the assignment's requirements into high-level system scenarios. Think to yourself: how will the user use the system? Let's look at a quick example from when I was in school. One of my projects involved writing a Chained Hash Map implementation. We needed to be able to add, delete, and lookup within the Hash Map. Excellent. Here is an example of high level design.
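In a Cucumber-style framework the high level design might read like the following scenarios (the framework and exact wording are illustrative; use whatever format your chosen framework expects):

```gherkin
Feature: Chained Hash Map

  Scenario: Adding a key
    Given an empty hash map
    When I add the key "name" with the value "Bob"
    Then looking up "name" returns "Bob"

  Scenario: Deleting a key
    Given a hash map containing the key "name"
    When I delete the key "name"
    Then looking up "name" returns nothing
```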


I write these first since these are the features I need to have implemented. As I code I am actively working towards completing these high level goals one step at a time. In fact, since this is a chained hash map, let's write one more test because we thought ahead.
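The thinking-ahead scenario covers the chained part, two keys landing in the same bucket (again, wording illustrative):

```gherkin
  Scenario: Two keys that hash to the same bucket
    Given a hash map containing two keys with colliding hash codes
    Then looking up each key returns its own value
```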


I start at the top and work my way down. Notice how these steps are agnostic to implementation. The only way they point is towards succeeding in satisfying the client, not towards low level implementation. As I progress I write the step definitions to fulfill the scenarios I have laid out. Best of all, as I move from one feature to the next I can be sure that I have not broken previously working functionality. When I'm done I have assured my project's completion and my professor's (customer's) happiness. Be sure to read up on your framework because each specifies its scenarios differently. Also, each framework glues its scenarios to its test code differently.

The beauty of acceptance testing is the quick warnings it can provide. When you are working and come to a stopping point, just run your acceptance tests. If you have a failure, see if it can be resolved quickly. If it cannot, at least you now know (ahead of time) that the path you were heading down is the wrong path for completion of your project. Acceptance tests offer a red light to bad coding paths and a green light to good paths and, ultimately, project completion. Next time you receive a programming project, take the time to write up acceptance tests; your future self will be thankful that you have.

1) Martin, Robert C. Agile Software Development. New Jersey: Pearson Education, 2003. Pg 13.

Thursday, February 9, 2012

Day In Review

Today, my server and tic-tac-toe game went their separate ways. My server received the name HT(TPS) Report, which is a reference to the most excellent movie, Office Space. My tic-tac-toe game is now a single jar, and the TicTacToeClient, which implements the Client interface of my server, now lives inside that jar. That means that applications are able to use my server as a dependency and register themselves with the server.

I also have text file routes now. The user can specify a routes file and pass that in on server initialization. The end of the day involved me getting my head around the next feature. I will be allowing the user to specify a jar with a package that contains their Client Implementations and pass that, as a command line parameter, to the Server on initialization. I will then be using reflection to get the Class and make instances of that Class when necessary.

I also added command line arguments with JArgs. I highly recommend JArgs, it's lightweight and very effective.

Day In Review

I feel as though the design of my server is starting to pay off. After I did some very useful refactorings to my server it started to feel flexible and easily extensible. I was able to add a DB Persistor fairly easily. Implementing the DB Persistor was another story.

I first thought that I wanted to serialize my objects into strings to avoid any type problems in the database. There is an awesome library named Gson which serializes java objects into JSON. The user specifies the types expected upon deserialization and it works like a charm. There was one rough spot, however, and that is with interfaces. If you serialize an object nested with references to interfaces then deserialization becomes hell. The deserializer does not know how to recreate the concrete implementation of the interface since the deserialization type only contains nested interfaces.

In the end, I avoided that mess by making my database hold references to Object. I then type-casted when pulling references back out and it worked no problem.
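The workaround amounts to something like this sketch (illustrative names, not my actual persistor): the store holds plain Objects and the caller supplies the concrete type when pulling references back out.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: a database of Objects; callers cast back to the concrete
// type on the way out, sidestepping interface deserialization entirely.
public class ObjectStore {
    private final Map<String, Object> records = new HashMap<String, Object>();

    public void put(String key, Object value) {
        records.put(key, value);
    }

    public <T> T get(String key, Class<T> type) {
        // Class.cast gives us a checked cast back to the concrete type
        return type.cast(records.get(key));
    }
}
```

The trade-off is that type errors move from serialization time to lookup time, but for an in-memory store that was an acceptable price.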

Tuesday, February 7, 2012

Day In Review

In the morning I continued working on my HTTP server, working towards finishing the large story of my iteration. I had everything working except for one pesky little bug, which in turn stopped me from completing my iteration. Of course, after my iteration I found the problem and had everything working. When I was reading the input stream of an incoming HTTP packet I had an if statement that asked if the input stream was ready to be read. This would fail ~1/100 times, which allowed it to go unnoticed for a long period of time (and allowed me to pass my acceptance test). In the end I added a while loop that sits and waits until the input stream is ready to be read.

In the afternoon I had my IPM with Paul. I made my stories smaller in scope and in points in order to avoid missing an iteration again. I will also be narrowing the scope of my recent blog post 'Students, Take Notice.' The new post will focus on acceptance testing for small school projects.

Monday, February 6, 2012

(Retro) Day In Review

On Friday I spent the morning working on my HTTP Server. After 8th Light University, we had a discussion about what the University events should look like moving forward. I felt like it was a productive meeting to discuss the changing style of the event (as more and more attendees are present). We also had a quick meeting for Artisan.

Afterwards, I took a look at the Jukebox code base in an attempt to implement a new feature. On the to-do list was a feature request for a hammertime to pause the system upon completion. The idea is that if a standup hammertime plays, the users will not want the jukebox to continue afterwards, so it can be quiet for standup. I plan to revisit the little work I did on this when I have some free time.

Students, Take Note

After graduating college and becoming exposed to the large array of ideas in industry, I started to reflect on what it would have meant to have had exposure to these ideas while in school. I wonder how much time and frustration I could have avoided with three specific disciplines I have since only begun to understand. In all fairness, I did not avoid them; however, I was not actively seeking them out. I suspect most Computer Science related majors are in the same position. These ideas are not necessary to graduate. They are not even necessary to do well; school is not designed in this way. They are, however, necessary for good design and programmer sanity. What ideas am I speaking of? Acceptance Testing, Test-Driven Development, and the Single Responsibility Principle.

I put acceptance testing first in the list for a reason. Acceptance testing is a little bit harder to grasp than test-driven development, and this is probably due to acceptance testing being at a higher level. Acceptance tests "act to verify that the system is behaving as the customers have specified" (1). What does this mean to a student, since they have no customers? Think of it this way: as a student, you must approach assignments as if they were your job, as if your financial stability and your reputation were at stake. Your teacher is your customer. You want to make sure your customer is happy and for that reason you want acceptance tests to assure their happiness. In fact, how can you assure yourself the highest grade possible other than by proving that you deserve the highest grade possible? How can you assure your customers' happiness other than by proving that their desires are fulfilled?

Of course you are saying, 'well duh, I want to get the highest grade possible.' Great. Pick an acceptance testing framework suitable to your situation. Then begin to translate the assignment's requirements into high-level system scenarios. Think to yourself: how will the user use the system? Let's look at a quick example from when I was in school. One of my projects involved writing a Chained Hash Map implementation. We needed to be able to add, delete, and lookup within the Hash Map. Excellent. Here is an example of high level design.
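With a Cucumber-style framework those requirements could become scenarios along these lines (illustrative wording; each framework has its own format):

```gherkin
Feature: Chained Hash Map

  Scenario: Adding and looking up a key
    Given an empty hash map
    When I add the key "name" with the value "Bob"
    Then looking up "name" returns "Bob"

  Scenario: Deleting a key
    Given a hash map containing the key "name"
    When I delete the key "name"
    Then looking up "name" returns nothing
```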



I write these first since these are the features I need to have implemented. As I develop I am actively working towards completing these high level goals one step at a time. In fact, since this is a chained hash map, let's write one more test because we thought ahead.
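The thinking-ahead test covers the chained buckets (illustrative wording):

```gherkin
  Scenario: Two keys that collide
    Given a hash map containing two keys with the same hash code
    Then looking up each key returns its own value
```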



I start at the top and work my way down. Notice how these steps are agnostic to implementation. The only way they point is towards succeeding in satisfying the client, not towards low level implementation. As I progress I write the step definitions to fulfill the scenarios I have laid out. Best of all, as I move from one feature to the next I can be sure that I have not broken previously working functionality. When I'm done I have assured my project's completion and my professor's (customer's) happiness. Be sure to read up on your framework because each specifies its scenarios differently. Also, each framework glues its scenarios to its test code differently.

Throughout school I can remember having the feeling of being finished with an assignment only to begin my manual testing before submission. I would click around, type some input, and read back the output and assure that it was correct. Then, I would have this feeling of horror in my stomach when a previous feature now failed. My stress level would instantly rise and I would spend an inordinate amount of time in a loop of fixing a feature to find out, with manual testing again, that I had broken my program somewhere else. Once one gets into the habit of acceptance testing that loop changes drastically, and for the better! The stress of breaking some far part of the system is mitigated by instantaneous feedback when it happens. We instantly know if some implementation of a feature was done incorrectly because our test will tell us.

The next idea I wish I had known about in college is Test-Driven Development (TDD). This is testing at a lower level than acceptance testing. TDD is the process of testing each individual modular piece of code in the system. In fact, it's not just testing the code, but testing the code before it is written. Do not worry, it's not as bizarre as it sounds. When I begin to write a new class or a new method, I first write a failing unit test. I specify, somewhat like an acceptance test's scenario, what the module should do. The unit test is agnostic to implementation; it just checks to make sure that the implementation works. I watch the test fail. Then, I implement. If my first try does not work exactly as I intended, I immediately receive feedback on what went wrong. Why is this a good idea? Where acceptance testing is to ensure the happiness of your client, test-driven development is to ensure the happiness of yourself and your group. If my test passes then I am assured that this modular piece of code conforms to my automated test's standards. Writing a test first forces the implementation that passes the test to be flexible enough to run independently in a series of unit tests. This flexibility goes a long way. In fact, when I write unit tests I don't expect the flexibility to pay off initially. I expect it to pay off over time and in ways I cannot yet imagine.
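As a concrete sketch of the rhythm, here is what test-first might look like in Python with the standard unittest module; ChainedHashMap is a hypothetical class under test, and the implementation is written only after watching the test fail:

```python
import unittest

# Step one: the failing test, written before any implementation exists.
class ChainedHashMapTest(unittest.TestCase):
    def test_lookup_returns_added_value(self):
        table = ChainedHashMap()
        table.add("name", "Bob")
        self.assertEqual(table.lookup("name"), "Bob")

# Step two: the minimal implementation that makes the test pass.
class ChainedHashMap:
    def __init__(self, buckets=16):
        self.buckets = [[] for _ in range(buckets)]

    def add(self, key, value):
        self.buckets[hash(key) % len(self.buckets)].append((key, value))

    def lookup(self, key):
        for k, v in self.buckets[hash(key) % len(self.buckets)]:
            if k == key:
                return v
        return None
```

The test names the behavior, not the bucket layout, so the implementation stays free to change underneath it.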

I might be speaking for myself, but I can remember writing monster functions as a student. When I changed one little line in a twenty line function, it would break for one case and not all of the other cases. The changes always seemed simple enough, but they never were. If I had written tests I would immediately have known what had broken and where. A suite of unit tests is excellent at pinpointing where the code changes went wrong. Couple unit tests with acceptance tests and your software's reliability and flexibility increase; as reliability and flexibility increase, your stress level goes down, since the development process starts to contain fewer bugs and fewer surprises.

The Single Responsibility Principle (SRP) should be thought of as going hand in hand with TDD. SRP is part of the SOLID principles, but I want to talk about this one principle in isolation. As a student, one problem I had with my code, that I can now reflect on, was large unwieldy methods and classes with far too many responsibilities. It is important to let TDD drive SRP in your code. When you write a new test, ask yourself: is this testing one responsibility? If not, how can I rethink my design to do so? The flexibility afforded by TDD is multiplied by SRP. When we divide responsibilities fairly, one per module, we do not hide side effects in our code. We explicitly handle our use cases purposefully. When we do not hide side effects we do not have surprises lying in wait. When we use TDD we can point to a single responsibility that has broken down in our system and know with confidence that the broken test is only doing one thing; it only has one responsibility.

SRP is another tenet of flexible design. When we break apart our responsibilities we allow ourselves to change any one use case rather easily. We isolate the responsibility to change. We find our test for the responsibility and change the assertions to match our new use case. We then watch as the test fails. We find the corresponding module to change. We know that the use case is safe to change because it is isolated. We pass our test. We run our unit test suite and watch as they pass. We run our acceptance test suite and watch as they pass. All the while we can think about the surprises and stress we had avoided.

1) Martin, Robert C. Agile Software Development. New Jersey: Pearson Education, 2003. Pg 13.

Day In Review

The day started with the bi-weekly Hosemonster IPM. Thankfully, the iteration is slow enough and laid back enough for us apprentices to take part and have the opportunity to learn. It's nice to take time to learn about process and good code while simultaneously dealing with a live client. I enjoy the feedback of an IPM because the direction of the application comes forward while at the same time the client and team work out the details of the big picture. It's nice to see both of those perspectives come together at one time.

After the IPM I attacked a bug in the PDF generation for Work Orders. When the app split into two apps (Hosemonster and Limelight-Hosemonster-UI) there was a weird break in the templating for the PDF generator. I felt accomplished when I tracked the bug down because I was able to restore the previous functionality and remove a superfluous function from the namespace. The solution felt cleaner and I was proud of that.

I also paired with Wei Lee on a bit of refactoring in preparation for plugging in the graphs we had been working on for quite some time.

Friday, February 3, 2012

Day In Review

My HTTP Server's refactoring has been going fairly well. Currently, the Packet and PacketParser classes are messy and I am slowly refactoring their behavior. I'm also pulling the responsibility of generating the return string of a packet out from the Packet class. I plan to let the Packet be only a hash of HTTP packet attributes. I will then use the presenter pattern and make a Packet Presenter to generate the outgoing packet.
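The direction I'm refactoring towards looks roughly like this sketch (names are illustrative and the header handling is simplified): the Packet reduces to a hash of HTTP attributes, and a presenter owns generating the outgoing string.

```java
import java.util.Map;

// Sketch of the presenter pattern: the packet is only a hash of
// attributes; the presenter generates the outgoing packet string.
public class PacketPresenter {
    public static String present(Map<String, String> packet) {
        String body = packet.containsKey("body") ? packet.get("body") : "";
        return "HTTP/1.1 " + packet.get("status") + "\r\n"
             + "Content-Length: " + body.length() + "\r\n"
             + "\r\n" + body;
    }
}
```

Keeping the Packet as dumb data means the formatting concern can change, or grow new output formats, without touching parsing.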

I've also run into some trouble. Before, I kept my in memory data store on the top level of the server. Now, I need to find a different way to persist the state of Tic-Tac-Toe games since the top level is agnostic to the implementation and can no longer hold the data store. I've been thinking over the possibilities, but it's hard to choose something to refactor towards. Once I have my Packet workflow refactored I will be tackling this decision.

Wednesday, February 1, 2012

Day In Review

I completed my second apprenticeship iteration today and started a new one. This week I will be contributing three points to hosemonster and the remaining time will be spent refactoring my HTTP server from last summer. Currently, the server is integrated with my Tic-Tac-Toe game. I had thought that it was heavily intertwined, but I'm finding that to not be the case. I'm missing a few key server abstractions, but the refactoring is going fairly smoothly after one day.

One atrocity of the code base was the state of my tests. I ran the tests when I first pulled the project and they froze. Why did they freeze? They were waiting for STDIN input to move forward with specific tests. This was no good, so I started handrolling mocks and fixing the state of my tests. After I felt comfortable with the tests in place I moved towards refactoring.

I've been thinking about a way of metaphorically naming my server components at a high level. One idea I like is to use business metaphors, as if the server were an office. The forward facing ServerSocket piece would be the 'Receptionist.' The receptionist would pass the socket off to a 'Middleman,' which I was originally calling the Dispatcher. The middleman would then pass off to the 'CEO.' The CEO metaphor probably won't stick; I don't like it. The CEO currently represents the high-level interface to the business logic, which is ignorant of the framework. The naming scheme is a work in progress.

Monday, January 30, 2012

Clojure Code in Java

There's been a lot said about using Java in Clojure code. However, the circumstance may arise when you need to write Clojure code to run in Java. It is a little more roundabout and a little less clean, but it is possible. I'll be going over the gen-class option available for the interop to take place.

Let's look at a simple example.
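A minimal sketch of such a gen-class (the test package and Speak class match the discussion below; treat the details as illustrative):

```clojure
(ns test.Speak
  (:gen-class
    :name test.Speak
    ;; every exposed method must be declared here
    :methods [[hello [String] String]]))

;; Implementations are prefixed with '-' by default
(defn -hello [this caller-name]
  (str "Hello, " caller-name))
```

From Java, after AOT compilation, this is used as `new test.Speak().hello("my name")`.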


What we have available now is a package test with class Speak. We can instantiate an instance of Speak. We can call hello("my name") on our instance and I'm sure you can guess what happens. One caveat to notice is that we must prepend our function names with '-' and we must declare our functions in the top gen-class block. We can then get a little fancier. Here's the same example with a global variable.
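A sketch of the stateful version (illustrative; an atom holding a map is one common way to work within the single :state slot):

```clojure
(ns test.Speak
  (:gen-class
    :name test.Speak
    :state state
    :init init
    :methods [[setName [String] void]
              [speak [] String]]))

;; :init returns [superclass-ctor-args initial-state]
(defn -init []
  [[] (atom {:name nil})])

(defn -setName [this new-name]
  (swap! (.state this) assoc :name new-name))

(defn -speak [this]
  (str "Hello, " (:name @(.state this))))
```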



Now we can set the name on our instance before we call the speak function. Notice the :state declaration in the top block. We can only have one state variable, so use it wisely. Here, I intend to only use it to hold the name to say hello to. A good pattern is to use the state option to store a map of all the variables one might need since we are limited to one state variable.

Day In Review

I was on the Hosemonster project again today. We were able to finish the graphing framework and completed fitting curves to a set of data points. It felt good to have a finished product that we will then plug into the application.

The second half of the day was dedicated to refactoring. The new, edit, and view pages for our models were all basic and contained many repeating portions. In order to stay DRY and to help keep the system simple we started refactoring to collapse the new, edit, and view pages of our models into one page that then tweaks itself dependent on a parameter corresponding to an action which is passed down. I liked this refactoring since it meant we had a pattern to follow as more and more models are introduced to the system.

Another refactoring we had was to collapse the Interactor's update and create functions into one save function. Again, these two functions contained a lot of the same lines and in order to stay DRY we collapsed them into each other. This change was fairly simple since we need to create if the model does not have an id and update if it does. This was the only difference between the two functions.

Sunday, January 29, 2012

Using External Dependencies For Specific Use Cases

I am going to recount a recent learning experience I encountered while working on a Rails project. The ideas I came away with are not Rails specific, however, and are applicable to all software. The learning experience involved leaning on third party libraries. This lean later turned into a fall, and I realized that third party libraries are to be used for a specific case and only for that case. Dependencies are for what you yourself cannot do on your own in a reasonable time period. I learned that we must be careful with the libraries we make use of, since their behavior is outside of our control even if we believe we have them pinned down in tests.

ActionMailer is a gem most Rails programmers are familiar with. For a story I was completing I needed to email notifications and I knew I was not going to implement a mailing system in a reasonable time period. I then introduced ActionMailer and wrapped the functionality. I had the system under test and I felt pretty confident with what I was using ActionMailer for. My tests were green. The code then went live and I quickly learned that I had made a big mistake. I started receiving ActionMailer generated exceptions. What had happened? I was green when I had committed!

Well, I was green and it was a false positive. ActionMailer has a different set of rules for test and for production. In my case I leaned on ActionMailer and got burnt. I took an array of email addresses and joined them by commas to produce a string to be used as the recipient list of my generated mail. When there were no array entries this produced an empty string. When I sent an empty string to ActionMailer in test it essentially disregarded it. Great, I thought, ActionMailer handles my empty email list case! WRONG!

In production an empty string as a recipient list with ActionMailer produces an exception. When the operation is fairly common it produces a lot of exceptions. It was my fault for not narrowing my usage of ActionMailer to the specific cases in which it was actually needed. I instead used it for my empty string case as well. The moral of the story: external dependencies are for specific use cases and nothing more.
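In code, the lesson boils down to owning the edge case yourself instead of letting the dependency's behavior define it. A hedged Ruby sketch (NotificationMailer is a hypothetical mailer class standing in for my wrapper):

```ruby
# Guard the empty-recipient case explicitly so the behavior is ours,
# identical in test and in production, before the dependency is touched.
def deliver_notifications(addresses)
  recipients = addresses.compact.reject { |address| address.strip.empty? }
  return if recipients.empty? # our case, handled by our code

  NotificationMailer.notice(recipients.join(",")).deliver
end
```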

Friday, January 27, 2012

Day In Review

Yesterday, I worked on Artisan for the entire day. I paired with Myles for the whole day, since Myles is new to Artisan and unfamiliar with the code base. This was a good experience for the both of us: Myles was introduced to the code base, and I had to articulate the components of the system. It's always a good idea to articulate, at a high level, what each component of a system does. It forces you to think about your system at a high level, and the missing abstractions seem to bubble up when you talk about a system in this way.

Specifically, Myles and I were able to get the iteration new/edit form working as a modal. This keeps the workflow contained on the storyboard page, which was the goal of our work. The changes actually pointed us towards a missing abstraction, and implementing the abstraction for iteration presentation felt good. It made the change feel painless and allowed the existing code to stay largely intact.

The second half of the day we revisited the storyboard column sorting bug. We fixed the bug, however we introduced redundant behavior (sorting when it is unnecessary) and are having trouble cutting this behavior out.

Wednesday, January 25, 2012

Day In Review

I started the day off working on the storyboard sorting bug. This is a tricky bug to get rid of because of the way the behavior works. Sortable, the jquery-ui widget, allows callback functions for an update in a single column and a receive event from another column. What is happening is that dragging a single story across columns will trigger an update event in the original column, an update event in the new column, and a receive event in the new column. This is fine and dandy; however, we trigger a frivolous update event in the original column and it's wasteful. I've been trying to find a way to avoid that update event, but it's hard to define the behavior in such a way that the event fires in one column and not in the other. This process is ongoing and I plan on revisiting it tomorrow.

The second half of the day I dedicated to hosemonster and the graphing functionality. I have been spiking out ways to generate curve fitting functions given a set of data points. In the morning I, out of curiosity, entered a sin function and got a jagged, heart-monitor looking output. I hypothesized that calculating more points would smooth the line and it did: I had a continuous sin function. This is when I got really excited; what a cool thing to have happen.

The next problem was figuring out how to calculate the curve fitting function. The solution involves creating an arbitrary length polynomial whose degree is determined by the number of points supplied. Creating the polynomial involves the use of Gaussian Elimination to solve for the coefficients of the function. Wai Lee and I were able to get a polynomial class under test, use the test data provided, plug in the polynomial generator, and create fitted curves for the data points.
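The project code is Clojure, but the idea can be sketched briefly in Python: build the Vandermonde system for an exact-fit polynomial through the points and solve it with Gaussian elimination (partial pivoting added for numerical stability):

```python
# Fit an exact polynomial of degree n-1 through n points by solving
# the Vandermonde system with Gaussian elimination.
def fit_polynomial(points):
    n = len(points)
    # Augmented matrix: rows of [x^0, x^1, ..., x^(n-1) | y]
    m = [[x ** j for j in range(n)] + [y] for x, y in points]
    for col in range(n):
        # Partial pivoting: bring the largest remaining entry up
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(col + 1, n):
            factor = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= factor * m[col][c]
    # Back substitution
    coeffs = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = m[r][n] - sum(m[r][c] * coeffs[c] for c in range(r + 1, n))
        coeffs[r] = s / m[r][r]
    return coeffs  # coeffs[i] is the coefficient of x**i

def evaluate(coeffs, x):
    return sum(c * x ** i for i, c in enumerate(coeffs))
```

Three points produce a quadratic, four a cubic, and so on, which is exactly the "degree determined by the number of points" behavior described above.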

Tuesday, January 24, 2012

Clojure Map to Ruby Hash Kata

I performed the Clojure Map to Ruby Hash Kata and have the video available. Unfortunately, I wasn't able to include audio to introduce the kata and walk through some of the steps. That being said, you should be able to follow along. Enjoy.

Saturday, January 21, 2012

Day In Review

After completing some stories for Artisan, I needed to go back and review the work I had done. Unfortunately, there were some items that needed work. This was a helpful review, since it's part of the learning process. One big mistake I had made is probably a common one among Rails developers: I had missed a key abstraction and placed behavioral code in my controller.

I had gone from the highest level, an incoming HTTP request, to the lowest level, talking to the database, all in one controller method. Instead of doing this, I pulled the behavioral code into an interactor that mapped to an existing model. This turned out to be great for the codebase because when the interactor first came together I began to notice responsibilities that lived hodge-podge in the system that ought to be shifted to this new abstraction. It was like taking a weight off the shoulders of many system components and it helped reduce code duplication, which is always a win.

Tuesday, January 17, 2012

Clojure records, types, and protocols

Clojure offers a way to structure data in the form of records, types, and protocols. Even more interesting, functions (behavior) can be added to these structures. If you are coming from an object-oriented background this probably sounds familiar, but there are differences that are both small and large worth noting.

Protocols, in Clojure, are roughly equivalent to an interface in Java. A Clojure protocol, like a Java interface, defines a contract that an implementer agrees to when implementing the protocol. Simple enough, right? Not so fast: what might strike someone as strange (and can be a potential pitfall) is that concrete implementations of a protocol are not required to implement all of the methods defined on the protocol. That means that when using a protocol there are no guarantees that a function in the contract of the protocol is implemented on the concrete record or type. So, when receiving or dealing with a protocol, as an abstract contract of available functions, the record or type behind the protocol might not be living up to its end of the contract. This is something to watch for when using protocols.

Records are a way to structure a persistent map. When defining a record the user can define all of the fields they wish to hold within the map. These fields become the keys and the values are populated during creation of an instance of a record. What is different from object-oriented programming is that the values of the fields never change after the record is instantiated, the data is immutable. When will a record be useful? When we want to define a set of fields that is commonly repeated and then provide a common way to populate the fields and refer to the map. Now that we understand the data side, how does the behavior side fit in? The only way to place functions within the record is to implement a protocol. Once a record makes use of a protocol it is then able to implement as many (or as few) of the protocol's abstract functions as it needs. Remember, however, that the data is immutable and the functions can not manipulate the stored data.

Types are very similar to plain old Java objects (POJOs). Where a record is a map, a type encapsulates its data, accessed using dot-notation as one would in most any object-oriented language. As with records, types must implement a protocol to have functions and are limited to the functions available on the protocol. Also, as with records, types do not need to implement all of the functions of a protocol.

After reading the high level overview it would probably be nice to look at some code:
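A small illustrative sketch covering all three (names invented for the example):

```clojure
(defprotocol Greeter
  (greet [this])
  (farewell [this]))

;; A record: a persistent map whose fields are fixed at definition time.
;; Note we only implement part of the protocol -- legal, but a caller
;; invoking farewell on a Person would blow up at runtime.
(defrecord Person [first-name last-name]
  Greeter
  (greet [this] (str "Hello, " first-name " " last-name)))

;; A type: data encapsulated behind dot-notation rather than a map.
(deftype Robot [id]
  Greeter
  (greet [this] (str "BEEP " id)))

(greet (->Person "Ada" "Lovelace"))       ;; => "Hello, Ada Lovelace"
(:first-name (->Person "Ada" "Lovelace")) ;; => "Ada" (records act as maps)
(greet (->Robot 7))                       ;; => "BEEP 7"
```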


Monday, January 16, 2012

Day In Review

Today was dedicated to the hosemonster project. The models on the project keep expanding and that has been keeping me and my pair busy. I really like the view / model / interactor separation that we have implemented. The views push information down to the interactor, which encapsulates the models. In return the interactor pushes information back to the UI. This keeps a nice separation and I enjoy implementing the idea of tell, don't ask in the workflow.

Sunday, January 15, 2012

Day In Review

On Friday I continued working on the hosemonster project. I paired on adding fields and validations (lots of fields and validations) to our existing models. We had received a new specification for some models and this was the first step in implementing the specification. At lunch, Dave gave a great talk on building web sites in Clojure. I enjoyed how it was given from a framework agnostic point of view and described the commonality between all Clojure web frameworks. For Waza, I paired on Metis with Myles and Wei-Lee and discussed what methods should be on the interface for the Data Store. In the end we decided on simplicity, offering only create, update, save (update and, if not available, create), find-all (attribute matched), find-first, find-by-id, delete-all, delete-first, and delete-by-id. We avoided the ActiveRecord-esque temptation to use macros and offer more methods since we wanted a simple interface for all types of data stores to implement. We then began implementing the in-memory data store. We didn't finish and left the in-memory data store as a work in progress.
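Expressed as a Clojure protocol, the interface we settled on might look like this sketch (the signatures are my illustration, not the final Metis source):

```clojure
;; Each data store (in-memory, SQL, ...) implements this one contract.
(defprotocol DataStore
  (create       [store attrs])
  (update       [store id attrs])
  (save         [store attrs])   ;; update, or create if not available
  (find-all     [store attrs])   ;; attribute matched
  (find-first   [store attrs])
  (find-by-id   [store id])
  (delete-all   [store attrs])
  (delete-first [store attrs])
  (delete-by-id [store id]))
```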

Thursday, January 12, 2012

Namespace Your Javascript

Imagine, if you will, any small non-trivial program, whether it's in an OOP, functional, or procedural language. To make this post easy to follow, think of Tic-Tac-Toe. What are you imagining? Maybe some kind of board data structure, probably abstracted away in a container, some kind of game logic container, and so forth. This is good; this is the kind of separation that we ought to strive for when building a Tic-Tac-Toe application. Now think about the last web app you worked on. Think about your javascript file(s). Did they have the same kind of separation? If so, then stop reading. However, it is very likely that they did not have the proper level of separation. Did you have a single application.js? Were you sending it for every single page load? Yikes! Did it make use of other libraries that you needed to send over on every page load? Double yikes! Maybe it looked something like this:
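Something like this (a reconstructed illustration in jQuery style; the ids and selectors are invented):

```javascript
// Everything global, everything bound inside one document ready
$(document).ready(function() {
  $("#some_button").click(function() { $("#some_div").hide(); });
  $("#some_other_button").click(function() { $("#some_div").show(); });
  $("form").submit(function() { reportTotal(); });
});

// A free function cluttering the global namespace
function reportTotal() {
  var total = 0;
  $(".amount").each(function() { total += parseInt($(this).text(), 10); });
  alert(total);
}
```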


What we can see here is two very different responsibilities. One, we want to hide and show some_div depending on certain button clicks. Two, we want to sum a column as feedback for a form submit. Instead of namespacing these two concerns, we've cluttered the global namespace and given functionality to every div with id some_div, every button with id some_button/some_other_button, and triggered a callback on every form submit. Furthermore, we make our event bindings in document ready. What happens when the specifications change and now, after totaling the column, we must trigger extra events depending on the total? We'll just throw another function on there, right? WRONG!

We're going to namespace. For this blog post we'll work with this simple example, and therefore we will namespace simply. For more complex applications with many separate files, take a look at this namespace function. So, we refactor, and now we have a namespace for each separate concern.
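Again, the refactored snippet didn't survive extraction; given that the post names Visibility and ColumnReporter below, a plausible sketch of it (internal details are my assumption, jQuery again assumed) would be:

```javascript
// Refactored: one namespace object per concern.
var Visibility = {
  toggle: function (selector) {
    $(selector).toggle();
  },
  bindEvents: function () {
    $('#some_button, #some_other_button').click(function () {
      Visibility.toggle('#some_div');
    });
  }
};

var ColumnReporter = {
  total: function (values) {
    return values.reduce(function (sum, n) { return sum + n; }, 0);
  },
  bindEvents: function () {
    $('form').submit(function () {
      var values = $('.column-cell').map(function () {
        return parseFloat($(this).text());
      }).get();
      alert('Total: ' + ColumnReporter.total(values));
    });
  }
};

// document ready now only wires things up; the behavior itself
// lives on the namespace objects, where it can be tested directly.
if (typeof $ !== 'undefined') {
  $(document).ready(function () {
    Visibility.bindEvents();
    ColumnReporter.bindEvents();
  });
}
```

Note that the binding logic is now an ordinary function on each namespace, so a test can call `ColumnReporter.total` without ever touching the DOM.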


Now, our global namespace contains only Visibility and ColumnReporter. We've moved our bindings out of document ready, which means we can put them under test! The code looks cleaner; it's organized with a purpose. Furthermore, we can separate the code into two files and send over each file only when necessary. Of course, there's more refactoring to be done, but we're on our way to clean javascript without a cluttered, bloated global namespace.

Day In Review

I began yesterday by working on the hosemonster project, but soon after stand-up Micah had a task for me to complete. Micah asked for a Clojure map to Ruby hash converter. It was to be written in Clojure, take a map as a parameter, and return a string representation of a Ruby hash. The source is found here. It was a good learning experience, since I went through a few refactorings. It was also good practice to iterate through a map recursively, since recursion is the natural way of iterating in Clojure.

The second part of the day, Micah wanted the converter plugged into his code sparring app in order to allow more languages to participate. Ultimately, I failed to have it return Ruby hashes from the app, but I plan on giving it another go soon.


Tuesday, January 10, 2012

Day In Review

Working with Clojure is a paradigm shift and a new route to explore on my software journey. I am enjoying taking on a language that differs from Ruby in many ways and offers a change of pace. Decoupling functions from data encapsulation has advantages that I'm starting to see. Although Clojure can have side effects, I like the emphasis on side-effect-free functions. On the flip side, it offers different challenges and requires a different perspective to accomplish tasks in code. The constant nesting of functions can be dizzying at first, but by the end of the day it started feeling more natural. I must give a big shout-out to my pairs for the day, since they were patient with me while I took in the new code base. I'm looking forward to continuing down the Clojure path, as I feel there are many interesting ideas to explore through Clojure and functional programming as a whole.

After the work day ended, Uncle Bob gave a talk on Professionalism in Software. He spoke at length about the "inverted pay structure" of software development, emphasizing that software's details are only fleshed out in source code. Uncle Bob spoke about using professional methodologies that accept this fact. Afterwards, I was able to speak with some attendees who work at non-Agile shops. Their experiences were interesting to hear about, and I took away some understanding of what it means to not follow the methodologies I have been following in my very short time in the industry. Moving forward, I will take these lessons into account as I try to discipline myself to work more and more professionally.


Monday, January 9, 2012

Day In Review

Today, I began by picking up a three-point Artisan story. I like the abstractions between input and presentation that we are building in Artisan. The current story asks for emails to be sent to a configurable list of people when certain events happen. I was able to place email handlers into the classes that process these change events, and it felt very painless, which is great. The email handlers themselves are passed high-level objects and pull out the necessary data for email presentation, which removes that responsibility from other locations in the code. Moving forward, I want to make this email presenter class easily extensible so that future developers can easily manage mass emails in the same way.

The second portion of the day I spent working on the Clojure Koans. Thankfully, I have had the opportunity to pair briefly with Myles on Metis, an ORM for Clojure. Unfortunately, even after pairing, Clojure still feels a bit weird to me. I'm purposefully moving slowly and constantly referencing the documentation to try and get a good feel for the language.