Monday, March 19, 2012
The Interface Segregation Principle in Dynamically Typed Languages
Wednesday, March 7, 2012
Day In Review
Tuesday, March 6, 2012
Day In Review
Monday, March 5, 2012
Day In Review
Sunday, March 4, 2012
Hot Swapping In Java
(Retro) Day In Review
Wednesday, February 29, 2012
Day In Review
Tuesday, February 28, 2012
Day In Review
Monday, February 27, 2012
Ruby's Functional Programming
Ruby is a fully object-oriented language; in fact, it is so dedicated to objects that every entity in the language is an object. Given that, it might seem a little strange that Ruby also has functional support. Ruby's functional aspects are powerful and complete, which makes them worthwhile to learn. We can invoke operations such as reduce and filter with the use of code blocks. We can also pass functions around directly with lambda and Proc objects. In this post I'll show you some of the more powerful tricks, point out their potential pitfalls, and document the experience with code to play with.
Every method call can be given an additional code block. If you don't believe me, put a block on the end of every method call that currently lacks one and watch as (almost) everything still works as intended (please don't actually do that). I'll introduce code to show this, but first let's look at a straightforward imaginary workflow:
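A minimal sketch of such a workflow; the Employee struct and lookup are illustrative stand-ins for a real model:

```ruby
# Illustrative model: a real app would look this up in a data store.
Employee = Struct.new(:name)

def get_employee(id)
  Employee.new("Employee ##{id}")
end

def get_name(employee)
  employee.name
end

get_name(get_employee(1))  # => "Employee #1"
```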
Now get_name just returns the name attribute of an object, as you might guess. If we wanted to use this to get the name of an employee, we would first find the employee and then pass it in as a parameter, as we have done above. Let's show the same workflow, but this time let the code block give us the name:
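A sketch of the block-taking variant, again with an illustrative Employee stand-in for a real model:

```ruby
Employee = Struct.new(:name)

# When a block is given, yield the employee to it and return its result;
# otherwise just return the employee.
def get_employee(id)
  employee = Employee.new("Employee ##{id}")
  return yield(employee) if block_given?
  employee
end

get_employee(1) { |employee| employee.name }  # => "Employee #1"
```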
Cool, right? We can build any employee interaction we want on top of get_employee using code blocks. This is an imaginary case, however, and code blocks aren't always the best option, so use them wisely.
Code blocks are part of many of the standard library's methods, which lets us use functional ideas in Ruby code. For example, let's look at an inject that sums all of the elements, each multiplied by two.
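Such an inject might look like:

```ruby
# Start the accumulator at 0, then fold each doubled element into the sum.
[1, 2, 3, 4].inject(0) { |sum, n| sum + (n * 2) }  # => 20
```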
These are powerful expressions because the intermediary steps (i.e. the summations and a single element's multiplication by two) are stateless. We simply put data into the inject and a single answer comes out with no side effects. Other such functional actions include (but are not limited to) reduce, collect, and reject. Ruby's Enumerable has lots of functional methods.
The last functional item I want to share is the use of closures. In Ruby we can use lambda, Proc, and method(:name) to create closures. They all appear very similar but have subtle differences; for the sake of learning we will set those aside and use Proc to explain the concept. Procs are objects that encapsulate executable code: with a Proc we can bundle up some code and pass the object around until we are ready to call it. For a simple example, look at the following:
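A minimal example of bundling up code in a Proc and calling it later:

```ruby
# The Proc captures the code now; nothing runs until .call.
greet = Proc.new { |name| "Hello, #{name}!" }

# ...pass greet around as an ordinary object, then invoke it:
greet.call("world")  # => "Hello, world!"
```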
This should feel very similar to the code blocks we discussed earlier. That's because a code block is a type of closure! Think of Procs as code blocks that can be held for later use. Let's explore closures a little more:
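A hedged reconstruction of the scenario discussed next: two modules each define a Bar, and a Proc built inside Foo is handed to Example and called there (the exact names are illustrative):

```ruby
module Foo
  class Bar; end

  # The Proc is declared here, so its constants resolve in Foo's scope.
  def self.gen_bar
    Proc.new { Bar.new }
  end
end

module Example
  class Bar; end

  # The Proc is *called* here, but it is still evaluated in Foo's scope.
  def self.build(bar_proc)
    bar_proc.call
  end
end

Example.build(Foo.gen_bar).class  # => Foo::Bar, not Example::Bar
```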
What happened here? We invoked gen_bar in the Example::Bar object and therefore Bar.new should invoke a new Example::Bar, right? Wrong! Procs are always evaluated in their declared scope. That means that in this case the Proc was executed in the context of module Foo even though it was called in module Example. This is something to keep in mind as closures are passed between classes and modules.
Functional concepts in Ruby can make coding easier, cleaner, and more expressive. It's important to understand these concepts so you can apply them correctly when a problem calls for a functional solution.
Day In Review
Sunday, February 26, 2012
(Retro) Day In Review
Thursday, February 23, 2012
Day In Review
Wednesday, February 22, 2012
Day In Review
Monday, February 20, 2012
Decision Deferment
Day In Review
Saturday, February 18, 2012
Day In Review
Friday, February 17, 2012
Day In Review
Wednesday, February 15, 2012
Day In Review
Monday, February 13, 2012
Day In Review
Sunday, February 12, 2012
Acceptance Testing For Student Projects
Thursday, February 9, 2012
Day In Review
Day In Review
Tuesday, February 7, 2012
Day In Review
Monday, February 6, 2012
(Retro) Day In Review
Students, Take Note
I put acceptance testing first in the list for a reason: it is a little harder to grasp than test-driven development, probably because it operates at a higher level. Acceptance tests "act to verify that the system is behaving as the customers have specified" (1). What does this mean to a student, who has no customers? Think of it this way: as a student, you must approach assignments as if they were your job, as if your financial stability and your reputation were at stake. Your teacher is your customer. You want to make sure your customer is happy, and for that reason you want acceptance tests to assure their happiness. After all, how can you assure yourself the highest grade possible other than by proving that you deserve it? How can you assure your customers' happiness other than by proving that their desires are fulfilled?
Of course you are saying, 'well, duh, I want the highest grade possible.' Great. Pick an acceptance testing framework suitable to your situation, then begin to translate the requirements into high-level system specifications. Ask yourself: how will the user use the system? Let's look at a quick example from when I was in school. One of my projects involved writing a chained hash map implementation. We needed to be able to add, delete, and look up within the hash map. Excellent. Here is an example of the high-level design.
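In a Cucumber-style framework, the scenarios might read something like this (the wording and step phrasing are illustrative, not the original assignment's):

```gherkin
Feature: Chained Hash Map

  Scenario: Adding a key
    Given an empty hash map
    When I add the key "name" with the value "Bob"
    Then looking up "name" returns "Bob"

  Scenario: Deleting a key
    Given a hash map containing the key "name"
    When I delete the key "name"
    Then looking up "name" returns nothing
```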
I write these first since they are the features I need to implement. As I develop, I am actively working toward completing these high-level goals one step at a time. In fact, since this is a chained hash map, let's write one more test, because we thought ahead:
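That extra scenario would cover the chaining itself: two keys landing in the same bucket must not clobber each other (wording illustrative):

```gherkin
  Scenario: Two keys that hash to the same bucket
    Given a hash map containing two keys that collide in the same bucket
    When I look up either key
    Then I get back the value stored under that key, not the other one
```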
I start at the top and work my way down. Notice how these scenarios are agnostic to implementation: they point only toward satisfying the client, not toward low-level details. As I progress, I write the step definitions that fulfill the scenarios I have laid out. Best of all, as I move from one feature to the next I can be sure that I have not broken previously working functionality. When I'm done, I have assured my project's completion and my professor's (customer's) happiness. Be sure to read up on your framework, because each one specifies its scenarios differently, and each glues those scenarios to its test code differently as well.
Throughout school I remember the feeling of being finished with an assignment, only to begin my manual testing before submission. I would click around, type some input, and read back the output to confirm it was correct. Then I would feel horror in my stomach when a previous feature now failed. My stress level would instantly rise, and I would spend an inordinate amount of time in a loop: fix a feature, only to find out, through more manual testing, that I had broken the program somewhere else. Once one gets into the habit of acceptance testing, that loop changes drastically, and for the better! The stress of breaking some far-off part of the system is mitigated by instantaneous feedback when it happens. We know immediately if a feature was implemented incorrectly because a test will tell us.
The next idea I wish I had known about in college is Test-Driven Development (TDD). This is testing at a lower level than acceptance testing: TDD is the process of testing each individual, modular piece of code in the system. In fact, it's not just testing the code, but testing the code before it is written. Don't worry, it's not as bizarre as it sounds. When I begin to write a new class or a new method, I first write a failing unit test. I specify, somewhat like an acceptance test's scenario, what the module should do. The unit test is agnostic to implementation; it just checks that the implementation works. I watch the test fail. Then I implement. If my first try does not work exactly as I intended, I immediately receive feedback on what went wrong. Why is this a good idea? Where acceptance testing ensures the happiness of your client, test-driven development ensures the happiness of yourself and your group. If my test passes, I am assured that this modular piece of code conforms to my automated test's standards. Writing the test first forces the implementation to be flexible enough to run independently in a series of unit tests. That flexibility goes a long way. In fact, when I write unit tests I don't expect the flexibility to pay off initially; I expect it to pay off over time, in ways I cannot yet imagine.
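To make the red-green rhythm concrete, here is a hedged Ruby sketch; the Stack class and its test are purely illustrative, not from any project mentioned here:

```ruby
require "minitest/autorun"

# Written FIRST: this test fails until Stack below exists and behaves.
class StackTest < Minitest::Test
  def test_push_then_pop_returns_the_pushed_value
    stack = Stack.new
    stack.push(42)
    assert_equal 42, stack.pop
  end
end

# Written SECOND: just enough implementation to make the test pass.
class Stack
  def initialize
    @items = []
  end

  def push(item)
    @items.push(item)
  end

  def pop
    @items.pop
  end
end
```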
I might be speaking only for myself, but I remember writing monster functions as a student. When I changed one little line in a twenty-line function, it would break one case but not the others. The changes always seemed simple enough, but they never were. If I had written tests, I would have known immediately what had broken and where. A suite of unit tests is excellent at pinpointing where a code change went wrong. Couple unit tests with acceptance tests and your software's reliability and flexibility increase; and as they increase, your stress level goes down, since the development process contains fewer bugs and fewer surprises.
The Single Responsibility Principle (SRP) goes hand in hand with TDD. SRP is one of the SOLID principles, but I want to discuss it in isolation. As a student, one problem I had with my code, which I can now reflect on, was large, unwieldy methods and classes with far too many responsibilities. It is important to let TDD drive SRP in your code. When you write a new test, ask yourself: is this testing one responsibility? If not, how can I rethink my design so that it does? The flexibility afforded by TDD is multiplied by SRP. When we divide responsibilities fairly, one per module, we do not hide side effects in our code; we handle our use cases explicitly and purposefully. When we do not hide side effects, we do not have surprises lying in wait. With TDD we can point to a single responsibility that has broken down in the system and know with confidence that the broken test does only one thing; it has only one responsibility.
SRP is another tenet of flexible design. When we break apart our responsibilities, we allow ourselves to change any one use case rather easily. We isolate the responsibility we want to change, find its test, and change the assertions to match the new use case. We watch the test fail. We find the corresponding module to change, knowing the use case is safe to change because it is isolated. We make the test pass. We run the unit test suite and watch it pass; we run the acceptance test suite and watch it pass. All the while we can think about the surprises and stress we have avoided.
1) Martin, Robert C. Agile Software Development: Principles, Patterns, and Practices. New Jersey: Pearson Education, 2003. p. 13.
Day In Review
After the IPM I attacked a bug in the PDF generation for Work Orders. When the app split into two apps (Hosemonster and Limelight-Hosemonster-UI), something broke in the templating for the PDF generator. I felt accomplished when I tracked the bug down, because I was able to restore the previous functionality and remove a superfluous function from the namespace. The solution felt cleaner, and I was proud of that.
I also paired with Wei Lee on a bit of refactoring in preparation for plugging in the graphs we had been working on for quite some time.
Friday, February 3, 2012
Day In Review
I've also run into some trouble. Before, I kept my in memory data store on the top level of the server. Now, I need to find a different way to persist the state of Tic-Tac-Toe games since the top level is agnostic to the implementation and can no longer hold the data store. I've been thinking over the possibilities, but it's hard to choose something to refactor towards. Once I have my Packet workflow refactored I will be tackling this decision.
Wednesday, February 1, 2012
Day In Review
One atrocity of the code base was the state of my tests. I ran the tests when I first pulled the project and they froze. Why? They were waiting on STDIN input to move forward with specific tests. This was no good, so I started hand-rolling mocks and repairing the tests. Once I felt comfortable with the tests in place, I moved on to refactoring.
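The usual cure for STDIN-bound tests is to inject the input stream instead of reading the global one; a hedged Ruby sketch of the idea (class and method names are illustrative):

```ruby
require "stringio"

# Inject input/output streams; production uses the real STDIN/STDOUT,
# tests substitute StringIO so nothing ever blocks waiting on a keyboard.
class Prompt
  def initialize(input: $stdin, output: $stdout)
    @input = input
    @output = output
  end

  def ask(question)
    @output.puts question
    @input.gets.to_s.chomp
  end
end

# In a test:
prompt = Prompt.new(input: StringIO.new("Alice\n"), output: StringIO.new)
prompt.ask("Name?")  # => "Alice"
```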
I've been thinking about a way of metaphorically naming my server components at a high level. One idea I like is to use business metaphors, as if the server were an office. The forward-facing ServerSocket piece would be the 'Receptionist.' The receptionist would pass the socket off to a 'Middleman,' which I was originally calling the Dispatcher. The middleman would then pass off to the 'CEO.' The CEO metaphor probably won't stick; I don't like it. The CEO is currently the high-level interface to the business logic that is ignorant of the framework. The naming scheme is a work in progress.
Monday, January 30, 2012
Clojure Code in Java
Let's look at a simple example.
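A hedged sketch of such a gen-class namespace; the package, class, and function names here are assumptions, not the original code:

```clojure
(ns test.Speak
  (:gen-class
   :name test.Speak
   :methods [[hello [String] String]]))

;; Exposed methods are declared above and implemented with a '-' prefix.
(defn -hello [this name]
  (str "Hello, " name))
```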
What we have available now is a package test with a class Speak. We can instantiate an instance of Speak and call hello("my name") on it, and I'm sure you can guess what happens. One caveat: we must prepend our function names with '-', and we must declare our functions in the top gen-class block. We can get a little fancier, too. Here's the same example with a global variable.
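A hedged sketch of the stateful variant, again with illustrative names; the single :state field holds an atom wrapping a map, as discussed below:

```clojure
(ns test.Speak
  (:gen-class
   :name test.Speak
   :init init
   :state state
   :methods [[setName [String] void]
             [speak [] String]]))

;; :init returns [superclass-ctor-args state]; the state is one atom
;; holding a map, since gen-class allows only a single state field.
(defn -init []
  [[] (atom {})])

(defn -setName [this name]
  (swap! (.state this) assoc :name name))

(defn -speak [this]
  (str "Hello, " (:name @(.state this))))
```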
Now we can set the name on our instance before we call the speak function. Notice the :state declaration in the top block. We can have only one state variable, so use it wisely; here, I intend to use it only to hold the name to say hello to. A good pattern is to store a map of all the variables one might need in the state option, since we are limited to a single state variable.
Day In Review
The second half of the day was dedicated to refactoring. The new, edit, and view pages for our models were all basic and contained many repeated portions. To stay DRY and keep the system simple, we started refactoring to collapse the new, edit, and view pages of each model into one page that tweaks itself based on an action parameter passed down. I liked this refactoring because it gave us a pattern to follow as more and more models are introduced to the system.
Another refactoring was to collapse the Interactor's update and create functions into one save function. Again, these two functions shared many of the same lines, and to stay DRY we collapsed them into each other. The change was fairly simple, since we need to create when the model does not have an id and update when it does; that was the only difference between the two functions.
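A hedged sketch of that collapse (the class, repository, and hash-shaped model are illustrative, not the project's actual code):

```ruby
class Interactor
  def initialize(repository)
    @repository = repository
  end

  # One entry point: create when there is no id, update when there is.
  def save(model)
    if model[:id].nil?
      @repository.create(model)
    else
      @repository.update(model)
    end
  end
end
```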
Sunday, January 29, 2012
Using External Dependencies For Specific Use Cases
ActionMailer is a gem most Rails programmers are familiar with. For a story I was completing I needed to email notifications, and I knew I was not going to implement a mailing system in a reasonable time period. So I introduced ActionMailer and wrapped its functionality. I had the system under test and felt pretty confident about what I was using ActionMailer for. My tests were green. Then the code went live, and I quickly learned that I had made a big mistake: I started receiving ActionMailer-generated exceptions. What had happened? I was green when I committed!
Well, I was green, and it was a false positive. ActionMailer behaves differently in test and in production. In my case I leaned on ActionMailer and got burnt. I took an array of email addresses and joined them with commas to produce the recipient list for my generated mail. When the array was empty, this produced an empty string. When I sent an empty string to ActionMailer in test, it essentially disregarded it. Great, I thought, ActionMailer handles my empty email list case! Wrong!
In production, an empty string as a recipient list causes ActionMailer to raise an exception, and when the operation is fairly common, that produces a lot of exceptions. It was my fault for not narrowing my usage of ActionMailer to the specific cases in which it was actually needed; instead I let it see my empty-string case. The moral of the story: external dependencies are for specific use cases and nothing more.
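A hedged sketch of the narrowed usage, with the mailer stubbed as a callable so the guard logic stands alone (the Notifier class and its interface are illustrative, not the original wrapper):

```ruby
class Notifier
  def initialize(mailer)
    @mailer = mailer  # e.g. a thin ActionMailer wrapper; any callable here
  end

  # Handle the empty case in OUR code; the external dependency only ever
  # sees the specific input it was brought in for: a non-empty recipient list.
  def deliver_to(addresses)
    return :skipped if addresses.empty?
    @mailer.call(addresses.join(","))
  end
end
```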
Friday, January 27, 2012
Day In Review
Specifically, Myles and I got the iteration new/edit form working as a modal. This keeps the workflow contained on the storyboard page, which was the goal of our work. The changes actually pointed us toward a missing abstraction, and implementing that abstraction for iteration presentation felt good. It made the change painless and allowed the existing code to stay largely intact.
The second half of the day we revisited the storyboard column sorting bug. We fixed the bug; however, we introduced redundant behavior (sorting when it is unnecessary) and are having trouble cutting it out.
Wednesday, January 25, 2012
Day In Review
The second half of the day I dedicated to Hosemonster and the graphing functionality. I have been spiking out ways to generate curve-fitting functions from a set of data points. In the morning, out of curiosity, I entered a sine function and got a jagged, heart-monitor-looking output. I hypothesized that calculating more points would smooth the line, and it did: I had a continuous sine curve. That's when I got really excited. What a cool thing to have happen.
The next problem was figuring out how to calculate the fitting function itself. The solution involves creating an arbitrary-length polynomial whose degree is determined by the number of points supplied. Building the polynomial requires Gaussian elimination to solve for the coefficients of the function. Wai Lee and I were able to get a polynomial class under test, use the test data provided, plug in the polynomial generator, and create fitted curves for the data points.
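A hedged sketch of the technique described, assuming an exact-fit polynomial of degree n-1 through n points (the function name is illustrative, not the project's polynomial class): build the Vandermonde system and solve it with Gaussian elimination.

```ruby
# Fit coefficients a0..a(n-1) of a polynomial passing exactly through
# n points, by Gaussian elimination on the Vandermonde system.
def fit_polynomial(points)
  n = points.size
  # Augmented matrix rows: [x^0, x^1, ..., x^(n-1) | y]
  m = points.map { |x, y| (0...n).map { |p| x**p.to_f } + [y.to_f] }

  # Forward elimination with partial pivoting for numerical stability.
  (0...n).each do |col|
    pivot = (col...n).max_by { |r| m[r][col].abs }
    m[col], m[pivot] = m[pivot], m[col]
    ((col + 1)...n).each do |row|
      factor = m[row][col] / m[col][col]
      (col..n).each { |c| m[row][c] -= factor * m[col][c] }
    end
  end

  # Back substitution yields the coefficients.
  coeffs = Array.new(n, 0.0)
  (n - 1).downto(0) do |row|
    sum = ((row + 1)...n).sum { |c| m[row][c] * coeffs[c] }
    coeffs[row] = (m[row][n] - sum) / m[row][row]
  end
  coeffs
end

fit_polynomial([[0, 1], [1, 3], [2, 9]])  # => [1.0, 0.0, 2.0], i.e. 1 + 2x^2
```

Calculating more sample points from the resulting polynomial is exactly what smooths the plotted curve, as with the sine experiment above.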
Tuesday, January 24, 2012
Clojure Map to Ruby Hash Kata
Saturday, January 21, 2012
Day In Review
I had gone from the highest level, an incoming HTTP request, to the lowest level, talking to the database, all in one controller method. Instead of doing this, I pulled the behavioral code into an interactor that mapped to an existing model. This turned out to be great for the codebase, because as the interactor came together I began to notice responsibilities living hodge-podge around the system that ought to be shifted to this new abstraction. It was like taking a weight off the shoulders of many system components, and it helped reduce code duplication, which is always a win.
Tuesday, January 17, 2012
Clojure records, types, and protocols
Monday, January 16, 2012
Day In Review
Sunday, January 15, 2012
Day In Review
Thursday, January 12, 2012
Namespace Your Javascript
Day In Review
Tuesday, January 10, 2012
Day In Review
Monday, January 9, 2012
Day In Review
The second portion of the day I spent working on the Clojure Koans. Thankfully, I have had the opportunity to pair briefly with Myles on Metis, an ORM for Clojure. Unfortunately, even after pairing, Clojure still feels a bit weird to me. I'm purposefully moving slowly and constantly referencing the documentation to try to get a good feel for Clojure.