Wednesday, February 29, 2012
Day In Review
Tuesday, February 28, 2012
Day In Review
Monday, February 27, 2012
Ruby's Functional Programming
Ruby is a fully object-oriented language; in fact, it is so dedicated to objects that every entity in the language is an object. After hearing that statement it might seem a little strange that Ruby also has functional support. Ruby's functional aspects are powerful and complete, which makes them worthwhile to learn. We can invoke operations such as reduce and filter with the use of code blocks. We can also pass functions around directly with the use of lambda and Proc objects. In this post I'll show you some of the more powerful tricks, point out their potential pitfalls, and document the experience with code to play with.
Each method call can be given an additional code block. If you don't believe me, put a block on the end of every method call that currently does not have one and watch as (almost) everything still works as intended (please don't really do that). I'll introduce code to show this, but first let's see a straightforward imaginary workflow:
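The original snippet didn't survive, but a minimal sketch of that workflow might look like this (the Employee struct, the EMPLOYEES list, and the names are illustrative, not from a real API):

```ruby
# Hypothetical employee lookup used throughout this example.
Employee = Struct.new(:id, :name)

EMPLOYEES = [Employee.new(1, "Ada"), Employee.new(2, "Grace")]

# Find an employee by id.
def get_employee(id)
  EMPLOYEES.find { |e| e.id == id }
end

# Return the name attribute of whatever object we are given.
def get_name(employee)
  employee.name
end

puts get_name(get_employee(1))  # => Ada
```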
Now, get_name just returns the name attribute of an object, as you might guess. If we wanted to use this to get the name of an employee we would first find the employee and then pass it in here as a parameter, as we have done above. Let's show the same workflow, but this time let's allow the code block to give us the name:
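A sketch of the block-based version (again with illustrative names; the key change is that get_employee now yields to a caller-supplied block):

```ruby
# Same imaginary workflow, now yielding the employee to a block.
Worker = Struct.new(:id, :name)

WORKERS = [Worker.new(1, "Ada"), Worker.new(2, "Grace")]

def get_employee(id)
  employee = WORKERS.find { |e| e.id == id }
  # If the caller gave us a block, hand the employee to it;
  # otherwise just return the employee.
  block_given? ? yield(employee) : employee
end

puts(get_employee(1) { |employee| employee.name })  # => Ada
```

Because the block decides what to do with the employee, any interaction can be built on top of the same lookup.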
Cool right? We can build any employee interaction we want off of get_employee using code blocks. This is an imaginary case, however, and code blocks aren't always the best option so use them wisely.
Code blocks are built into many of the standard library's methods, which lets us make use of functional ideas in everyday Ruby code. For example, let's look at an inject that sums all of the elements given, each multiplied by two.
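A minimal sketch of such an inject (the input array is illustrative):

```ruby
# inject threads an accumulator through the collection:
# each element is doubled and added to the running sum.
total = [1, 2, 3, 4].inject(0) { |sum, n| sum + (n * 2) }

puts total  # => 20
```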
These are powerful expressions because the intermediary steps (i.e. the summations and a single element's multiplication by two) are stateless. We simply put data into the inject and a single answer comes out with no side effects. Other such functional actions include (but are not limited to) reduce, collect, and reject. Ruby's Enumerable has lots of functional methods.
The last functional item I want to share is the use of closures. In Ruby we can make use of lambda, Proc, and method(:name) to create closures. They all appear to be very similar, but have subtle differences. For the sake of learning we will ignore the subtle differences and use Proc to explain the concept. Procs are objects that encapsulate executable code. With a Proc we can bundle up some code and pass a Proc object around until we are ready to call it. For a simple example let's look at the following:
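A minimal sketch of bundling code into a Proc and calling it later (the greeting is illustrative):

```ruby
# A Proc bundles up executable code; nothing runs until #call.
greet = Proc.new { |name| "Hello, #{name}!" }

# We can pass `greet` around freely, then call it when ready:
puts greet.call("world")  # => Hello, world!
```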
This should feel very similar to the code blocks we discussed earlier. That's because code blocks are a type of closure! Think of Procs as code blocks that can be held for later use. Let's explore closures a little more:
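The original example is gone, but a reconstruction of the scenario described below might look like this (the module and method names follow the discussion; the build method is an illustrative stand-in for however gen_bar was invoked):

```ruby
module Foo
  class Bar; end

  def self.gen_bar
    # The constant Bar is resolved in the scope where this Proc
    # is written (module Foo), not where it is eventually called.
    Proc.new { Bar.new }
  end
end

module Example
  class Bar; end

  class Bar
    def build
      Foo.gen_bar.call  # invoked from inside Example::Bar
    end
  end
end

puts Example::Bar.new.build.class  # => Foo::Bar
```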
What happened here? We invoked gen_bar from the Example::Bar object, so Bar.new should create a new Example::Bar, right? Wrong! Procs are always evaluated in the scope where they were declared. In this case the Proc was executed in the context of module Foo even though it was called from module Example. This is something to keep in mind as closures are passed between classes and modules.
Functional concepts in Ruby can make coding easier, cleaner, and more expressive. It's important to understand the concepts in order to use them correctly when a problem being faced could use a functional solution.
Day In Review
Sunday, February 26, 2012
(Retro) Day In Review
Thursday, February 23, 2012
Day In Review
Wednesday, February 22, 2012
Day In Review
Monday, February 20, 2012
Decision Deferment
Day In Review
Saturday, February 18, 2012
Day In Review
Friday, February 17, 2012
Day In Review
Wednesday, February 15, 2012
Day In Review
Monday, February 13, 2012
Day In Review
Sunday, February 12, 2012
Acceptance Testing For Student Projects
Thursday, February 9, 2012
Day In Review
Tuesday, February 7, 2012
Day In Review
Monday, February 6, 2012
(Retro) Day In Review
Students, Take Note
I put acceptance testing first in the list for a reason: it is a little harder to grasp than test-driven development, probably because it operates at a higher level. Acceptance tests "act to verify that the system is behaving as the customers have specified" (1). What does this mean to a student who has no customers? Think of it this way: as a student, you must approach assignments as if they were your job, as if your financial stability and your reputation were at stake. Your teacher is your customer. You want to make sure your customer is happy, and for that reason you want acceptance tests to assure their happiness. In fact, how can you assure yourself the highest grade possible other than by proving that you deserve it? How can you assure your customer's happiness other than by proving that their desires are fulfilled?
Of course you are saying, 'Well, duh, I want to get the highest grade possible.' Great. Pick an acceptance testing framework suited to your situation. Then begin to translate the requirements into high-level system scenarios. Think to yourself: how will the user use the system? Let's look at a quick example from when I was in school. One of my projects involved writing a chained hash map implementation. We needed to be able to add, delete, and look up within the hash map. Excellent. Here is an example of the high-level design.
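The original scenarios are lost, but a Cucumber-style sketch of them might have looked like this (the step wording is illustrative, not from the original assignment):

```gherkin
Feature: Chained Hash Map
  Scenario: Adding and looking up a value
    Given an empty hash map
    When I add the key "name" with the value "Alice"
    Then looking up "name" should return "Alice"

  Scenario: Deleting a key
    Given a hash map containing the key "name"
    When I delete the key "name"
    Then looking up "name" should return nothing
```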
I write these first since these are the features I need to have implemented. As I develop I am actively working towards completing these high-level goals one step at a time. In fact, since this is a chained hash map, let's write one more test because we thought ahead:
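A chained hash map must handle two keys landing in the same bucket, so a sketch of that extra scenario might be (the colliding keys are illustrative):

```gherkin
  Scenario: Two keys hash to the same bucket
    Given a hash map where the keys "Aa" and "BB" collide
    When I add "Aa" with the value "first"
    And I add "BB" with the value "second"
    Then looking up "Aa" should return "first"
    And looking up "BB" should return "second"
```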
I start at the top and work my way down. Notice how these steps are agnostic to implementation: they point only towards satisfying the client, not towards low-level details. As I progress I write the step definitions that fulfill the scenarios I have laid out. Best of all, as I move from one feature to the next I can be sure that I have not broken previously working functionality. When I'm done I have assured my project's completion and my professor's (customer's) happiness. Be sure to read up on your framework, because each one specifies its scenarios differently, and each glues its scenarios to its test code differently as well.
Throughout school I can remember having the feeling of being finished with an assignment, only to begin my manual testing before submission. I would click around, type some input, and read back the output to confirm it was correct. Then I would have this feeling of horror in my stomach when a previous feature now failed. My stress level would instantly rise and I would spend an inordinate amount of time in a loop: fixing a feature only to find out, with manual testing again, that I had broken my program somewhere else. Once one gets into the habit of acceptance testing, that loop changes drastically, and for the better! The stress of breaking some far-off part of the system is mitigated by instantaneous feedback when it happens. We know immediately if some implementation of a feature was done incorrectly because our tests will tell us.
The next idea I wish I had known about in college is Test-Driven Development (TDD). This is testing at a lower level than acceptance testing. TDD is the process of testing each individual modular piece of code in the system. In fact, it's not just testing the code, but testing the code before it is written. Do not worry, it's not as bizarre as it sounds. When I begin to write a new class or a new method, I first write a failing unit test. I specify, somewhat like an acceptance test's scenario, what the module should do. The unit test is agnostic to implementation; it just checks that the implementation works. I watch the test fail. Then I implement. If my first try does not work exactly as I intended, I immediately receive feedback on what went wrong. Why is this a good idea? Where acceptance testing ensures the happiness of your client, test-driven development ensures the happiness of yourself and your group. If my test passes, then I am assured that this modular piece of code conforms to my automated test's standards. Writing a test first forces the implementation that passes the test to be flexible enough to run independently in a series of unit tests. This flexibility goes a long way. In fact, when I write unit tests I don't expect the flexibility to pay off initially. I expect it to pay off over time and in ways I cannot yet imagine.
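The test-first rhythm can be sketched with a tiny example (a hypothetical Stack kata using Minitest from Ruby's standard distribution; the names are illustrative):

```ruby
require "minitest/autorun"

# Step 1: write the test first, before Stack exists, and watch it fail.
class StackTest < Minitest::Test
  def test_pop_returns_the_last_pushed_item
    stack = Stack.new
    stack.push(5)
    stack.push(9)
    assert_equal 9, stack.pop
    assert_equal 5, stack.pop
  end
end

# Step 2: only after seeing the failure, write the simplest
# implementation that makes the test pass.
class Stack
  def initialize
    @items = []
  end

  def push(item)
    @items.push(item)
  end

  def pop
    @items.pop
  end
end
```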
I might be speaking only for myself, but I can remember writing monster functions as a student. When I changed one little line in a twenty-line function, it would fix one case and quietly break others. The changes always seemed simple enough, but they never were. If I had written tests I would immediately know what had broken and where. A suite of unit tests is excellent at pinpointing where a code change went wrong. Couple unit tests with acceptance tests and your software's reliability and flexibility increase; and as reliability and flexibility increase, your stress level goes down, since the development process contains fewer bugs and fewer surprises.
The Single Responsibility Principle (SRP) should be thought of as going hand in hand with TDD. SRP is part of the SOLID principles, but I want to talk about this one principle in isolation. As a student, one problem I had with code, which I can now reflect on, was large, unwieldy methods and classes with far too many responsibilities. It is important to let TDD drive SRP in your code. When you write a new test, ask yourself: is this testing one responsibility? If not, how can I rethink my design so it does? The flexibility afforded by TDD is multiplied by SRP. When we divide responsibilities fairly, one per module, we do not hide side effects in our code. We handle our use cases explicitly and purposefully. When we do not hide side effects, we do not have surprises lying in wait. When we use TDD we can point to a single responsibility that has broken down in our system and know with confidence that the broken test is doing only one thing; it has only one responsibility.
SRP is another tenet of flexible design. When we break apart our responsibilities we allow ourselves to change any one use case rather easily. We isolate the responsibility to change. We find our test for the responsibility and change the assertions to match our new use case. We then watch as the test fails. We find the corresponding module to change. We know that the use case is safe to change because it is isolated. We pass our test. We run our unit test suite and watch as they pass. We run our acceptance test suite and watch as they pass. All the while we can think about the surprises and stress we had avoided.
1) Martin, Robert C. Agile Software Development. New Jersey: Pearson Education, 2003. Pg 13.
Day In Review
After the IPM I attacked a bug in the PDF generation for Work Orders. When the app split into two apps (Hosemonster and Limelight-Hosemonster-UI) there was a weird break in the templating for the PDF generator. I felt accomplished when I tracked the bug down, because I was able to restore the previous functionality and remove a superfluous function from the namespace. The solution felt cleaner and I was proud of that.
I also paired with Wei Lee on a bit of refactoring in preparation for plugging in the graphs we had been working on for quite some time.
Friday, February 3, 2012
Day In Review
I've also run into some trouble. Before, I kept my in-memory data store at the top level of the server. Now I need to find a different way to persist the state of Tic-Tac-Toe games, since the top level is agnostic to the implementation and can no longer hold the data store. I've been thinking over the possibilities, but it's hard to choose something to refactor towards. Once I have my Packet workflow refactored I will tackle this decision.
Wednesday, February 1, 2012
Day In Review
One atrocity of the code base was the state of my tests. I ran the tests when I first pulled the project and they froze. Why did they freeze? They were waiting for STDIN input to move forward with a specific test. This was no good, so I started hand-rolling mocks and fixing the state of my tests. After I felt comfortable with the tests in place I moved on to refactoring.
I've been thinking about a way of metaphorically naming my server components at a high level. One idea I like is to use business metaphors, as if the server were an office. The forward-facing ServerSocket piece would be the 'Receptionist.' The receptionist would pass the socket off to a 'Middleman,' which I was originally calling the Dispatcher. The middleman would then pass off to the 'CEO.' The CEO metaphor probably won't stick; I don't like it. The CEO currently stands for the high-level interface to the business logic, which is ignorant of the framework. The naming scheme is a work in progress.