Tuesday, 26 February 2013

Testing Strategy part 2 - Unit tests

Following on from looking at code coverage and the central question of "How can you demonstrate that your tests are correct?", I'd like to consider another aspect of Bayesian evidence - the tests themselves.

Taking the viewpoint of strong evidence and weak evidence again, I'd like to examine what level of evidence various tests provide. Several types of test are typically used in software development, each providing a different level of granularity and a different balance of focus (developer-centric, performance-centric, business-centric, etc).

At the bottom of this stack is the most prevalent test type - unit tests. A unit test is a developer-centric test intended to be the most granular of all: each test should have an indivisible unit of functionality/behaviour as its subject.

Taking a Bayesian perspective, I would characterise a single passing unit test as weak evidence of system correctness. Conversely, a single failing unit test is strong evidence of the system being incorrect. To understand this viewpoint, consider a typical case of hundreds of unit tests. Each test was written by a developer, usually with little or no direct input from other sources, with the aim of demonstrating that a particular unit of code works as expected.

These tests do not show that the system as a whole functions together, and they do not show that the system works as the customer expects it to. Instead, what is demonstrated by a full suite of passing unit tests is that each individual unit of behaviour works as the developer(s) expected, which I take as weak evidence of overall system correctness. A single failure in this suite of tests shows that some functionality isn't working as the developer(s) expected, a very strong piece of evidence that the system is not correct.
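The asymmetry between weak and strong evidence falls straight out of Bayes' theorem. The numbers below are purely illustrative assumptions (a buggy system still passes most of its unit tests, since each test only exercises one small unit), but they show the shape of the argument:

```javascript
// A toy Bayesian update. Assumed, illustrative likelihoods:
//   P(test passes | system correct)   = 0.99
//   P(test passes | system incorrect) = 0.90
function posterior(prior, likelihoodIfCorrect, likelihoodIfIncorrect) {
  const evidence = likelihoodIfCorrect * prior +
                   likelihoodIfIncorrect * (1 - prior);
  return (likelihoodIfCorrect * prior) / evidence;
}

const prior = 0.5; // agnostic prior on system correctness

// One passing test: the posterior barely moves (weak evidence).
const afterPass = posterior(prior, 0.99, 0.90); // ~0.52

// One failing test: P(fail | correct) = 0.01, P(fail | incorrect) = 0.10,
// and the posterior drops sharply (strong evidence of incorrectness).
const afterFail = posterior(prior, 0.01, 0.10); // ~0.09
```

A pass nudges belief in correctness from 50% to about 52%, while a single failure drags it down to about 9% - the same asymmetry described above.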

Where does this leave you? Taken in addition to code coverage, we now have two pieces of weak evidence for system correctness, but have yet to address the issue of test correctness. I'll attempt to move closer to this issue in future posts where I'll consider other types of test.

Further reading: If Bayesian reasoning is a fairly new concept to you, I can recommend An Intuitive Explanation of Bayes' Theorem

Friday, 22 February 2013

Blogs

While newsletters and podcasts are both good ways of getting digests of weekly news, you're likely to want to find out news faster in some subjects. For this, having a good collection of blogs that you follow helps.

It's important to keep your collection well curated. If you follow too many blogs then you are likely to find yourself in the position of being unable to read and process the latest posts on them all. A good feed aggregator is useful for this purpose.

I use Google Reader to aggregate the blogs I follow. My collection is mostly curated for blogs relating to Ruby, Ruby on Rails, and web programming in general. I also subscribe to the DZone feed, but this is less focussed on areas I am interested in and harder to keep up with, as a lot of content gets pushed through DZone.

Good blog suggestions and curation tips are appreciated.

Monday, 18 February 2013

Testing Strategy Part 1 - Is 100% Coverage Enough?

When discussing testing, a question that sometimes comes up is "How can you demonstrate that your tests are correct?". One particular answer that I've encountered for this question is the use of code coverage tools.

Friday, 15 February 2013

Podcasts

Technical podcasts come in a variety of formats. I'm particularly a fan of the 'panel discussion' style used for Ruby Rogues and Javascript Jabber (amongst what I'm sure are many others).

These tend to provide about an hour of discussion, perfect for my morning commute. They also tend to meander through a particular topic, which allows for interesting side-discussions.

Ruby Rogues and Javascript Jabber also both have a 'picks' section, which lets the panelists plug anything they've found interesting recently, from code libraries, to books, even video games at times.

Any other suggestions for staying occupied on a long commute?

Monday, 11 February 2013

Keeping the Javascript Beast Contained, Part 2: Events

Last time, I talked about containing DOM manipulation within Javascript. Today, I want to talk about a different aspect of containment - Javascript events.

A small Javascript application can get away without much structure to its events. Events can 'float around' the application code, and a developer can generally cope with understanding the interactions between separate events reasonably well.
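One hypothetical way to stop events floating around, once an application outgrows that stage, is to route every subscription through a single event bus, so all the interactions are visible in one place. A minimal sketch (the bus API and event names here are my own illustration, not from the post):

```javascript
// A minimal event bus: every subscription and emission goes
// through one object rather than ad-hoc handlers scattered
// across the codebase.
function createEventBus() {
  const handlers = {};
  return {
    on(event, handler) {
      (handlers[event] = handlers[event] || []).push(handler);
    },
    emit(event, payload) {
      (handlers[event] || []).forEach((h) => h(payload));
    },
  };
}

const bus = createEventBus();
let received = null;
bus.on("row:selected", (id) => { received = id; });
bus.emit("row:selected", 42); // received is now 42
```

The pay-off is that "who reacts to what" can be answered by looking at the bus's subscriptions, rather than hunting through the whole application.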

Friday, 8 February 2013

Newsletters

It can be quite hard to keep up with the latest news in your field at times. We've been finding various technology newsletters useful to get a digest of some interesting weekly news.

Our particular specialties within web applications are nicely covered with a selection of newsletters from CooperPress:
  1. Ruby Weekly
  2. Javascript Weekly
  3. HTML5 Weekly
What do you use to keep up to date?

Monday, 4 February 2013

Keeping the Javascript Beast Contained, Part 1: DOM Manipulation

Gradient, our product, has within it several Javascript-heavy components. Each of these is structured and functions quite similarly to a document-presenter desktop UI, with a visual hierarchy, top-level messages, routing components and distinct widgets at the leaf-nodes of the hierarchy tree.

Having an organisation like this in Javascript leads to issues of containment: ensuring that each widget's DOM manipulation is encapsulated to just the screen area belonging to that widget. While upcoming browser features may make this easier in the future, currently the encapsulation requires attention to detail and some HTML hooks.
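As a sketch of what such an HTML hook might look like (the data attribute convention and helper below are my own illustration, not necessarily how Gradient does it), each widget can derive a root selector from a marker attribute and scope every DOM lookup underneath it:

```javascript
// One hypothetical containment convention: each widget's root
// element carries a data-widget attribute, and every selector the
// widget uses is prefixed with that root, so queries can never
// escape the widget's own subtree.
function scopedSelector(widgetName, selector) {
  return '[data-widget="' + widgetName + '"] ' + selector;
}

// e.g. document.querySelectorAll(scopedSelector("sidebar", ".item"))
// only matches .item elements inside the sidebar widget's subtree.
scopedSelector("sidebar", ".item");
// → '[data-widget="sidebar"] .item'
```

The attention to detail comes in never calling document-wide queries directly from widget code - every lookup goes through the widget's scoped helper.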