Programming 365

In 2012, I spent a year taking a picture every day. I was cajoled, sweet-talked and tortured into this by one of my brothers. Around the web, this is commonly known as a project 365 (see mine here, but there are loads of truly stunning ones out there – including my brother’s).

In 2014, I want to do a different P-365 – programming 365. The idea is to write some code every day and make it available publicly. I’ll be committing mine to my github account. For an explanation of what github is, please click here.

The “rules” are simple:

  • Write some code every day
  • Make it available publicly

I found that the photography project forced me to explore lots of different subjects, as well as experiment with techniques. It also really helped me master (or at least understand) the DSLR I had bought at the start of that year. I am hoping for something similar with “programming 365”.

So…. what kind of code am I going to write?

The honest answer is that I haven’t got a clue. It’s going to range from exploration (new languages, compilers, development environments, mobile/embedded, databases etc.) to techniques (TDD, BDD, pair programming). The individual days may well be completely stand-alone (“hello world in ruby”) or part of a theme (e.g. “Python” or “NoSQL databases”). I might even have a stab at a larger application. After all, a year is a long time…

What if I don’t have access to a computer?

Hopefully this won’t be too much of a problem with the amazing all-round development toolkit that is the Surface. However, for days when I don’t have a computer I’m going to hand-write code (or at least pseudo-code) on paper and transfer it into machine form later on. I’ll also post the original handwritten stuff online for a laugh.

Have a go too

Part of what has made the project so much fun in the past is to do it together with other people. See what they do, comment on it and learn from it. “Enjoy the journey together” (for lack of a less cheesy phrase). If you want to have a go at this slightly weird experiment, please let me know. I’ll be starting on the 6/1/2014 and will be posting weekly updates on what I coded here (as well as the uploads over on github). Of course, you don’t need to begin at exactly the same time: Join whenever you fancy some fun programming! :o)


Javascript on Devices #1 – 17/12/13

Tuesday saw the first meetup of a new group: It’s all about running javascript on devices (e.g. the raspberry pi). In the past, this would have been quite an unusual meetup. Traditionally, embedded devices have always relied on native code to squeeze the most out of the computing power available on the device. However, javascript definitely seems to be the cool kid on the block and its community is innovating in leaps and bounds.

For a native developer with (a little more than a) passing interest in web development this kind of meetup is a big culture shock. The technologies and tools bear a passing resemblance to what I’m used to in my traditional (mainly desktop-)client-server world, but everything feels a little foreign. Maybe because of that it also feels fresher and more exciting – like they are really pushing the envelope. Definitely lots of new stuff to research and think about (for me)!

For guys who do a lot of work with javascript or are familiar with embedded devices, these introductory talks may have been a little basic, but for me they were perfect. There were two talks:

  • A talk about the internet of things by Vlad
  • A talk about pushing javascript on to embedded devices with a simple git-push by Alex

The internet of things

Vlad’s presentation was split into two parts. He first talked about how what we currently have is more of an intranet of things, and about the potential of moving towards a true internet of things. He then spoke a little about the setup of his startup and how its work helps in moving towards the internet of things.

The intranet of things was Vlad’s metaphor for describing the current state of proprietary networks, protocols, standards and APIs that are prevalent in the embedded devices community. He argued that a move towards a REST- & HTTP-based internet of things would bring big benefits to embedded devices:

  • ability to transfer experience of web development
  • proven scalability
  • reduction in development cost (ease of use, ability to reuse code for similar problems)

Another benefit is of course the ability of different devices to communicate and interoperate with each other. In my mind, this is the key benefit, because pooling devices enables the kind of collaboration where the whole is greater than the sum of its parts: It is the kind of thing that turns a nicer development environment into a true platform on which higher-level applications can be built:

For example, one of the things that Vlad mentioned is that we won’t need to physically search for our shoes in the morning and instead we will Google them. However, what if we could rely on a preference-based algorithm to physically select our entire wardrobe in the morning (entailing collaboration between lots of different individual units of clothing)?
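To make the idea a little more concrete, here is a toy sketch of my own (not from Vlad’s talk) of what “a device speaking plain HTTP” might look like: a made-up temperature sensor answering GET requests with a JSON reading over raw POSIX sockets. The port and the JSON shape are invented for illustration, and the error handling is deliberately minimal.

    // Toy sketch: a "device" exposing its latest sensor reading over HTTP.
    // Port and JSON shape are made up; no real error handling.
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <string>

    int main() {
        int server = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);  // hypothetical device port

        bind(server, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
        listen(server, 5);

        while (true) {
            int client = accept(server, nullptr, nullptr);
            if (client < 0) continue;

            char request[1024] = {};
            read(client, request, sizeof(request) - 1);  // assume a simple GET

            // A real device would query its hardware here.
            std::string body = "{\"temperature_c\": 21.5}";
            std::string response =
                "HTTP/1.1 200 OK\r\n"
                "Content-Type: application/json\r\n"
                "Content-Length: " + std::to_string(body.size()) + "\r\n"
                "Connection: close\r\n"
                "\r\n" + body;
            write(client, response.c_str(), response.size());
            close(client);
        }
    }

The point of the REST/HTTP argument is that anything that can speak HTTP – a browser, curl, another device – can now talk to this sensor without any proprietary protocol in between.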

He also talked a little about what may be necessary for this move towards HTTP (APIs etc.).

The second part of his talk was more focussed on his startup’s work towards this. This work has been to write REST APIs for particular projects as well as providing an infrastructure around these APIs. Interactions with the devices are supported at various levels of sophistication (QR code; RFID tag; sensors or raspberry-pi-like machines).

One of the interesting points was that his main customers are big companies who are looking to protect their (and their clients’) data. Therefore, the normal deployment option is a private cloud. Unfortunately, data security, identity and sharing are of course key concerns when establishing a platform as described above, and private-cloud-style deployment may limit its potential.

In my mind, the main benefits are therefore the reduction in development cost and the chance for a wider community to offer services to companies within their private clouds.

Deploying (javascript) more easily on to embedded devices

Alex’s talk had three parts to it:

  • Rise of javascript as a language and rise of devices

    This part talked about all the bits that you might expect from a young development community brimming with self-confidence: Describing milestones in the development of the language as Alex sees it (from its original invention, to its use in gmail, to “JavaScript: The Good Parts”, to node.js, to phonegap, to nodecopter), showing us benchmarks of how javascript execution times have improved over time, etc.

    Alex also described the evolution of devices: how they are getting much more powerful and how we are reaching the point where squeezing performance out of every clock cycle is no longer as important.

  • Using javascript for displays on bins

    The second part of the talk then described how Alex and his team first got involved in embedded javascript development. This was with a project for displaying ads on bins. The team here worked mainly on displaying ad content to end users. As part of this, they had to solve several interesting problems, such as securely interacting with the machines, finding a way of remotely updating content as well as the underlying software, dealing with network reliability issues, etc.

  • Use the experience gained for resin.io startup

    In the final part of his presentation, Alex described how this experience led them to develop an infrastructure that makes it easy to deploy javascript to devices. The service was compared to heroku (which I hadn’t heard of before), which offers a similar service for web applications (i.e. the ability to deploy to a hosted environment with a simple git push). Alex also described the technology stack they put together for interfacing with devices. Unfortunately, I can remember only some small parts of the infrastructure – in particular: docker and openVPN.

Altogether I found Alex’s talk very interesting and informative – with lots of pointers in new directions where I’d like to learn more!

London C++ User-Group-Meeting – 13/12/2013

Last Thursday was my first time at London’s C++ UGM. This is a fairly small and young group – it looks like it has only been going for a short while (maybe four or five meetups so far). Having said that, the topics are obviously very relevant to me (and everyone at work), so I’m going to try and cajole some more people into going. Also: I’m sick and tired of being dominated by a crowd of Java programmers or web/mobile guys wherever I go. C++ can do cool stuff too (yes – honestly it can!).

The talk itself was about a testing framework called Catch, originally developed by Phil Nash. TDD seems to be very much a hot topic in software circles at the moment. A large part of the change towards TDD is cultural, which means things like discussions about changing behaviour and processes, and buy-in from product owners, are quite important.

HOWEVER, that is not to say that the technical side of it is unimportant – and unit testing in C++ still feels a little awkward to me. So new ideas in a new testing framework are definitely welcome. (Though there is hardly a dearth of frameworks out there…)

And interesting and good ideas this framework definitely has. I’ll just quickly go through the ones I picked up during the talk in bullet points (I haven’t tried it out yet, so can’t share my own experience):

  • It seems to be really lightweight and easy to get started. All it needs is an include of a single header and a #define in one file so that it generates the main for you.
  • I really like the way that test asserts work. Instead of the standard EXPECT_EQ(actual, expected) (Google Test syntax), where you are never sure which one was the actual and which one the expected, it uses a style similar to normal asserts, i.e. REQUIRE(add(3,1) == 4). I also like the output this produces when the test fails: There is some very clever expression-template machinery going on here to produce both the actual numerical values as well as the original line of code triggering the failure.
  • Instead of fixtures, it introduces sections. These are ways of splitting one test into several sub-sections, all of which will be run (and set up) independently. While it is quite heavy on macros, this gives the test case code a nice coherent feel.
  • It works particularly well when used in conjunction with the “given, when, then” style for setting up your test cases (there is a small sketch of both styles after this list). It looks like this makes it very easy to express the intention of a test case beyond just using the name of the test case. Since this is one of the biggest challenges in reading/interpreting tests, anything which helps here is particularly welcome!
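To illustrate, here is a minimal sketch of what a Catch test file could look like, based on the macros described in the talk and in Catch’s documentation; a std::vector stands in for real code under test, and I haven’t compiled this myself yet:

    // One #define in one file and Catch generates main() for you.
    #define CATCH_CONFIG_MAIN
    #include "catch.hpp"

    #include <vector>

    TEST_CASE("vector grows and shrinks", "[vector]") {
        std::vector<int> v = {1, 2, 3};  // shared setup...

        SECTION("push_back increases the size") {
            v.push_back(4);              // ...run fresh for each SECTION
            REQUIRE(v.size() == 4);
        }
        SECTION("pop_back decreases the size") {
            v.pop_back();
            REQUIRE(v.size() == 2);
        }
    }

    // The BDD-style macros for the "given, when, then" way of writing tests:
    SCENARIO("vectors can be emptied", "[vector]") {
        GIVEN("a vector with some items") {
            std::vector<int> v = {1, 2, 3};
            WHEN("clear() is called") {
                v.clear();
                THEN("the vector is empty") {
                    REQUIRE(v.empty());
                }
            }
        }
    }

If one of the REQUIREs failed, Catch would print both the expression as written and its expanded values – exactly the clever-output behaviour described above.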

Altogether some interesting stuff here. I’ll definitely give Catch a shot for some of my projects at home and will report back afterwards.

London Software Craftsmanship Community Roundtable – 10/12/13

On Tuesday, I went to YALSCCRTE (yet another London Software Craftsmanship Community round table evening). I described the format in an earlier post, so, without further ado, below are some notes on the evening.

Lightning talks

Tribal leadership
This was a talk (slides here) introducing us to some of the concepts behind a book (“Tribal Leadership”) which analyses group dynamics and relationships in teams and organisations. Based on that analysis, the authors seem to have arrived at a CMMI-like (i.e. “it’s got stages”) framework for categorising team behaviour. Quite an interesting talk, and definitely fairly relevant to Agile software development and its principles of distributed responsibility. The book made it on to my reading list – so I’ll give it a full review when I get round to reading it.

Three types of leadership
Seems like it was an evening for talking about leadership. This talk was more about the types of leadership necessary in a software project. It postulated that there are three different kinds of leadership: technical architecture, project management and team leadership. The talk also went a little bit into different ways of organising this (individual vs distributed responsibility). This tied in very much with the talk about tribal leadership. (Maybe distributed responsibility leads to a greater sense of team engagement and collaboration?!)

A Benefit, Complexity and Danger approach to estimation
This talk was about enriching traditional estimation methods with measures of benefit, complexity and danger. It is described in a fair amount of detail here. This sounds like quite an interesting approach – especially for facilitating communication between product owners and developers. I think you would still need to converge on an effort score in the end, though, because that is what the product owner ultimately cares about.

Advanced Infrastructure for Continuous Delivery
This was a talk about the importance of establishing feedback loops in your development effort and how important a continuous delivery system (builds, tests, deployments) is to such a feedback loop. It posed the question of why a remote team called “IT” should be in charge of that infrastructure and suggested that dev teams might use the same development methodology to administer the infrastructure themselves.

How do you deal with a horrendous codebase & resistance to change?
This talk posed the above question to the group and facilitated a little bit of a discussion on the topic. For me, personally, the single most helpful comment was “always leave the codebase a little bit cleaner than before you touched it”. This is such a true statement and should be our guiding principle in practice – no matter the current level of understanding or ability!

Two discussions
After these lightning talks, there were then two longer discussions:

  • Domain Driven Design – how to achieve it
  • Greenfield project – how to do it properly

I took part in the latter. A large part of the discussion was heavily influenced by the “Growing Object-Oriented Software, Guided by Tests” book I am currently reading. It’s all about BDD, walking skeletons and such things. I’ll dive a little deeper into this subject matter when it comes to reviewing the book. One of the most helpful ideas during the discussion (which is not in the book) was to write a “shitty prototype” to get familiar with the subject matter and entities – and then to THROW THE PROTOTYPE AWAY and do it properly (as they said in the discussion, the important thing is the throwing away…)

Summary

Altogether this evening felt a little bit less “shiny and new” than the last (first) one that I went to. Of course that is to be expected. I guess you just know the format, have met the people before etc.
Having said that, the discussions were still very interesting and very much worthwhile. It’s great to spend an evening sharing ideas in a fairly free-format way with lots of passionate devs.

Multi-Threading: My study plan

One of the things I’m trying to understand better is multi-threading. This is one of those concepts where you can go as far down the rabbit hole as you have an appetite for.

I have always felt that my multi-threading knowledge is “ok” for my daily needs:
I understand the normal synchronization problems with multi-threading (deadlocks; race conditions). I have an understanding of the mechanics of locks and mutexes. I have some appreciation of Windows threads – syntax, how they are created/executed with the main program I work on, how to debug them etc.
However, those things have still left me with the uneasy feeling that I don’t really know what is going on.

For some time, I have therefore wanted to go back to basics and build up a better understanding of multi-threading from the bottom up. My current study plan looks as follows:

  1. Better understand synchronization concepts and classic synchronization problems in the abstract

    So you know about locks & mutexes… What about semaphores and condition variables? How do atomics fit into this? What about lock-free programming? So you know about deadlocks and race conditions… What about starvation? How do you solve typical synchronization problems? (E.g. dining philosophers; readers-writers problem)

    At the moment, my plan is to devour Downey’s Little Book of Semaphores for this step: take a whole lot of notes, solve the exercises in the book and then google any concepts I come across that I am not familiar with. (A first sketch of a hand-rolled C++ semaphore appears after this list.)

  2. Understand language specific threading concepts and implementations

    After understanding threading concepts in the abstract, it’s important to actually get round to doing some implementing (think: “learning by coding” or some similar nonsense). To do that, I first need to understand what concepts each language offers. For now, I’m planning to start with the language I am most comfortable in (C++). Here, the standard book on the topic (incorporating C++11 threading details) seems to be Williams’ C++ Concurrency in Action. For C#, I was planning on looking through Albahari’s free ebook on threading.

    (While the C++ book comes with lots of positive reviews, I haven’t really found any for the C# ebook. So this is taking a little bit of a chance.)

    For javascript, I was only planning to have a little bit of a google around.

    (I imagine there will be a fair number of differences here depending on what execution environment we are talking about…)

  3. Implement the problems described in Downey’s book

    After that, it’ll be time to actually get my teeth into implementing, in code, the problems I have looked at and solved in the abstract.
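As a first taste of steps 1 and 3 combined, here is my own (so far untested) sketch of a counting semaphore built from C++11 primitives – the C++11 standard library has no semaphore of its own, so attempting Downey’s exercises needs a hand-rolled one:

    // A counting semaphore built from std::mutex + std::condition_variable,
    // following the wait/signal semantics used in Downey's book.
    #include <condition_variable>
    #include <mutex>

    class Semaphore {
    public:
        explicit Semaphore(int initial) : count_(initial) {}

        void wait() {  // decrement, blocking while the count is zero
            std::unique_lock<std::mutex> lock(mutex_);
            cv_.wait(lock, [this] { return count_ > 0; });
            --count_;
        }

        void signal() {  // increment and wake one waiting thread
            {
                std::lock_guard<std::mutex> lock(mutex_);
                ++count_;
            }
            cv_.notify_one();
        }

    private:
        std::mutex mutex_;
        std::condition_variable cv_;
        int count_;
    };

A Semaphore(0) can then be used to make one thread wait for another’s signal – the basic building block for the rendezvous, readers-writers and dining philosophers problems in the book.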

Given that I am also planning on spending a fair amount of time on software processes, TDD and such Agile rubbish, I think getting through this list may well take till the end of January or so. I’ll add some posts on how I am getting on with this.

Review: “How Google Tests Software” by James Whittaker, Jason Arbon and Jeff Carollo

(Goodreads link here)

I really wanted to like this book. I did. Unfortunately, the reality is a little bit more complex than that. On the one hand, there are lots of really cool, useful ideas in here. On the other hand, I am not really enamoured with the style. It’s self-congratulatory and at the same time weirdly self-conscious. Finally, I think it tries to do too much in one book.

Let me tackle these points in reverse order (that way we end up with a nice set of useful ideas I picked up right at the end of the post!).

Too much for one book to shoulder

This book tries to do an awful lot. It tries to describe the various roles that testers at Google have; these form the main chapters of the book. Around this structure, it tries to fit in lots of other things: introducing people to the culture of testing at Google, describing software testing techniques (e.g. ACC, test tours), interview questions and advice, some important projects (e.g. test certification), some tools used during testing (e.g. Selenium WebDriver, record-playback tools) and a view on the future of testing at Google. Whoa – that is a lot of stuff! And it shows. For me, it really strained the structure everything is meant to fit into. Maybe this is due to me reading the book on the Kindle, but I constantly found myself surprised by the next bit of content, unsure of how things fitted together. Really good, well-thought-through parts were followed by “rather meh” parts.

Altogether, this book is a little bit like an intro course to Google testing, where the instructors try to cram all of the important bits in, and then a couple of important bits more, and then some…

Some notes on style and stuff

Normally, I don’t care too much about style, but there are some bits worth pointing out about this book which I found rather odd: I have never before seen a book include an interview with its own author and co-authors. It also talks a lot about how Google is leading the way in software testing. In a book about Google’s software testing written by Google employees, I think that is a little weird.

Anyway, here are my two cents on Google’s test practices: I think it is worth appreciating that they are leading the way on some levels (e.g. a lot of their tools and some processes/practices look pretty awesome), but on other levels I am not too sure: For example, the division of roles between SWE, SET and TE felt artificial to me (even though a long time is spent in the book justifying it).

Essentially, the process they have gone through is to start…

  • from a large company where there used to be little enthusiasm for automated testing / unit testing and a traditional division between testers and developers

and to transform this towards a more Agile-focussed practice where

  • the roles in the team are much more fluid (i.e. developers take responsibility for testing) and automated testing is held in high regard

by

  • throwing a large amount of money at the problem and using a specialist SET role as a crutch to show “normal developers” the importance of caring about quality

The “future of testing” as they describe it in the last chapter, then, is a future where they can start to throw away the crutch and get rid of the SET role.

The good bits

Enough with the book bashing. I did learn a lot from it after all. Here are the good bits.

  • General spirit

    While I don’t like the tone of the book, I do really like the spirit of the testing culture at Google: Make quality a feature of the main system and make everyone (especially devs) responsible for it. Automate as much of the testing as you possibly can because it scales best.

  • Test certifying a project

    I also like the process of test certification that the book describes (details on this can also be found here): Effectively, it is a way of giving projects within Google a measure of how they are progressing with their testing. It also gives them a set of steps for what they can tackle next on their way to testing mastery. Finally, it drives visibility of the effort across the organisation.

  • Test planning – the Attribute, Component and Capability (ACC) approach

    It’s been a little while since I wrote my last test plan (start of 2008 maybe?) and I know test case design processes have changed quite a bit (with the adoption of Agile). This seems an interesting way of minimising the test-planning effort. What I particularly like is the way it gives you a type of heat map of where you may want to focus your testing efforts. Here is a short introduction to the topic.

  • Selenium WebDriver

    This is a tool that I have been meaning to check out. I had heard about it before in the context of writing tests for the web. Now that I do a little more with javascript and stuff in my spare time, maybe it is time to have a closer look and work some web TDD magic…

  • Layers of automation – small, medium and large (unit, integration and system tests)

    Off the back of this, I also finally understood the importance of different layers of automation. To be fair, this is not really a benefit of this book, but rather of a book I’ve been reading at the same time. Having said that, the layering into small, medium and large and the guideline ratio of 70-20-10 now makes a whole lot of sense!