Some visual notes on the book in a vague “impact mapping” style. A more formal review may follow. :o)
I have had a couple of unproductive weeks in terms of blog content, but it’s been very busy with just about everything else.
So at the very least I thought I’d put together a little project 365 update. (Yes – it’s still going!)
The highlights of the last three weeks are:
- I finished off the Robot Framework integration with TestStack.White, using the remote library
- I started a little mobile (AppBuilder/Icenium) project with a fellow p365-er, Owen. (You can find his blog here.)
- I worked through chapter 2 of “Modern C++ Programming with TDD”
- My machine was de-java-flowered
The lowlight was that I missed a day of coding for the very first time: I was at a film festival and just didn’t make it home before midnight to spin up the machine. Oh well – it was bound to happen sooner or later.
Let’s forget about the lowlight and take a quick look at the highlights:
Robot Framework & TestStack.White
I already wrote a little bit about this in my last project 365 update, so all I’m going to say is: I got it working in the end – yay!
Telerik’s AppBuilder / Icenium
“Modern C++ Programming with TDD”
Unsurprisingly (given the title), this book is all about TDD in C++. I’ll write a full review in time. For now (whatever else may come later in the book), I have really enjoyed working through chapter two. Anyone looking to start applying TDD in the C++ world – or perhaps struggling with it – should read this book.
First steps with Java
Finally, ahead of a company trip last week (to a place where they mainly speak Java), I installed the Java SDK as well as the runtime and (uh oh) Eclipse. I also started working my way through the Java First book and some “Java for C++/.NET guys” tutorials. Having said that, I am only just starting with these, so it’s all very basic “Hello World” stuff still.
Now I am off to add a couple more unit tests to the Soundex solution tonight. What better way to spend a Sunday evening!
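The chapter builds up the Soundex encoding test-first in C++. Just to capture the algorithm itself, here is a minimal Python sketch of a simplified American Soundex (it ignores the special h/w rules of the full standard, so a few names encode differently):

```python
def soundex(word):
    """Simplified American Soundex: keep the first letter, encode the
    rest as digits, collapse adjacent duplicate digits, pad/truncate
    to four characters. The h/w special cases are deliberately omitted."""
    mapping = {}
    for digit, letters in [("1", "bfpv"), ("2", "cgjkqsxz"),
                           ("3", "dt"), ("4", "l"), ("5", "mn"), ("6", "r")]:
        for ch in letters:
            mapping[ch] = digit

    word = word.lower()
    digits = []
    prev = mapping.get(word[0], "")  # the first letter's own code also suppresses duplicates
    for ch in word[1:]:
        code = mapping.get(ch, "")   # vowels map to "" and reset the duplicate check
        if code and code != prev:
            digits.append(code)
        prev = code
    return (word[0].upper() + "".join(digits) + "000")[:4]

print(soundex("Robert"))  # R163
```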
Admittedly, I only stayed for about half of this meetup and left before the discussion started. (The pizza delivery was a little late, and we didn’t begin until it had arrived.) However, I did manage to catch the lightning talks and, as always, these were excellent.
The topics of the lightning talks this time were:
Code retreat day at work
I think I have described the mechanics of a code retreat day before. This was a little bit of feedback from some of the guys who tried a code retreat at work. They reported that the team they tried it with was reluctant to delete code and less able to experiment with different solutions: it sounded as if the team was not sure whether they were being graded on their performance. I can imagine that it’s very hard, in a company that does not do this kind of thing very often, to see something like a code retreat as just a fun exercise! It’s also hard work to establish an environment in which everyone is willing to experiment and be creative!
Advice for the aspiring devop
This talk very strongly echoed an excellent IASA talk I went to the other day but did not get round to writing about. (You can find the slides here.) It stressed the importance of keeping production in mind: in particular, how doing so leads you to favour cleaner, simpler and more robust solutions, and why logging and diagnostics matter. One of the books mentioned – “Release It!” – also came up in the IASA talk, so I really need to check it out.
A quick introduction to AutoFixture
This was a really short introduction to a unit testing library called AutoFixture. The main concept behind AutoFixture is to reduce the amount of setup code needed by creating anonymous variables – auto-creating all of the stuff you need to get the code to compile but are not particularly interested in for the purpose of the test. Really interesting, and I had not heard of it before. Maybe something to check out as part of the project 365!
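AutoFixture itself is a .NET library, so the following is only a toy Python sketch of the anonymous-variable idea (the `Customer` class, the `build` helper and the generators are all made up for illustration): a builder fills in every constructor argument the test does not actually care about.

```python
import inspect
import random
import string

def anonymous(annotation):
    """Generate a throwaway value of the requested type."""
    if annotation is int:
        return random.randint(0, 1000)
    if annotation is str:
        return "".join(random.choices(string.ascii_letters, k=8))
    raise TypeError(f"no generator registered for {annotation}")

def build(cls):
    """Construct cls, auto-filling every annotated constructor argument."""
    params = inspect.signature(cls).parameters
    return cls(**{name: anonymous(p.annotation) for name, p in params.items()})

class Customer:
    def __init__(self, name: str, age: int):
        self.name = name
        self.age = age

# The test only cares that a valid Customer exists, not what is in it:
customer = build(Customer)
```

The setup shrinks to one line per object, which is exactly the appeal the talk described.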
Rules for effective teams
This talk was about how teams communicate effectively. You can find the full PDF here. Communication within a team is definitely a super-interesting subject, and I really want to dig a little deeper into this. Some of the teaching and coaching stuff I got exposed to through my mom (“habits of highly effective people” type stuff) is really, really interesting and really blows my hair back.
Software gardener vs software engineer as a metaphor
This talk introduced the metaphor of software gardening. You can find the full post here. Personally, I am not too sure about this particular metaphor. I think that craftsmanship is actually much closer (and in my mind craftsmanship sits somewhere between gardener and engineer). I also think that metaphors are a way of getting a particular point across, and people then get too hung up on the specifics of them. Having said all of that, of course I love the imagery of tending to gardens.
Big numbers – small numbers… where approximations matter
This was a talk about how measurement accuracy (as well as floating point arithmetic) can matter a lot in your program, and how quantities (e.g. the difference of two unexpectedly close numbers) may turn out to be negative even though they should not be. It was a good reminder to code (and test) for the unexpected code path.
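As a tiny illustration of the kind of surprise the talk warned about (a generic binary floating point example, not one from the talk itself): a difference that is mathematically zero can come out negative.

```python
import math

a = 0.1 + 0.2        # stored as 0.30000000000000004, not 0.3
slack = 0.3 - a      # mathematically zero...
print(slack)         # ...actually about -5.5e-17: negative!

# Feed that "zero" into sqrt and the unexpected code path bites:
try:
    math.sqrt(slack)
except ValueError:
    print("sqrt rejected a should-be-zero negative number")

# The usual defence: compare with a tolerance rather than == or <.
print(math.isclose(a, 0.3))  # True
```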
(London .NET User Group – 24/02/14)
On Monday, Gojko spoke a little about his experience with continuous delivery and how continuous delivery can have a transformational impact on the way that you do business.
If I had to boil down this talk to two key ideas, they would be to
- have feedback loops in your software wherever you can
- use shipping/deployment as (what Econometricians would call) a “natural experiment”
These are of course fairly well-worn ideas (the Google spellchecker is a classic big data feedback loop example). Natural experiments were apparently used in the 19th century (Wikipedia to the rescue).
Still, there were a lot of interesting specifics here and Gojko himself summarized the main points he made as:
Don’t surprise users
Users still need to be able to perform their tasks easily: they should be allowed to gradually opt in to new functionality (the example here was the classic Google path of announcement, opt-in, standard-with-opt-out, then dropping the opt-out).
However, beyond this, users should also be able to expect UI consistency: they should not be shocked into new versions. On UI design in general, he mentioned a very interesting-sounding book called “Usable Usability”, which I hope to have a look at in the fullness of time. (There is just too much reading to do!)
Don’t interrupt sessions
Here, he used the examples of Adobe Reader and Windows updates, which both force you to update at a time that is convenient for them rather than for the end user. In particular, he explained how continuous delivery for websites has a strong effect on multi-versioning: if you do continuous delivery, you will need to support multiple versions or you will break existing users’ sessions.
Both of the first two points deal with a funny (but, I think, common) scenario where there is the technical capability for continuous delivery, but no business desire for it. What you end up with is, as he put it (using the Scumbag Steve meme), “continuous delivery to staging”. This is probably worse than no continuous delivery at all!
Start at the top and build up the backend
He called this the “skeleton on crutches”, after the slightly more famous “walking skeleton”. Again, the examples he used were website-based: he implemented the UI first and built an extremely thin email-based backend for a feedback button using Jotform. Personally, I think the difference from a walking skeleton is marginal, but I guess the emphasis is on delivering user value quickly.
Learn from shipping
One of the most interesting points he made here was about the importance of measuring achieved user value (and how this feeds into future development):
If nobody presses the feedback button, then you get rid of it rather than investing in an expensive backend system.
For more enterprise-y systems, he recommended having KPIs (e.g. “minimize the time taken by a user to order something”) for what you want to achieve in general, scoring each user story against those KPIs (“makes task x 20 seconds quicker to achieve”), and then measuring (in production!) against them. This kind of feedback loop lets you see whether what you did was actually worthwhile and where you should expend effort in the future.
All in all a very worthwhile talk!
(The Thalesians – 26/02/14)
On Wednesday, I went to my first Thalesians meetup in quite a while. In terms of outside-work learning, I have recently focussed a lot on the technology side and let finance be finance. I keenly felt this when I went to this meetup. For one thing, they had moved venues. (At least the Imperial math-finance seminar is still in room 139 of the Huxley building. I don’t know what I would do if they ever moved!) More importantly, though, while I felt familiar with the basics of the topics discussed this evening (e.g. yield curve reforecasting and discounting, the spread between Libor and more collateralised curves and how this affects discounting) as well as the basic interest rate models (Hull-White, Libor market model etc.), I clearly did not recognise some of the more recent models mentioned in the talk. Time to swot up on these, methinks!
Interestingly, though, Chia talked about where it is important to know these new models (i.e. in what circumstances they really need to be used) and where they may not make a material difference: the talk was a very interesting meta-discussion about model accuracy and uncertainty.
His underlying argument is that model simplicity helps us by enabling our intuition. Using a more complex model, we may not be able to verify more than “all the inputs look about right” and “using those inputs, yes we do get a number back”.
The physics metaphor here is the fairly obvious one between Newtonian and relativistic physics (not that I know anything about it :o) ). There are circumstances where the simpler one works rather well (“normal life”) and circumstances where you really need to use the other one (“travelling close to the speed of light”).
Chia explored this topic in detail, giving a variety of examples of when a more integrated approach is necessary (e.g. “long maturity”, “transformational trades”, “wide choice of collateral – including currency”, etc.) and scenarios where it is not (“short maturities”, “exchange trading”, “CSA supported trades”).
In short – very enjoyable!
It’s been a while: I took some time out of this whole blogging malarkey. Things were just insanely busy: I got quite close to posting stuff a couple of times, but it just didn’t quite happen. Today, I’m hoping to catch up on a number of posts.
First up is a quick update of how the project 365 is going – because, yes, it is still going!
Having said that, I broke my github streak at the start of the weekend. I stupidly messed up my committing somehow. No matter – I still coded and intended to publish. (At least that’s what I’m telling myself).
What kind of stuff have I been looking at?
In week 5 (when I last posted), I was running through “C++ Concurrency in Action”. I then spent another week doing “more of the same”. I have now caught up with where I am in the book, and it is kind of the next book to finish. Hopefully, I’ll have some time in a couple of weeks to dig a little deeper. (There are just so many different things to look at!)
Over the last two weeks, I have been playing with acceptance testing. In particular, I have been trying Robot Framework (I may give Cucumber a go in the future, but technologically Robot Framework is a little closer to home).
I played around with the different test case formats (HTML, TSV, plain text) and experienced the benefits of a keyword-driven approach to testing (wider appeal: non-developers can write tests too).
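For example, the plain-text format reads almost like a manual test script; non-developers can follow it straight away. (The keywords and the `CalculatorLibrary` below are hypothetical and would come from whatever keyword library you plug in.)

```robotframework
*** Settings ***
Library    CalculatorLibrary    # hypothetical keyword library

*** Test Cases ***
Addition Works
    Start Calculator
    Enter Number        7
    Press Operator      +
    Enter Number        5
    Result Should Be    12
```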
The end goal of this little sub-project is to write a library for acceptance testing based on a .NET UI automation framework:
Yes, I know there are a number of downsides to UI-based acceptance testing, but I do think there is a place for them. (A good ThoughtWorks article explaining the downsides can be found here. In my opinion, Robot Framework lets you write a layer of tests for the later phases even if the lower-level keywords are UI-based and therefore more brittle than they might otherwise be.)
Yes, I also know that Robot framework comes with an existing UI automation framework. However, this has very much been a “get to know the technology” type project.
So how far have I actually come towards this goal?
After exploring some Robot Framework basics, I played around a little with the .NET IronPython integration. Now all I needed was a .NET UI framework: I didn’t want to go down to the low level of the built-in .NET UI Automation framework, and (in my private life) I don’t quite have the deep pockets to shell out for Coded UI (VS Premium/Ultimate), so I settled on (and played around with) TestStack.White. This has been a pretty impressive product and very easy to get to work.
One of the nice things about Robot Framework is that it offers two types of integration with .NET. One is through IronPython (which, very handily and completely by coincidence, made use of my newly acquired Python skills). The other is through the Robot Framework remote library, which uses an XML-RPC based web service: generic host servers are available in a variety of languages, including .NET, and these then host keyword libraries in the respective language. (Niftily, this also allows test cases to be distributed among different machines.)
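For a flavour of what sits behind the remote library: a remote server just exposes `get_keyword_names` and `run_keyword` over XML-RPC. This is a simplified sketch of the protocol in Python – the real result dictionary has a few more optional fields, the keywords are invented, and 8270 is the conventional remote-server port:

```python
from xmlrpc.server import SimpleXMLRPCServer


class DemoKeywords:
    """A minimal keyword library speaking Robot Framework's remote protocol."""

    def get_keyword_names(self):
        # Robot asks for this list once, when it imports the Remote library.
        return ["add_numbers", "should_be_positive"]

    def run_keyword(self, name, args):
        # Robot routes every keyword invocation through this single entry point.
        try:
            return {"status": "PASS", "return": getattr(self, name)(*args),
                    "output": "", "error": ""}
        except Exception as exc:
            return {"status": "FAIL", "return": "", "output": "", "error": str(exc)}

    # --- the actual keywords ---
    def add_numbers(self, a, b):
        return int(a) + int(b)

    def should_be_positive(self, value):
        if int(value) <= 0:
            raise AssertionError(f"{value} is not positive")

# To actually serve it:
#   server = SimpleXMLRPCServer(("127.0.0.1", 8270))
#   server.register_instance(DemoKeywords())
#   server.serve_forever()
```

A .NET host server like nrobotremote does the same thing, just hosting C# keyword classes instead of a Python one.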
So I played around with the .NET host server (nrobotremote) as well as the Ranorex library written by the same guy. (Both are pretty impressive – although I found the documentation a little hard going and had to debug through a load of stuff to get this to work.)
Essentially, in the end, I butchered his Ranorex library and turned it into a basic TestStack.White library. This is now ….almost…. working. (The words “tantalizingly” and “close” spring to mind).
….and this is (roughly) where I am as of day 56!
There is not much point in going through this week in detail. All of it was focussed on catching up with where I am in C++ Concurrency in Action. I have almost gotten there now: one more chapter to go and I am back where I was… well… two months ago.
I have made all of this sound quite negative, but it’s actually been quite good fun to write these out again, and it’s been a good learning experience. I just wish it was actually new stuff I was covering and not revision. Oh well. One can dream.
Hopefully, I can get past this quickly and move on to some exciting stuff. At the moment I am thinking about how to interact with a C# and C++ code base from Robot Framework. I think this might be the next little project to tackle.