Craig Marvelley

Software and such

Never Out of Beta Returns!

I love podcasts. Since I moved house and gained a commute I’ve whiled away the time to and from work listening to The Talk Show, The Record, The Guardian’s Football Weekly, and a couple of other great shows. Enough that I’m barely able to listen to them all. They made me pine for Never Out Of Beta though, a podcast Gav and Carey started a couple of years ago that I and a few friends would guest on, which was great fun.

But my longing is over – Gav and I have rebooted the show, and the first episode is available now via Soundcloud!

A couple of notes:

  • Production “issues” (well, I forgot a bunch of audio equipment) meant that we had to record this one via my MacBook’s built-in microphone. It turned out OK, but we’ve got higher standards which hopefully we’ll get closer to next time
  • It took longer than I’d have liked to get the episode out because I’m learning the ropes when it comes to editing. The WWDC stuff is a bit old hat now, but get past that and I think there’s some interesting discussion around the DevOps mindset and provisioning with Ansible
  • An iTunes feed is coming as soon as I can sort one out – I listen to podcasts through the Apple Podcast app so I’m eager to make it available there
  • Inspired by The Talk Show’s recent switch we’re hosting the show on SoundCloud, which has been perfect so far

As with pretty much everything else Gav and I do, it might take a few iterations before this is something we’re really happy with, and feedback to aid us in getting there will be very much appreciated.

If you like tech, you might like this. Check it out!

Handling Interactive Ansible Tasks

I recently re-ran some Ansible provisioning scripts after upgrading the base box to an Ubuntu 14.04 image and found they stalled midway through. The cause? One task involved installing the PECL mongo module, and the installation process now prompts the user to decide whether or not to build with Cyrus SASL support. I couldn’t see a way to force a decision via the PECL installer, and Ansible can’t respond to the prompt, so the provisioning process hung while awaiting an answer.

I found two ways to solve this.

I’ve been expecting you

I did a bit of Googling and, after coming across this post on the Ansible forum, took a look at expect. Expect lets you script interactions with a spawned command, using regular expressions to match prompt text and sending appropriate responses. It’s a sound approach; I wrote a script that looked like this:

#!/usr/bin/expect

# Run the installer and answer its Cyrus SASL prompt with the default.
spawn pecl install mongo

expect "Build with Cyrus SASL"
send "\r"

expect eof

And executed it on the box using Ansible’s script module:

- name: Install PECL mongo extension
  script: install_pecl_mongo.expect

Mission achieved. I felt very pleased with myself for about 30 seconds, which is how long it took for Paul to wander over and tell me there was a simpler way to do it. Expect is a really good solution for scripting varied or complex responses, but I wasn’t faced with that problem here…

Yes man

In this specific case, all I wanted to do was answer a prompt which needed a yes or no answer, and I wasn’t concerned which I went with. There’s a Linux command called yes which “outputs an affirmative response, or a user-defined string of text continuously until killed”, which is just what I needed.

So now my task looks like this:

- name: Install PECL mongo extension
  shell: yes '' | pecl install mongo

By default yes continuously pipes the character ‘y’ followed by a newline into the install command; passing it an empty string, as above, makes it emit bare newlines instead, so each prompt falls back to its default answer. For my simple use case it works perfectly, and is more straightforward than writing expect scripts.
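A quick way to see what yes actually emits, using head to stop it after a few lines:

```shell
# By default, yes repeats "y" forever.
yes | head -n 3      # prints: y y y (one per line)

# With an argument it repeats that string instead; an empty string
# yields blank lines, which accept each prompt's default answer.
yes '' | head -n 3   # prints three empty lines
```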

Local Provisioning With Ansible

I recently mentioned I’m going all-in on provisioning and running all my development environments in VMs provisioned with Ansible. My Ansible use doesn’t end there though – I’m also using it to provision my MacBook.

I’d read of a few other people doing this – Benjamin Eberlei has a particularly good post on the topic, and was pretty much the inspiration for me doing the same. Setting up a new machine can be a bit of a bore, and while in theory it only happens when I upgrade hardware, I have been caught out in the past by kernel corruption forcing a reinstall, and I’d like to minimise the hassle should it ever happen again. I have backups, but one of the few upsides of starting from scratch with a machine is that it’s an opportunity for a spring clean; a script that effectively lists all the essential components of my computer, and configures them to my taste, makes that so much easier.

Anyway. A couple of people mentioned they’d be interested in the playbooks I wrote to provision my laptop, so I’ve taken out anything personal to me and mirrored them on GitHub.

Currently the provisioning involves four things:

  • Install Homebrew and various brews
  • Install Homebrew Casks and various casks
  • Enable zsh as default shell and install oh-my-zsh
  • Install dotfiles to configure git, vim, etc. (my dotfiles repo is here)
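To give a flavour of what the Homebrew portion of such a playbook can look like, here’s a minimal sketch using Ansible’s homebrew and homebrew_cask modules – the package names are illustrative examples, not my actual lists:

```yaml
# Illustrative playbook fragment; package names are examples only.
- name: Install command-line brews
  homebrew: name={{ item }} state=present
  with_items:
    - git
    - vim

- name: Install GUI applications as casks
  homebrew_cask: name={{ item }} state=present
  with_items:
    - alfred
    - iterm2
```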

It’s all straightforward, but a couple of implementation details:

  • I’ve ended up using Alfred as my OS X search tool because I store my casks under /usr/local with the other brew stuff and symlink to /Applications, and Spotlight doesn’t grok symlinks. Instead I configure Alfred to link in the cask files
  • I read in a few places on the web that the method I use to determine my current shell process – echo $SHELL – isn’t foolproof. Works for me, but YMMV
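For what it’s worth, a couple of alternative checks (a sketch; the dscl command is OS X-specific):

```shell
# $SHELL reports the login shell from the user database, which is not
# necessarily the shell currently executing.
echo "$SHELL"

# The process table shows what is actually running this script.
ps -p $$ -o comm=

# On OS X, the authoritative login shell lives in Directory Services:
# dscl . -read "/Users/$USER" UserShell
```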

There’s more I could do in terms of what I manage, especially when it comes to dotfile configuration (incidentally one of the best things about automating provisioning in this way is you end up looking at other people’s approaches, and finding out about libraries and tools that you never even knew existed – Thoughtbot’s rcm is one such example here). Also I’m still finding my way with Ansible, so my playbooks can undoubtedly be improved, especially when it comes to idempotency. Nevertheless I’m really happy to have a record of my machine’s starting state, and will enjoy adding to it as time passes.

Vagrant 1.5 Syncing Situation


When starting work at BipSync I resolved to put the bits and bobs I’d learned about provisioning with Vagrant and Ansible into practice, and no longer host development environments locally on my MacBook. The reasons were twofold – firstly I wanted my development environment to mirror that of live as closely as possible, and secondly when previously hosting multiple, complex projects on one machine I experienced increased battery drain and frequent slowdowns caused by active background services that I often didn’t even need for my current task. It was a pain to keep track of everything, especially when multiple versions were required. So the attraction of having dev environments I could bring up and down, and if necessary occasionally trash, was a strong one.

I’d played with Vagrant / VirtualBox hosted projects on smallish codebases before, where the default VirtualBox synced folder support was sufficient; however when attempting to host and develop BipSync’s sizeable applications it was quickly clear that this combination wouldn’t be fast enough to be fit for purpose.

Since VirtualBox has been acknowledged as the slowest of Vagrant’s provider options, and we already had Parallels licences for the development of our Windows applications, next I tried out the third party Parallels plugin for Vagrant. Basically this was a non-starter – I immediately ran into bugs that were present in the project’s issue tracker, and the impression I got was that the plugin is still a work-in-progress. The plugin is overseen by Parallels so I’m optimistic that it’ll one day be a first-class provider, but as things stood it was unusable.

So I went back to VirtualBox (VMWare was another option, but before forking out money on the software and accompanying plugin I wanted to exhaust all other options) and tried the other synced folder types. First I tried the new rsync support, which I’d read good things about. In terms of speed this was very quick, because there isn’t really a share to speak of; shared items are copied to the guest rather than mounted. It requires a command to be run constantly, to watch synced files for changes and copy them to the guest, and this proved to be a problem; as issues raised by others attest, the approach wasn’t scaling to large (20k+ files) codebases, which are common in my experience. I tried excluding some folders to see if that would help, but it didn’t.

So after all that I ended up using the NFS sync option, which aside from requiring a sudo password when booting the VM has proved trouble-free so far. Performance is good, both in terms of the applications themselves, and when browsing and modifying files in the share; I’ve seen PHPStorm occasionally lock up, but then it always has ;) It’s certainly good enough to work with now, and I’ll be keeping an eye on the rsync and Parallels options as they should improve in time.
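For reference, switching a synced folder to NFS is a small change in the Vagrantfile – a sketch, with a placeholder IP, since NFS requires a private network to be defined on the guest:

```ruby
Vagrant.configure("2") do |config|
  # NFS needs a host-only/private network interface on the guest.
  config.vm.network "private_network", ip: "192.168.50.4"

  # Mount the project over NFS instead of the default VirtualBox share.
  config.vm.synced_folder ".", "/vagrant", type: "nfs"
end
```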

(“Moonlight Synchro” image courtesy of Chris Phutully. See license.)

Goodbye Box UK, Hello BipSync

For the last seven years I’ve considered myself lucky to be employed by Box UK. When I joined in 2007 I’d spent the previous three years writing what I consider ‘despite software’ – software that did its job despite the efforts of the developer behind it. I was guilty of pretty much every bad practice at some stage; a direct result of being the sole developer within both my company and my social circles. I didn’t know better and had no-one to tell me otherwise, but by 2007 I knew I needed to address that by working for a company in which there were lots of people smarter and more experienced than me. To “aim to be the dumbest person in the room”, as it’s often put – and never better than by my colleague and mentor at Box UK, Carey Hiles, speaking about being a lone coder at the first Unified Diff meetup several years later.

My first assignment at Box UK was making some tweaks to a web form on Promethean’s e-commerce site. My last assignment was rearchitecting the technology behind careerswales.com, a multi-application portal serving millions of users, which is destined to help shape the lives of the people of Wales for years to come. In between I’ve created mobile applications, written web services, ported databases, implemented designs, discovered search platforms, pushed to queues, knocked out specifications, made many a calculation and finally – finally – got to grips with git. I’ve taught schoolchildren how to code. I’ve also shared more in-jokes than you could shake a stick at. Seriously.

As a software developer, I can’t think of a better place to learn one’s trade; every couple of months a new project comes along which will either improve existing skills, or demand new ones. Through the work that I’ve done there I’ve had the opportunity to travel the world, meet heroes, and speak at conferences. My colleagues have been everything I’d hoped they’d be – smart, friendly, and more than happy to help. Most importantly, they’re passionate. Box UK have aimed to only hire those who care about the work they do, and that’s clearly reflected in what we’ve achieved.

TL;DR: I’ve had an amazing job these last seven years, and have loved every single minute of it.

After seven years I’m ready for a new challenge, and a wonderful opportunity at a startup called BipSync means it’s time to move on. I’ll be working with some former colleagues on some exciting software in a domain that’s fresh and interesting. I can’t wait to get stuck in, and hopefully share my experiences. I’d like to thank all at Box UK, especially Benno and those with whom I worked closely over the years, for all the opportunities I was given and the experiences I’ve had. You’ve undoubtedly made me a better person, and I wish you all the best for the future.

Restore Missing Audio in iMovie for Mac


I recently got a new MacBook (running Mavericks) and wanted to transfer the iMovie projects from my older machine (running Mountain Lion). The new machine featured the most recent version of iMovie (iMovie for Mac, version 10.0.2) while my old one was on 9.0.9.

I loaded the projects into iMovie for Mac by copying over the iMovie Events, iMovie Original Movies and iMovie Projects folders via AirDrop. Upon starting iMovie for Mac it prompted me to upgrade my projects to the new format. After some time a dialog appeared notifying me that some events had not been found, and that the upgrade had not been successful.

It turned out that the movie events were fine, but the background audio I’d added (via iTunes) was missing because I hadn’t copied over my iTunes folder (and hadn’t planned to, since I now use iTunes in the Cloud). Attempting to open a ported project gave me a dialog along the lines of ‘Upgrade now, with missing events, or upgrade later’. The choice was inconsequential: even when I chose ‘Upgrade later’, put the tracks back in the places they were expected, and then opened the movie, it didn’t seem to notice they were now there. The waveforms remained blank, and the audio didn’t play. Of course, in most cases I didn’t even know which files were missing, because most of these movies were put together months ago.

I eventually fixed it by painstakingly downloading and importing each audio track into the movie alongside the original clip, which triggered a reload of the original. Then the second clip could be removed, leaving the original, which now played. It would be nice if iMovie for Mac had a ‘reload all events in this movie’ feature (perhaps it does, and I couldn’t find it – I’ve never found iMovie to have the most intuitive of interfaces) – thankfully I only had 20 or so tracks to fix, but for some more hardcore users it could be a lot worse.

(Waveform image copyright Bernard Goldbach)

Understanding DNS

I’ve never really taken the time to read up on how DNS works. Ten years of web development mean I’ve gradually assumed a pretty good idea through trial and error, but I’ve never really felt 100% confident that I know what I’m doing when registering and administering domain names, particularly when it comes to more obscure record types like SPF. Things work, but I don’t really know why.

I was shopping for a domain name the other day and while on the DNSimple website I was invited to receive a series of emails outlining the DNS basics. I batted away my initial reluctance to sign myself up for spam, and I’m glad I did – the lessons are nicely written, short and to the point, and thankfully aren’t just an exercise in advertising for the registrar. I’ve learned a couple of things and reinforced my knowledge of others. I can’t find a signup link now, but I do recommend them to anyone who wants to learn more about DNS.

Box UK Has an Engineering Blog

I’ve always enjoyed reading microblogs like Daring Fireball, which briefly editorialise links to content elsewhere on the web – it’s like “Here’s something interesting, here’s some perspective on it, now go read it.” We’ve had a blog on the Box UK website for some time, but its format has been very much long-form. That means we produce lots of in-depth, informative content; the downside is that because posts can take a while to turn around, the blog doesn’t suit snappy, quick posts in the microblog style.

Internally, we’re always sharing links to posts or videos in HipChat – stuff that’s relevant to our day-to-day work, stuff that could be useful to us in future, that sort of thing. For a while it’s been on our roadmap to introduce a microblog on the Box UK website, and now we have – last week we launched the Box UK developer blog. We’ve already had posts on varied topics like SSH, Objective-C, Vim and Amazon S3. Keep an eye on it!

Including Filtered Thumbnail Data With the WordPress JSON API Plugin

The JSON API plugin is a great tool for exposing your WordPress data. I’ve used it to provide content to a few mobile applications; it’s well-featured and, most importantly, quite flexible. If your content providers are happiest using a CMS like WordPress, your project can benefit by allowing them to use the authoring tools they’re comfortable with while allowing you to easily access the content they’re creating – either to use directly within your application, or as I tend to do, by exporting the data into another application which then proxies it to mobile applications. I find the latter approach gives you greater control over features like caching, while making it easier to relate the data to domain models that aren’t handled by WordPress.

But I digress! This post was conceived because when requesting all posts of a certain type through the plugin, response times were hideously slow. The request I was making looked like this:

http://domain.com/?json=get_posts&post_type=artist&count=1000&order=ASC

Reading the documentation for the plugin, I found that it was possible to filter the response to only contain properties that I wanted, which I figured should reduce the amount of work WordPress was having to do:

http://domain.com/?json=get_posts&post_type=artist&count=2&order=ASC&include=id,title,content,custom_fields,thumbnail_images

This proved to be true, but I hit a snag when trying to include the thumbnail_images property – adding it to the CSV of whitelisted fields had no effect. Through trial and error it transpired that including the thumbnail field also has the effect of including the thumbnail, thumbnail_size, and thumbnail_images fields. The final URL ended up looking like this:

http://domain.com/?json=get_posts&post_type=artist&count=2&order=ASC&include=id,title,content,custom_fields,thumbnail
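Once the include parameter is right, pulling the image URLs out of the response is straightforward. A sketch in Python, assuming the plugin’s response shape (a status field plus a posts array, with thumbnail_images keyed by size name); the sample data below is invented for illustration:

```python
import json

# A trimmed, made-up example of the shape the JSON API plugin returns.
response = json.loads("""
{
  "status": "ok",
  "count": 1,
  "posts": [
    {
      "id": 1,
      "title": "Example Artist",
      "thumbnail_images": {
        "full": {"url": "http://domain.com/img/full.jpg"},
        "thumbnail": {"url": "http://domain.com/img/thumb.jpg"}
      }
    }
  ]
}
""")

# Collect each post's thumbnail-size image URL, skipping posts without one.
urls = [
    post["thumbnail_images"]["thumbnail"]["url"]
    for post in response["posts"]
    if "thumbnail_images" in post
]
print(urls)  # ['http://domain.com/img/thumb.jpg']
```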

Problem solved!

On Incremental Improvement

Twice today I’ve seen references to the benefits of iterative improvement, and I found that the topic resonated with me. At the moment I’m halfway though a pretty lengthy software project in which it’s sometimes hard to see the wood for the trees. The backlog is substantial, resulting in a hefty set of tasks for our team. In these situations, where the to-do list doesn’t seem to change day to day, it’s easy to feel crippled by inertia. However, these two references illustrate that we can succeed provided we ensure we’re making healthy, gradual progress.

One might expect that the references I’m alluding to would originate from a tome of software development like “The Pragmatic Programmer”, or from one of the many celebrated development gurus of Twitter, but in fact they are not directly concerned with software at all. The first comes from a talk by Tim Harford at last year’s Wired UK conference, discussing the pros and cons of iterative improvement, linked to by my colleague (and all-round good chap) Owen Phelps:

If you put together enough marginal improvements, in enough areas, you get something that’s truly outstanding.

Give it a watch – it’s entertaining and informative.

The second reference came from an altogether different source – an interview with renowned mix engineer Alan Moulder in this month’s issue of Sound on Sound magazine, in which he discusses his approach when applying VST plugins to his music projects:

All the things I use do something to make the sound a tiny bit better, and if you add everything together, the end result will be a lot better… It’s simple: better is better, whether it’s a tiny bit better or a lot better.

Through never-ending sprints, features and tasks, iterative improvement is something all development teams should strive to achieve.