Craig Marvelley

Software and such

Local Provisioning With Ansible

I recently mentioned I’m going all-in on provisioning and running all my development environments in VMs provisioned with Ansible. My Ansible use doesn’t end there though – I’m also using it to provision my MacBook.

I’d read of a few other people doing this – Benjamin Eberlei has a particularly good post on the topic, which was pretty much the inspiration for me doing the same. Setting up a new machine can be a bit of a bore, and while in theory it only happens whenever I upgrade hardware, I have been caught out in the past by kernel corruption forcing a reinstall, and I’d like to minimise the hassle should it ever happen again. I have backups, but one of the few upsides of starting a machine from scratch is that it’s an opportunity for a spring clean; a script that effectively lists all the essential components of my computer, and configures them to my taste, makes that so much easier.

Anyway. A couple of people mentioned they’d be interested in the playbooks I wrote to provision my laptop, so I’ve taken out anything personal to me and mirrored them on GitHub.

Currently the provisioning involves four things:

  • Install Homebrew and various brews
  • Install Homebrew Casks and various casks
  • Enable zsh as the default shell and install oh-my-zsh
  • Install dotfiles to configure git, vim, etc. (my dotfiles repo is here)
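As a sketch, the dotfiles step boils down to symlinking each file in the checkout into $HOME as a dot-prefixed file – something like the function below (the directory layout and file names are illustrative, not taken from my repo):

```shell
# link_dotfiles: symlink every regular file in a dotfiles checkout into $HOME
# as a dot-prefixed file, overwriting any existing link.
# The flat one-file-per-config layout is an assumption for this sketch.
link_dotfiles() {
  src="$1"
  for f in "$src"/*; do
    [ -f "$f" ] || continue
    ln -sf "$f" "$HOME/.$(basename "$f")"
  done
}
```

Because the files stay in the repository and only links live in $HOME, pulling the repo updates every config in one go.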

It’s all straightforward, but a couple of implementation details:

  • Because I store my casks under /usr/local with the rest of my Homebrew files and symlink them into /Applications, and Spotlight doesn’t grok symlinks, I’ve ended up using Alfred as my OS X search tool, configured to index the cask applications
  • I’ve read in a few places on the web that the method I use to determine the current shell – echo $SHELL – isn’t foolproof. It works for me, but YMMV
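For what it’s worth, $SHELL only reports the user’s login shell, not necessarily the shell that’s actually running; asking the OS about the current process is more reliable. A couple of alternatives, offered as a sketch:

```shell
# $SHELL is just an environment variable set at login – it won't change
# if you start a different shell interactively.
echo "$SHELL"

# More reliable: ask ps what the current process actually is.
ps -p $$ -o comm=

# On OS X, Directory Services records the shell configured for the user:
# dscl . -read "/Users/$USER" UserShell
```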

There’s more I could do in terms of what I manage, especially when it comes to dotfile configuration (incidentally one of the best things about automating provisioning in this way is you end up looking at other people’s approaches, and finding out about libraries and tools that you never even knew existed – Thoughtbot’s rcm is one such example here). Also I’m still finding my way with Ansible, so my playbooks can undoubtedly be improved, especially when it comes to idempotency. Nevertheless I’m really happy to have a record of my machine’s starting state, and will enjoy adding to it as time passes.

Vagrant 1.5 Syncing Situation


When starting work at BipSync I resolved to put the bits and bobs I’d learned about provisioning with Vagrant and Ansible into practice, and no longer host development environments locally on my MacBook. The reasons were twofold: firstly, I wanted my development environment to mirror the live environment as closely as possible; secondly, when previously hosting multiple complex projects on one machine I’d experienced increased battery drain and frequent slowdowns caused by active background services that I often didn’t even need for my current task. It was a pain to keep track of everything, especially when multiple versions were required. So the attraction of having dev environments I could bring up and down, and if necessary occasionally trash, was a strong one.

I’d played with Vagrant / VirtualBox hosted projects on smallish codebases before, where the default VirtualBox synced folder support was sufficient; however when attempting to host and develop BipSync’s sizeable applications it was quickly clear that this combination wouldn’t be fast enough to be fit for purpose.

Since VirtualBox has been acknowledged as the slowest of Vagrant’s provider options, and we already had Parallels licences for the development of our Windows applications, next I tried out the third-party Parallels plugin for Vagrant. Basically this was a non-starter – I immediately ran into bugs that were already present in the project’s issue tracker, and the impression I got was that the plugin is still a work-in-progress. The plugin is overseen by Parallels so I’m optimistic that it’ll one day be a first-class provider, but as things stood it was unusable.

So I went back to VirtualBox (VMWare was another option, but before forking out money on the software and accompanying plugin I wanted to exhaust all other options) and tried the other synced folder types. First I tried the new rsync support, which I’d read good things about. In terms of speed it was very quick, because there isn’t really a share to speak of; shared items are copied to the guest rather than mounted. It requires a command (vagrant rsync-auto) to run continuously, watching synced files for changes and copying them to the guest, and this proved to be a problem; as issues raised by others attest, the approach doesn’t scale to large (20k+ file) codebases, which are common in my experience. I tried excluding some folders to see if that would help, but it didn’t.

So after all that I ended up using the NFS sync option, which aside from requiring a password for sudo when booting the VM has proved trouble free, so far. Performance is good, both in terms of the applications themselves, and when browsing and modifying files in the share; I’ve seen PHPStorm occasionally lock up but then it always has ;) It’s certainly good enough to work with now, and I’ll be keeping an eye on the rsync and Parallels options as they should improve in time.

(“Moonlight Synchro” image courtesy of Chris Phutully. See license.)

Goodbye Box UK, Hello BipSync

For the last seven years I’ve considered myself lucky to be employed by Box UK. When I joined in 2007 I’d spent the previous three years writing what I’d call ‘despite software’ – software that did its job despite the efforts of the developer behind it. I was guilty of pretty much every bad practice at some stage; a direct result of being the sole developer within both my company and my social circles. I didn’t know better and had no-one to tell me otherwise, but by 2007 I knew I needed to address that by working for a company full of people smarter and more experienced than me. To “aim to be the dumbest person in the room”, as it’s often put – and never better than by my colleague and mentor at Box UK, Carey Hiles, speaking about being a lone coder at the first Unified Diff meetup several years later.

My first assignment at Box UK was making some tweaks to a web form on Promethean’s e-commerce site. My last assignment was rearchitecting the technology behind careerswales.com, a multi-application portal serving millions of users, which is destined to help shape the lives of the people of Wales for years to come. In between I’ve created mobile applications, written web services, ported databases, implemented designs, discovered search platforms, pushed to queues, knocked out specifications, made many a calculation and finally – finally – got to grips with git. I’ve taught schoolchildren how to code. I’ve also shared more in-jokes than you could shake a stick at. Seriously.

As a software developer, I can’t think of a better place to learn one’s trade; every couple of months a new project comes along which will either improve existing skills, or demand new ones. Through the work that I’ve done there I’ve had the opportunity to travel the world, meet heroes, and speak at conferences. My colleagues have been everything I’d hoped they’d be – smart, friendly, and more than happy to help. Most importantly, they’re passionate. Box UK have aimed to only hire those who care about the work they do, and that’s clearly reflected in what we’ve achieved.

TL;DR: I’ve had an amazing job these last seven years, and have loved every single minute of it.

After seven years I’m ready for a new challenge, and a wonderful opportunity at a startup called BipSync means it’s time to move on. I’ll be working with some former colleagues on some exciting software in a domain that’s fresh and interesting. I can’t wait to get stuck in, and hopefully share my experiences. I’d like to thank all at Box UK, especially Benno and those I worked closely with over the years, for all the opportunities I was given and the experiences I’ve had. You’ve undoubtedly made me a better person, and I wish you all the best for the future.

Restore Missing Audio in iMovie for Mac


I recently got a new MacBook (running Mavericks) and wanted to transfer the iMovie projects from my older machine (running Mountain Lion). The new machine featured the most recent version of iMovie (iMovie for Mac, version 10.0.2) while my old one was on 9.0.9.

I loaded the projects into iMovie for Mac by copying over the iMovie Events, iMovie Original Movies and iMovie Projects folders via AirDrop. Upon starting iMovie for Mac it prompted me to upgrade my projects to the new format. After some time a dialog appeared notifying me that some events had not been found, and that the upgrade had not been successful.

It turned out that the movie events were fine, but the background audio I’d added (via iTunes) was missing because I’d not copied over my iTunes folder (and hadn’t planned to, since I now use iTunes in the Cloud). Attempting to open a ported project gave me a dialog along the lines of ‘Upgrade now, with missing events, or upgrade later’ – the choice was inconsequential: even when I chose ‘Upgrade later’, put the tracks back where they were expected, and then opened the movie, it didn’t seem to pick up on the fact that they were now there. The waveforms remained blank, and the audio didn’t play. Of course, in most cases I didn’t even know which files were missing, because most of these movies were put together months ago.

I eventually fixed it by painstakingly downloading and importing each audio track into the movie alongside the original clip, which triggered a reload of the original. Then the second clip could be removed, leaving the original, which now played. It would be nice if iMovie for Mac had a ‘reload all events in this movie’ feature (perhaps it does, and I couldn’t find it – I’ve never found iMovie to have the most intuitive of interfaces) – thankfully I only had 20 or so tracks to fix, but for some more hardcore users it could be a lot worse.

(Waveform image copyright Bernard Goldbach)

Understanding DNS

I’ve never really taken the time to read up on how DNS works. Ten years of web development mean I’ve gradually acquired a pretty good idea through trial and error, but I’ve never felt 100% confident that I know what I’m doing when registering and administering domain names, particularly when it comes to more obscure record types like SPF. Things work, but I don’t really know why.

I was shopping for a domain name the other day and while on the DNSimple website I was invited to receive a series of emails outlining the DNS basics. I batted away my initial reluctance to sign myself up for spam, and I’m glad I did – the lessons are nicely written, short and to the point, and thankfully aren’t just an exercise in advertising for the registrar. I’ve learned a couple of things and reinforced my knowledge of others. I can’t find a signup link now, but I do recommend them to anyone who wants to learn more about DNS.

Box UK Has an Engineering Blog

I’ve always enjoyed reading microblogs like Daring Fireball which briefly editorialise links to content elsewhere on the web – it’s like “Here’s something interesting, here’s some perspective on it, now go read it.” While we’ve had a blog on the Box UK website for some time, its format has been very much long-form. That means we produce lots of in-depth, informative content, but the downside is that because posts can take a while to turn around, the blog doesn’t suit snappy, quick posts in the microblog style.

Internally, we’re always sharing links to posts or videos in HipChat – stuff that’s relevant to our day-to-day work, stuff that could be useful to us in future, that sort of thing. For a while it’s been on our roadmap to introduce a microblog on the Box UK website, and now we have – last week we launched the Box UK developer blog. We’ve already had posts on varied topics like SSH, Objective-C, Vim and Amazon S3. Keep an eye on it!

Including Filtered Thumbnail Data With the WordPress JSON API Plugin

The JSON API plugin is a great tool for exposing your WordPress data. I’ve used it to provide content to a few mobile applications; it’s well featured and most importantly, quite flexible. If your content providers are happiest using a CMS like WordPress, your project can benefit by allowing them to use the authoring tools they’re comfortable with while allowing you to easily access the content they’re creating, either to use directly within your application, or as I tend to do, by exporting the data into another application which then proxies the data to mobile applications. I find the latter approach gives you greater control over features like caching, while making it easier to relate the data to domain models that aren’t handled by WordPress.

But I digress! This post was conceived because when requesting all posts of a certain type through the plugin, response times were hideously slow. The request I was making looked like this:

http://domain.com/?json=get_posts&post_type=artist&count=1000&order=ASC

Reading the documentation for the plugin, I found that it was possible to filter the response to only contain properties that I wanted, which I figured should reduce the amount of work WordPress was having to do:

http://domain.com/?json=get_posts&post_type=artist&count=2&order=ASC&include=id,title,content,custom_fields,thumbnail_images

This proved to be true, but I hit a snag when trying to include the thumbnail_images property – adding it to the CSV of whitelisted fields had no effect. Through trial and error it transpired that including the thumbnail field also has the effect of including the thumbnail, thumbnail_size, and thumbnail_images fields. The final URL ended up looking like this:

http://domain.com/?json=get_posts&post_type=artist&count=2&order=ASC&include=id,title,content,custom_fields,thumbnail
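Spelled out as a shell sketch (domain.com is a placeholder for the real site, as above):

```shell
# Whitelist only the fields we need; per the trial and error above,
# 'thumbnail' implicitly brings the thumbnail, thumbnail_size and
# thumbnail_images fields along with it.
fields="id,title,content,custom_fields,thumbnail"
url="http://domain.com/?json=get_posts&post_type=artist&count=1000&order=ASC&include=${fields}"
echo "$url"
# curl -s "$url"   # fetch the filtered JSON response
```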

Problem solved!

On Incremental Improvement

Twice today I’ve seen references to the benefits of iterative improvement, and I found that the topic resonated with me. At the moment I’m halfway though a pretty lengthy software project in which it’s sometimes hard to see the wood for the trees. The backlog is substantial, resulting in a hefty set of tasks for our team. In these situations, where the to-do list doesn’t seem to change day to day, it’s easy to feel crippled by inertia. However, these two references illustrate that we can succeed provided we ensure we’re making healthy, gradual progress.

One might expect the references I’m alluding to originate from a tome of software development like “The Pragmatic Programmer”, or from one of the many celebrated development gurus of Twitter, but in fact neither is directly concerned with software at all. The first comes from a talk on the pros and cons of iterative improvement, given by Tim Harford at last year’s Wired UK conference and linked to by my colleague (and all-round good chap) Owen Phelps:

If you put together enough marginal improvements, in enough areas, you get something that’s truly outstanding.

Give it a watch – it’s entertaining and informative.

The second reference came from an altogether different source – an interview with renowned mix engineer Alan Moulder in this month’s issue of Sound on Sound magazine, in which he discusses his approach when applying VST plugins to his music projects:

All the things I use do something to make the sound a tiny bit better, and if you add everything together, the end result will be a lot better… It’s simple: better is better, whether it’s a tiny bit better or a lot better.

Through never-ending sprints, features and tasks, iterative improvement is something all development teams should strive to achieve.

Symfony2: JSON Responses for XHR Errors and Authentication Failures

I’ve been working on a Symfony2 application whose user interface is presented as a single page application that makes heavy use of JavaScript. The app sends XHR requests to the Symfony2 backend API to retrieve and modify data, and it uses Symfony’s authorisation/authentication functionality to protect resources so they are only accessible to logged in users.

In an ideal world every request made by the application will succeed, but in reality things often go wrong; the user may have provided invalid data, their session may have expired meaning they are no longer logged in, or we may have a bug in our API code which causes a server error. With an out-of-the-box installation of Symfony2, each of these scenarios will result in the default error handler rendering an HTML page, which isn’t much use to our client JavaScript application – instead we want the server to issue a JSON response to our AJAX request, complete with some contextual error details, with which we can give the user a meaningful error message.

I did a bit of Googling and came across a few posts which dealt with some of these challenges, and by putting them all together I arrived at a solution which I’ll detail here in case it’s of use to anyone else.

I’ve put together a demo app which hopefully illustrates everything. Have a play with it (installation instructions are in the README), then read on :)

The app consists of a few components:

Client side:

  • A jQuery global AJAX error handler
  • A login form – I have two, a Symfony2/Twig-based HTML login page which the user is redirected to if they try to access the app without a valid session, and a JavaScript login dialog which we will display if the server rejects an AJAX request due to authentication/authorisation failure, allowing the user to reauthenticate without leaving the app

Server side:

  • An authentication failure handler
  • An authentication success handler
  • A kernel exception listener

As far as the rest of the app goes, I’m using the FOSUserBundle with a standard configuration, which supplies the rest of the auth functionality, including the HTML login page.

Let’s start with the server components – three classes and a bit of config:

View the code on <a href="https://gist.github.com/4340230">Gist</a>.

The XHRAuthenticationSuccessHandler and XHRAuthenticationFailureHandler classes’ code is triggered when the user logs in or fails to log in, respectively. We check to see if the original request was an AJAX (XHR) one, and if so we respond with JSON. If it’s a failure we’re dealing with, we also include the exception message to give the user some feedback – note that this is a quick solution; it might be prudent to vet the message, or to translate it. I’ve also included a ‘success’ property, a convention that some JavaScript frameworks (like ExtJS) use to execute the appropriate callbacks. Finally, we wire up these handlers with the security component in our application’s security configuration.

The XHRCoreExceptionListener class code is triggered whenever a kernel exception occurs – check the service definition for this listener and you’ll see we tell Symfony to call its onCoreException method whenever an exception event is fired. Again, we only want to act if the request that caused the exception was an AJAX one. Assuming it was, we try to work out the status code to return from the exception code – if it’s a valid HTTP code, we use that; otherwise we assume a server error (500). As before, we’re reusing the exception message to provide context – but in this case, where the exception could relate to anything in the system (like a database query), we’d really want to be cautious about exposing it to the user, so this code certainly isn’t suitable for a production environment.

That’s it for the server, now for the client. All the functionality is in src/Acme/DemoBundle/Resources/views/Welcome/index.html.js.

First we define a global error handler, which will be fired whenever an uncaught AJAX error occurs:

View the code on <a href="https://gist.github.com/4348746">Gist</a>.

If the problem is that the user does not have a valid authentication token, we invite them to log in.

View the code on <a href="https://gist.github.com/4348760">Gist</a>.

The login process is interesting – the modal login form is essentially a duplicate of the HTML one, but lacks the CSRF token which Symfony automatically injects into the HTML form. Since that token helps prevent request forgery I didn’t want to remove it, so the approach I took was to request the HTML login page, capture the value of the CSRF field from the response, then use that in a second request to actually authenticate the user. This meant I could reuse the same authentication code on the server.
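The token capture itself is just a matter of scraping the hidden input out of the login page’s markup. A hedged sketch against an inline HTML sample (the _csrf_token field name matches Symfony’s form_login default; your form may use a different name):

```shell
# A fragment of the kind of markup Symfony renders into the login form.
html='<form action="/login_check" method="post">
  <input type="hidden" name="_csrf_token" value="abc123" />
</form>'

# Capture the token value, ready to replay in the real login POST.
token=$(printf '%s' "$html" | sed -n 's/.*name="_csrf_token" value="\([^"]*\)".*/\1/p')
echo "$token"

# Second request (app.local and the field names are placeholders):
# curl -s -b cookies.txt \
#   -d "_username=user&_password=pass&_csrf_token=$token" \
#   http://app.local/login_check
```

In the demo app this happens in JavaScript rather than shell, but the two-step shape is the same: GET the login page, extract the token, then POST the credentials with it.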

The demo app also includes two other demonstrations – making a valid request (which will fail, and force a login, if the user does not have a valid session) and an invalid one (which displays the error message returned by the server). There’s not much to say about those, the code should be self explanatory.

That’s it really. The code is a little rough and ready, but hopefully it’ll give you enough to go on if you’re trying to do something similar!

Symfony2: Managing a User Entity Role With a Form Event Subscriber

I’m working on a Symfony2 application which makes use of roles to manage what Users of the system are able to do within the app. Symfony places no limit on the number of roles a User can have, but in the context of my application there are only two – a base user (ROLE_USER, in Symfony parlance) and an administrator (ROLE_ADMIN). As is Symfony custom, all users possess the default role, but the administrator role is granted on a per-user basis. I’m using the FOSUserBundle and Doctrine, so a User’s roles are stored within an array on that User’s Doctrine entity.

I began crafting a form (using the Symfony Form component) to manage user details. When creating forms that are bound to Doctrine entities I’ve generally found that I’ve needed to do little in the way of form customisation; the form component does a good job of figuring out which form fields to use based on the type of the variable it is given. So a name, being a string, gets modelled as a text field, while a registration date, being a DateTime object, causes a series of select boxes to be created which allow dates and times to be entered. So far so good.

But of course there has to be a reason for a blog post, and this is mine: I wanted a checkbox to toggle the administrator role on/off for each user. Since roles are stored in an array on the bound entity, the default is to render a collection of fields, which leads to an array of values being submitted by the form. Trying to modify the field definition to a ‘single’ field, like a checkbox, led to an error, as the Form component doesn’t know how to map an array to a single value. A multi-value field wasn’t an option as I didn’t want to expose the roles (which wouldn’t necessarily make sense to an end user), and I didn’t want to get into messing about with Twig templates if I could help it; a bit of Googling led me to this article in the cookbook, which describes how to dynamically add elements to forms using a Form event subscriber; this turned out to be just what I needed.

Reading through the cookbook article, the following plan took shape: when the form is created, add a checkbox to the form which is ticked depending on whether the User object bound to the form has the administrator role. When the form is submitted, if the checkbox is ticked, grant the user that role. If it is not, remove the role from their array of granted roles.

To implement this, I needed two classes; a Form class, of course, and a new class that implements Symfony\Component\EventDispatcher\EventSubscriberInterface and subscribes to events on the form. As in the cookbook example, when the ‘preSetData’ event fires I’d be adding a field to the form. In addition, when the ‘bind’ event fires, I’d be using the value of the field to modify the bound object, in this case the User being managed. This is what I ended up with:

View the code on <a href="https://gist.github.com/4051034">Gist</a>.

I’m manually adding the subscriber to the form in UserType::buildForm(), which feels a bit weird when the usual approach would be to define a service and tag it appropriately. I’m not sure if that’s possible with Form subscribers, that’s just how the cookbook approach went. I’d imagine there are other ways I could have achieved the same result, but I like this approach because I don’t have to modify my User class at all, and I don’t have to delve into form field customisation.

The only downside I’ve found so far is that there doesn’t seem to be a way to order the fields that get added in the listener – they are always placed at the start of the collection, regardless of when the subscriber is attached to the form. This means that some template work is necessary to place the field as desired.