Craig Marvelley

Software and such

UISplitViewController Collapsing and Separating in iOS 8

The new UISplitViewController class and its delegate methods in iOS 8 are interesting, and caused me a lot of head scratching last week. The problem Brent Simmons describes here, where after rotating the device from a regular to a compact size class and back again iOS uses the topmost master view controller as the new detail view instead of the previous detail view, is roughly the same issue I had.

As far as I can tell so far, I think the following approach solves the issue. Note that I don’t think I have the same setup as Brent - I have a navigation stack in each of the master and detail sides - but I think the general approach is sound, based on advice from Apple in the “Building Adaptive Apps with UIKit” session from this year’s WWDC, and the “AdaptivePhotos” sample app code they provide.

My UISplitViewController delegate looks like this:

  • In the ‘collapse’ method, which is called when the split view collapses to a compact size class (e.g. when the device rotates to portrait), we disregard the detail view controller and use the master view controller if the detail view has no content
  • In the ‘separate’ method, we look to see if there’s a navigation controller which is displaying a NoteViewController (this controller is currently used exclusively as a detail view, so it’s a safe bet). If it exists, we use it - otherwise we retrieve the ‘stock’ note view controller stack from the storyboard and use that.
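A sketch in Swift of what a delegate along those lines might look like. This is illustrative rather than my exact code: NoteViewController comes from the description above, but the hasContent property and the “NoteNavigation” storyboard identifier are assumptions made for the example.

```swift
import UIKit

class SplitDelegate: NSObject, UISplitViewControllerDelegate {

    // Collapsing to a compact size class. Returning true tells UIKit we've
    // handled it (the empty detail is discarded and the master shown alone);
    // returning false asks for the default merge behaviour.
    func splitViewController(splitViewController: UISplitViewController,
        collapseSecondaryViewController secondaryViewController: UIViewController,
        ontoPrimaryViewController primaryViewController: UIViewController) -> Bool {
        if let navigation = secondaryViewController as? UINavigationController,
            note = navigation.topViewController as? NoteViewController {
            return !note.hasContent // keep the detail only if it has content
        }
        return true
    }

    // Separating back out to a regular size class. Reuse a navigation stack
    // showing a NoteViewController if one exists; otherwise fall back to the
    // 'stock' detail stack from the storyboard.
    func splitViewController(splitViewController: UISplitViewController,
        separateSecondaryViewControllerFromPrimaryViewController primaryViewController: UIViewController) -> UIViewController? {
        if let navigation = primaryViewController as? UINavigationController {
            for child in navigation.viewControllers {
                if let detail = child as? UINavigationController
                    where detail.topViewController is NoteViewController {
                    return detail
                }
            }
        }
        let storyboard = UIStoryboard(name: "Main", bundle: nil)
        return storyboard.instantiateViewControllerWithIdentifier("NoteNavigation") as? UIViewController
    }
}
```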

This approach seems to have worked well so far. The majority of issues I’ve found have been on the iPhone 6 Plus, which combines the single-hierarchy approach of the smaller form factor devices in its compact (portrait) mode with the side-by-side dual-view approach of the larger form factors in its regular (landscape) mode - this hybrid strategy seems to require more direction on our part, whereas elsewhere iOS tends to guess right on its own.

A final note - having to manually ensure the back button exists and is configured on the detail view controller is a bit icky, but I needed to do this to avoid a crash. I’d love to hear from anyone with more elegant ways to handle this.

NOOB 0.9.19 - ‘You Shouldn’t Have to Click It’

Gav and I mainly talk about software company processes and principles, with a little bit on communication tools and AWS thrown in for good measure.

This is the first podcast we’ve recorded remotely, since Gav moved to Bristol. Following this Macworld post’s directions seemed to work; we each recorded our own audio via QuickTime, and synced it up with our “master” recording from Skype, made possible via Call Recorder. I sequenced it all in GarageBand, with the odd final edit in Audacity.

We’re five episodes in since we rebooted the podcast back in June. There’s still a lot of room for improvement (especially, for me, in my speech - I find it hard to speak in a slow cadence, and concentrating on that often distracts me from the topic at hand), but I feel that each episode has gained in professionalism, and they’re a lot of fun to do.

Check it out on Soundcloud here, or even subscribe on iTunes thanks to Soundcloud’s currently-in-beta integration with that service, which we’re fortunate to have been given access to.

Waiting for SSH

Update 2014/09/06 - Michael DeHaan mentions on Twitter that he adds in a few seconds of sleep with the pause module to ensure the SSH port is open.
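A task along those lines might look something like this (a sketch only; the ten-second delay is my own arbitrary choice, not a value from the tweet):

```yaml
# Sketch - the delay value is arbitrary, tune to taste
- name: Give sshd a few seconds to start accepting connections
  pause:
    seconds: 10
```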

I’ve spent the day provisioning a whole lot of EC2 instances with Ansible from a control machine in the cloud. This involves two stages: firstly an instance is launched, and then once SSH is available (using Ansible’s wait_for module), the second stage of more detailed provisioning begins.

An issue I’d experienced a few times previously, but had never been able to pinpoint, was that the wait_for module would often fail to identify that SSH was ready. My Ansible task looked like this:

- name: Wait until SSH is available
  local_action: 
    module: wait_for 
    host: "{{ item.public_dns_name }}"
    port: 22 
    delay: 60 
    timeout: 320 
    state: started
  with_items: launched_instances.instances

That task would often time out, yet in such cases if I immediately tried to SSH from the terminal it would succeed, which was odd indeed.

Today this behaviour was consistent, and I eventually realised that in the task I was using the instance’s public DNS name, whereas when I was connecting via the terminal I used the public IP address. Indeed, changing the task to use the IP address seems to have made the whole thing a lot more reliable:

- name: Wait until SSH is available
  local_action: 
    module: wait_for 
    host: "{{ item.public_ip }}"
    port: 22 
    delay: 60 
    timeout: 320 
    state: started
  with_items: launched_instances.instances

I’m guessing that on my Mac (where this was rarely an issue) the DNS cache updates quicker than it does on the control machine in EC2, where this problem was more frequent - using the explicit IP address renders the issue moot.

Never Out of Beta Returns!

I love podcasts. Since I moved house and gained a commute I’ve whiled away the time to and from work listening to The Talk Show, The Record, The Guardian’s Football Weekly, and a couple of other great shows. Enough that I’m barely able to listen to them all. They made me pine for Never Out Of Beta though, a podcast Gav and Carey started a couple of years ago that I and a few friends would guest on, which was great fun.

But my longing is over - Gav and I have rebooted the show, and the first episode is available now via Soundcloud!

A couple of notes:

  • Production “issues” (well, I forgot a bunch of audio equipment) meant that we had to record this one via my MacBook’s built-in microphone. It turned out ok, but we’ve got higher standards which hopefully we’ll get closer to next time
  • It took longer than I’d have liked to get the episode out because I’m learning the ropes when it comes to editing. The WWDC stuff is a bit old hat now, but get past that and I think there’s some interesting discussion around the DevOps mindset and provisioning with Ansible
  • An iTunes feed is coming as soon as I can sort one out - I listen to podcasts through the Apple Podcast app so I’m eager to make it available there
  • Inspired by The Talk Show’s recent switch we’re hosting the show on SoundCloud, which has been perfect so far

As with pretty much everything else Gav and I do, it might take a few iterations before this is something we’re really happy with, and feedback to aid us in getting there will be very much appreciated.

If you like tech, you might like this. Check it out!

Handling Interactive Ansible Tasks

I recently re-ran some Ansible provisioning scripts after upgrading the base box to an Ubuntu 14.04 image and found they stalled midway through. The cause? One task involved installing the PECL mongo module, and the installation process now prompts the user to decide whether or not to build with Cyrus SASL support. I couldn’t see a way to force a decision via the PECL installer, and Ansible can’t respond to the prompt, so the provisioning process hung while awaiting an answer.

I found two ways to solve this.

I’ve been expecting you

I did a bit of Googling and after coming across this post on the Ansible forum, I took a look at expect. Expect lets you script interactions with a spawned command, using regexes to match prompt text and sending appropriate responses. It’s a sound approach; I wrote a script that looked like this:

#!/usr/bin/expect

spawn pecl install mongo

expect "Build with Cyrus SASL"
send "\r"

expect eof

And executed it on the box using Ansible’s script module:

- name: Install PECL mongo extension
  script: install_pecl_mongo.expect

Mission achieved. I felt very pleased with myself for about 30 seconds, which is how long it took for Paul to wander over and tell me there was a simpler way to do it. Expect is a really good solution for scripting varied or complex responses, but I wasn’t faced with that problem here…

Yes man

In this specific case, all I wanted to do was answer a prompt which needed a yes or no answer, and I wasn’t concerned which I went with. There’s a Unix command called yes which “outputs an affirmative response, or a user-defined string of text continuously until killed”, which is just what I needed.

So now my task looks like this:

- name: Install PECL mongo extension
  shell: yes '' | pecl install mongo

Which continuously pipes an empty string followed by a newline into the install command (yes outputs ‘y’ by default, but here we pass it an empty string, so each prompt simply receives its default answer). For my simple use case it works perfectly, and is more straightforward than writing expect scripts.
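The behaviour of yes is easy to see in isolation; piping it through head takes a finite sample of its endless output:

```shell
# With no argument, yes repeats "y"; with an argument it repeats that string.
yes | head -n 2      # two lines of "y"
yes '' | head -n 3   # three blank lines - each accepts a prompt's default
```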

Local Provisioning With Ansible

I recently mentioned I’m going all-in on provisioning and running all my development environments in VMs provisioned with Ansible. My Ansible use doesn’t end there though - I’m also using it to provision my MacBook.

I’d read of a few other people doing this - Benjamin Eberlei has a particularly good post on the topic, and was pretty much the inspiration for me doing the same. Setting up a new machine can be a bit of a bore, and while in theory it only happens whenever I upgrade hardware, I have been caught out in the past by kernel corruption forcing a reinstall, and I’d like to minimise the hassle should it ever happen again. I have backups, but one of the few upsides of starting from scratch with a machine is that it’s an opportunity for a spring clean; a script that effectively lists all the essential components of my computer, and configures them to my taste, makes that so much easier.

Anyway. A couple of people mentioned they’d be interested in the playbooks I wrote to provision my laptop, so I’ve taken out anything personal to me and mirrored them on Github.

Currently the provisioning involves four things:

  • Install Homebrew and various brews
  • Install Homebrew Casks and various casks
  • Enable zsh as default shell and install oh-my-zsh
  • Install dotfiles to configure git, vim, etc. (my dotfiles repo is here)
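For flavour, tasks in such a playbook might look roughly like this (a sketch; the package and cask names are examples, not taken from my actual playbooks):

```yaml
# Illustrative sketch only - not the real playbook
- name: Install brews
  homebrew: name={{ item }} state=present
  with_items:
    - git
    - vim

- name: Install casks
  homebrew_cask: name={{ item }}
  with_items:
    - alfred
```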

It’s all straightforward, but a couple of implementation details:

  • I’ve ended up using Alfred as my OS X search tool because I store my casks under /usr/local with the other brew stuff and symlink to /Applications, and Spotlight doesn’t grok symlinks. Instead I configure Alfred to link in the cask files
  • I read in a few places on the web that the method I use to determine my current shell process - echo $SHELL - isn’t foolproof. It works for me, but YMMV

There’s more I could do in terms of what I manage, especially when it comes to dotfile configuration (incidentally one of the best things about automating provisioning in this way is you end up looking at other people’s approaches, and finding out about libraries and tools that you never even knew existed - Thoughtbot’s rcm is one such example here). Also I’m still finding my way with Ansible, so my playbooks can undoubtedly be improved, especially when it comes to idempotency. Nevertheless I’m really happy to have a record of my machine’s starting state, and will enjoy adding to it as time passes.

Vagrant 1.5 Syncing Situation


When starting work at BipSync I resolved to put the bits and bobs I’d learned about provisioning with Vagrant and Ansible into practice, and no longer host development environments locally on my MacBook. The reasons were twofold - firstly I wanted my development environment to mirror that of live as closely as possible, and secondly when previously hosting multiple, complex projects on one machine I experienced increased battery drain and frequent slowdowns caused by active background services that I often didn’t even need for my current task. It was a pain to keep track of everything, especially when multiple versions were required. So the attraction of having dev environments I could bring up and down, and if necessary occasionally trash, was a strong one.

I’d played with Vagrant / VirtualBox hosted projects on smallish codebases before, where the default VirtualBox synced folder support was sufficient; however when attempting to host and develop BipSync’s sizeable applications it was quickly clear that this combination wouldn’t be fast enough to be fit for purpose.

Since VirtualBox has been acknowledged as the slowest of Vagrant’s provider options, and we already had Parallels licences for the development of our Windows applications, next I tried out the third party Parallels plugin for Vagrant. Basically this was a non-starter - I immediately ran into bugs that were present in the project’s issue tracker, and the impression I got was that the plugin is still a work-in-progress. The plugin is overseen by Parallels so I’m optimistic that it’ll one day be a first-class provider, but as things stood it was unusable.

So I went back to VirtualBox (VMWare was another option but before forking out money on the software and accompanying plugin I wanted to exhaust all other options) and tried the other synced folder types. First I tried the new rsync support, which I’d read good things about. In terms of speed this was very quick because there isn’t really a share to speak of; shared items are copied to the guest rather than mounted. It requires a command to be run constantly, to watch synced files for changes and copy them to the guest, and this proved to be a problem; as issues raised by others attest, the approach wasn’t scaling to large (20k+ files) codebases, which is common in my experience. I tried excluding some folders to see if that would help, but it didn’t.

So after all that I ended up using the NFS sync option, which aside from requiring a password for sudo when booting the VM has proved trouble free, so far. Performance is good, both in terms of the applications themselves, and when browsing and modifying files in the share; I’ve seen PHPStorm occasionally lock up but then it always has ;) It’s certainly good enough to work with now, and I’ll be keeping an eye on the rsync and Parallels options as they should improve in time.
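The synced folder type is chosen in the Vagrantfile; a hypothetical excerpt covering the options discussed above might look like this (the paths and excludes are examples, not our actual config):

```ruby
# Hypothetical Vagrantfile excerpt - paths and excludes are examples only
Vagrant.configure("2") do |config|
  # rsync: fast, but needs `vagrant rsync-auto` running to watch for changes
  # config.vm.synced_folder ".", "/vagrant", type: "rsync",
  #   rsync__exclude: [".git/", "logs/"]

  # NFS: requires a private network, and sudo prompts for a password on boot
  config.vm.network "private_network", type: "dhcp"
  config.vm.synced_folder ".", "/vagrant", type: "nfs"
end
```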

(“Moonlight Synchro” image courtesy of Chris Phutully. See license.)

Goodbye Box UK, Hello BipSync

For the last 7 years I’ve considered myself lucky to have been employed by Box UK. When I joined in 2007 I’d spent the previous 3 years writing what I consider ‘despite software’. Software that did its job despite the efforts of the developer behind it. I was guilty of pretty much every bad practice at some stage; a direct result of me being the sole developer within both my company and my social circles. I didn’t know better and had no-one to tell me otherwise, but by 2007 I knew I needed to address that by working for a company in which there were lots of people smarter and more experienced than me. To “aim to be the dumbest person in the room”, as it’s often put - and never better than by my colleague and mentor at Box UK, Carey Hiles, speaking about being a lone coder in the first Unified Diff meetup put on several years later.

My first assignment at Box UK was making some tweaks to a web form on Promethean’s e-commerce site. My last assignment was rearchitecting the technology behind careerswales.com, a multi-application portal serving millions of users, which is destined to help shape the lives of the people of Wales for years to come. In between I’ve created mobile applications, written web services, ported databases, implemented designs, discovered search platforms, pushed to queues, knocked out specifications, made many a calculation and finally - finally - got to grips with git. I’ve taught schoolchildren how to code. I’ve also shared more in-jokes than you could shake a stick at. Seriously.

As a software developer, I can’t think of a better place to learn one’s trade; every couple of months a new project comes along which will either improve existing skills, or demand new ones. Through the work that I’ve done there I’ve had the opportunity to travel the world, meet heroes, and speak at conferences. My colleagues have been everything I’d hoped they’d be - smart, friendly, and more than happy to help. Most importantly, they’re passionate. Box UK have aimed to only hire those who care about the work they do, and that’s clearly reflected in what we’ve achieved.

TL;DR: I’ve had an amazing job these last 7 years, and have loved every single minute of it.

After seven years I’m ready for a new challenge, and a wonderful opportunity at a startup called BipSync means it’s time to move on. I’ll be working with some former colleagues on some exciting software in a domain that’s fresh and interesting. I can’t wait to get stuck in, and hopefully share my experiences. I’d like to thank all at Box UK, especially Benno and those I worked closely with over the years, for all the opportunities I was given and the experiences I’ve had. You’ve undoubtedly made me a better person, and I wish you all the best for the future.

Restore Missing Audio in iMovie for Mac


I recently got a new MacBook (running Mavericks) and wanted to transfer the iMovie projects from my older machine (running Mountain Lion). The new machine featured the most recent version of iMovie (iMovie for Mac, version 10.0.2) while my old one was on 9.0.9.

I loaded the projects into iMovie for Mac by copying over the iMovie Events, iMovie Original Movies and iMovie Projects folders via AirDrop. Upon starting iMovie for Mac it prompted me to upgrade my projects to the new format. After some time a dialog appeared notifying me that some events had not been found, and that the upgrade had not been successful.

It turned out that the movie events were fine, but the background audio I’d added (via iTunes) was missing because I’d not copied over my iTunes folder (and hadn’t planned to, since I now use iTunes in the Cloud). Attempting to open a ported project gave me a dialog along the lines of ‘Upgrade now, with missing events, or upgrade later’. The choice was inconsequential: even if I chose ‘Upgrade later’, put the tracks back where they were expected, and then opened the movie, it didn’t seem to pick up on the fact that they were now there. The waveforms remained blank, and the audio didn’t play. Of course, in most cases I didn’t even know which files were missing, because most of these movies were put together months ago.

I eventually fixed it by painstakingly downloading and importing each audio track into the movie alongside the original clip, which triggered a reload of the original. The second clip could then be removed, leaving the original, which now played. It would be nice if iMovie for Mac had a ‘reload all events in this movie’ feature (perhaps it does and I couldn’t find it - I’ve never found iMovie to have the most intuitive of interfaces). Thankfully I only had 20 or so tracks to fix, but for some more hardcore users it could be a lot worse.

(Waveform image copyright Bernard Goldbach)

Understanding DNS

I’ve never really taken the time to read up on how DNS works. Ten years of web development mean I’ve gradually built up a pretty good idea through trial and error, but I’ve never felt 100% confident that I know what I’m doing when registering and administering domain names, particularly when it comes to more obscure record types like SPF. Things work, but I don’t really know why.
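To make the SPF example concrete: SPF policies are published in DNS as TXT records. A hypothetical zone file entry (the domain and policy here are illustrative) looks like this:

```text
example.com.  3600  IN  TXT  "v=spf1 include:_spf.example.net ~all"
```

Here v=spf1 marks the record as an SPF policy, include: delegates to another domain’s policy, and ~all soft-fails mail from any sender not covered by it.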

I was shopping for a domain name the other day and while on the DNSimple website I was invited to receive a series of emails that offered to outline the DNS basics. I batted away my initial reluctance to sign myself up for spam, and I’m glad I did - the lessons are nicely written, short and to the point, and thankfully aren’t just an exercise in advertising for the registrar. I’ve learned a couple of things and reinforced my knowledge of others. I can’t find a signup link now, but I do recommend them to anyone who wants to learn more about DNS.