It took a lot of planning, prototyping, testing and trial runs, but we’ve finally completed our migration to Amazon Web Services (AWS).
What does this mean for you? It means significantly lower chances of downtime (our systems are waaaaay more fault-tolerant now), plus tighter security (we were always secure, but now even more so).
On our end we get to breathe easier knowing that our systems are fully redundant and scalable to meet demand, and we can start deploying new updates with confidence.
The rest of this post is a bit inside baseball, but I thought I’d share some details as we get asked about our setup quite often in support-land and at tech meetups.
About a year ago we began bumping up against walls that impeded our ability to move forward. These fell into three broad categories: scalability, agility, and (as a result) marketing. Arguably these are good problems for a company to have (well, maybe not the marketing part), and while we had theoretical solutions for everything, the practical matter of implementation took quite a bit of time, a lot of learning and a few leaps of faith.
On the infrastructure front, our production architecture at Linode, while hyper-optimized (running Arch Linux + nginx), did not lend itself to easy scaling. Given a team of sysadmins we could have overcome this, but we’re app developers at heart. Enter AWS: Elastic Beanstalk, EC2, RDS, SES, S3 and Route 53 to save the day. While we still have a few things to tweak, we couldn’t be happier with the move. Major props to the folks at TriNimbus for giving us a hand.
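One of the nicest parts of Elastic Beanstalk is that scaling and deployment behavior is mostly declarative configuration rather than hand-rolled sysadmin work. As a flavor of what that looks like, here’s a minimal `.ebextensions` sketch — the numbers and filename are illustrative, not our actual production settings:

```yaml
# .ebextensions/autoscaling.config — illustrative values only
option_settings:
  aws:autoscaling:asg:
    MinSize: 2      # always keep at least two instances for redundancy
    MaxSize: 8      # allow scaling out under load
  aws:elasticbeanstalk:command:
    DeploymentPolicy: Rolling   # roll updates out without taking the app down
```

Drop a file like this into your application bundle and Elastic Beanstalk applies it on the next deploy — no SSH sessions required.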
On the agility front, our inability to efficiently work on isolated feature development was killing us. Branching and merging is not an area where SVN excels. We’re now Git across the board and cannot believe we waited so long to switch. We still use Beanstalk for our repository origins because, well, they’re awesome. On another note, in hindsight we made the mistake of outsourcing some of our mobile app development. We were running low on internal manpower (our time was getting sucked into putting out fires), but in the end having a portion of our development outsourced really killed our iterative release cycles. Because we couldn’t release feature updates simultaneously across all platforms, we got stuck in the mud. So we’re bringing development back in-house. All of it.
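For anyone still on SVN and wondering what the fuss is about: the whole isolated-feature workflow that pushed us to switch boils down to a few cheap commands. A quick sketch (repo contents and branch names are made up for illustration):

```shell
# Sketch of the feature-branch workflow Git makes routine.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"

echo "stable" > app.txt
git add app.txt
git commit -qm "baseline release"

# Isolated feature work: branching is instant and local.
git checkout -qb feature/push-notifications
echo "stable + notifications" > app.txt
git commit -qam "add push notifications"

# Back on the mainline, the feature lands with a single merge.
git checkout -q -
git merge -q feature/push-notifications
cat app.txt
```

Because branches are just pointers to commits, creating and merging them costs almost nothing — which is exactly where SVN’s copy-based branching model made us hesitate to branch at all.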
On the marketing front, being in a scalability bind we weren’t in a position to bring hordes of new users into the mix — though we did welcome thousands of new users through organic, word-of-mouth referrals. Now that we have the infrastructure in place to support rapid growth, we’ll be kicking things up a notch to more aggressively get the word out.
All of which is to say that it took the better part of a year to dig ourselves out of the hole we made for ourselves. Building good software and running a solid business both involve continual improvements and course-corrections, and it’s nice to be back on a path with daylight again. We’re all pretty stoked for what comes next.