For various complex reasons, it is currently not practical to run our application on a laptop. (Primarily but not limited to: there isn't a "subset" of the large database, the web heads don't have a simple configuration, and much of the subset of CPAN we use doesn't always compile easily or quickly on Windows or OS X.)
SO! Everyone gets 1 or more development VMs upon which to do their development.
Which is great, except our current development platform is CentOS 5. So that awesome MacVim config you carefully cultivated at your last gig? Doesn't work.
Most of the devs do one of three things:
- Run a Samba server on their dev VMs, and mount the share locally
- Run FUSE SSHFS, and mount the remote tree locally
- Use an editor that works entirely over a remote connection
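For reference, the SSHFS option is roughly a one-liner. A hedged sketch, where the hostname `dev1.example.com` and the paths are placeholders I made up, not our real setup:

```shell
# Mount the remote checkout locally over SSH (requires FUSE + sshfs installed).
# Hostname and paths are placeholders; substitute your own.
mkdir -p ~/mnt/dev1
sshfs dev1.example.com:src/app ~/mnt/dev1 \
    -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3

# ...edit files under ~/mnt/dev1 with your local tools...

# Unmount when done (Linux; on OS X use: umount ~/mnt/dev1)
fusermount -u ~/mnt/dev1
```

The `reconnect` and `ServerAlive*` options paper over some, but definitely not all, of the dropped-connection pain.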
UGH. I’ve been doing that last one, but it’s not without problems: I have to sync my configs everywhere, making sure to write stuff that works on both Mac OS X and Linux; running Git locally – especially really great tools like Sourcetree – isn’t feasible, so you’re limited to doing everything on the command line; and you’re constantly losing connections when you switch networks (or even when your machine sleeps during a long lunch). You lose local indexing; you lose the whole toolchain you’ve built up over the past few years.
So I have an idea. I remembered Facebook’s Watchman, a file-watching thingee. Why not just keep a local Git repo/working tree, and have it “ship” changes to my dev as they happen?
After all, the dev VMs are already configured to watch for file changes, and if they get unruly we have a quick command to bounce them. The amount of front-end work I do is limited – I can basically watch only .pm and .pl files, and there’s no build step once they’re delivered to the dev.
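As a rough sketch of the Watchman idea (paths and the `dev1.example.com` hostname are placeholders, and I haven’t battle-tested any of this): register the local working tree with Watchman, then add a trigger that fires an rsync script whenever a .pm or .pl file changes.

```shell
# Register the local working tree with Watchman (path is a placeholder).
watchman watch ~/src/app

# On any change to *.pm / *.pl files, fire a tiny ship script.
# Watchman appends the changed filenames as extra arguments; they are
# ignored here, since rsync re-scans the whole tree, which is cheap.
watchman -- trigger ~/src/app ship-to-dev '*.pm' '*.pl' -- "$HOME/bin/ship-to-dev.sh"

# ~/bin/ship-to-dev.sh could be as dumb as:
#   #!/bin/sh
#   exec rsync -az --delete --exclude=.git ~/src/app/ dev1.example.com:src/app/
```

With an SSH key and a persistent ControlMaster connection, each sync should be well under a second for a tree this size.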
Am I insane? Does this sound remotely plausible? Should I just develop in a simple, unadorned xterm and stop using nice, modern tools? What does everyone else do in this situation?
(Addendum: in theory this plan would also work transparently over VPN, without the latency gripes of a networked file system… I guess?)