Things I wish were represented in RPGs but never are

You’re running low on water and you know it’s not too many miles ahead but you have none now.

I am wearing everything I own and I’m still cold.

Why won’t this goddamn tinder catch.

This fucking axe is sharper than a razor and harder than a diamond, how did that hit just bounce off that goddamn tree with nary a nick.

Is it going to rain? Fuck it’s going to rain. But like, in an hour? Or 3?

I’m in great shape and fit, why are my thighs rubbing in an uncomfortable way.

Are we near the ridge? We should be. How much longer. Maybe an hour? Or 3?

My clothes are so wet from sweat, when the sun goes down, I will probably die of exposure. But FUCK I can’t get a fire started! (See above)

WHAT the FUCK was that NOISE?

Epilogue: well, that sucked

I got it working! I wrote a bunch of scripts to wrap Watchman and my workflow. It all went precisely to plan!
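
For flavor, the heart of it was in the spirit of this sketch – a Watchman trigger pointed at an rsync wrapper. (A minimal reconstruction, not the actual scripts; the hostname and paths are made up.)

```sh
#!/bin/sh
# sync-to-dev.sh -- minimal sketch. Watchman appends the names of the
# changed files as arguments; we ignore them and just mirror the whole
# tree, which is simpler and plenty fast at this scale.
exec rsync -az --delete --exclude '.git' ~/src/myapp/ devbox:src/myapp/
```

```sh
# One-time setup: watch the tree, then fire the wrapper on .pm/.pl changes
watchman watch ~/src/myapp
watchman -- trigger ~/src/myapp ship-it '*.pm' '*.pl' -- ~/bin/sync-to-dev.sh
```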

Except for one thing: our pre-commit hooks.

We have a thing for code quality, and so we have a HUGE set of Git hooks. The biggest are our pre-commit hooks, which check a whole slew of things. From brace style to cyclomatic complexity, they really give the code a deep look.
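
For illustration, a stripped-down sketch of the shape these hooks take – not our actual hook, which checks far more – might run Perl::Critic over the staged files:

```sh
#!/bin/sh
# .git/hooks/pre-commit -- toy version for illustration only
files=$(git diff --cached --name-only --diff-filter=ACM | grep -E '\.p[ml]$')
[ -z "$files" ] && exit 0
# perlcritic policies cover everything from brace style to cyclomatic complexity
perlcritic --severity 3 $files || {
    echo "perlcritic failed; fix the above before committing" >&2
    exit 1
}
```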

The problem is, our environment uses mod_perl, which doesn’t ship with macOS and doesn’t compile cleanly or easily, especially if you’re using Perlbrew to run an old-ass Perl.

Some of our commit checks rely on Apache2:: stuff to run.
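
The failure is trivial to demonstrate. On a Mac, a probe like this (Apache2::Const is a real mod_perl module; the rest is illustrative) dies before doing anything useful:

```sh
# Does this perl have the mod_perl bits the hooks need? On the dev VMs, yes.
# On a macOS Perlbrew perl: "Can't locate Apache2/Const.pm in @INC".
perl -MApache2::Const -e 'print "ok\n"'
```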

Sadface. Others more experienced than I ran into similar issues and bailed, too.

So I gave up. For the time being, my workflow will remain shitty, impractical, and inefficient. There is probably a way. It’s just not low-hanging fruit.

Remote vs Local Dev: HALP!

For various complex reasons, it is currently not practical to run our application on a laptop. (Primarily but not limited to: there isn’t a “subset” of the large database, the web-heads don’t have a simple configuration, and much of the subset of CPAN we use doesn’t always easily/quickly compile on Windows or OSX)

SO! Everyone gets 1 or more development VMs upon which to do their development.

Which is great, except our current development platform is CentOS 5. So that awesome MacVim config you carefully cultivated at your last gig? Doesn’t work.

Most of the devs do one of three things:

  1. Run a Samba server on their dev, and mount it locally
  2. Run FUSE SSHFS, and mount (see the sketch after this list)
  3. Use an editor that works entirely remotely
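
For reference, option 2 is about this much ceremony (hostname and paths are hypothetical; assumes FUSE and sshfs are installed locally):

```sh
mkdir -p ~/mnt/devbox
sshfs devbox:/home/me/src ~/mnt/devbox -o reconnect
# ...edit with your local tools as if the files were local...
umount ~/mnt/devbox    # fusermount -u ~/mnt/devbox on Linux
```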

UGH. I’ve been doing that, but it’s not without problems: I have to sync my configs everywhere, making sure to write stuff that works on both Mac OS X and Linux; running Git – especially really great tools like Sourcetree – isn’t feasible locally, so you’re limited to doing everything on the command line; and you’re constantly losing connections when you switch networks (or even when your machine sleeps while you’re at a long lunch). You lose local indexing, and you lose the whole awesome toolchain you’ve built up over the past few years.

So I have an idea. I remembered Facebook’s Watchman, a file-watching thingee. Why not just keep a local Git repo/working tree, and have it “ship” changes as they happen to my dev?

After all, the devs are configured to watch for file changes, and if they get unruly we have a quick command to bounce them. The amount of front-end work I do is limited – I can basically watch .pm and .pl files only – and there is no build step once files are delivered to the dev.
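
At its most minimal, the idea is roughly a one-liner – hostname and paths hypothetical, and it assumes a watchman-make recent enough to have `--run`:

```sh
# Re-run the rsync whenever a .pm or .pl file changes anywhere in the tree
watchman-make -p '**/*.pm' '**/*.pl' \
  --run 'rsync -az --delete --exclude .git ~/src/myapp/ devbox:src/myapp/'
```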

Am I insane? Does this sound remotely plausible? Should I just develop in a simple, unadorned xterm and stop using nice, modern tools? What does everyone else do in this situation?

(Addendum: in theory this plan will work transparently over VPN too, without the latency gripes of a networked file system, I guess?)

White

If my recent job-hunting experience taught me one thing, it’s that I’m never again doing whiteboard coding to try to get a job. Whiteboard coding is beyond stupid, and companies that practice it – that use it as a core indicator of ability – are at best misguided, at worst broadly unconcerned with software quality.

There is an exception, I suppose: whiteboarding a block diagram of something – “design me an N-tier web service for foobar blah etc” – is a potentially useful exercise. It is entirely relevant to things programmers, devops, and others do every day. A conversation about lots of useful things can result from that.

But coding? Writing a reasonable facsimile of working code off the top of your head, without a single reference, without the ability to drop into a REPL or gin up a test case or 3? No man pages, no Stack Overflow, no quick convo in the Slack channel, no hallway meeting? That’s not how code gets written in the real world, so why is it what you’re testing in a candidate?

Give a take-home assignment that tests their actual ability to write code. Bring them in for a code review and discuss software quality and design. If you absolutely MUST make them do a live test, let them use their laptop and preferred environment. (You probably DON’T need to do this, though.)

I feel reasonably comfortable saying that the first thing I’ll ask a prospective employer is, “Do you require whiteboard coding in the interview?” An affirmative answer is a clear sign not to move forward.

The whiteboard code test: not even once.

Come and Go

Take your mind back to around 2005-2006. The world was very simple: the “enterprise” used a Java stack or a Windows stack, and the rest of the world used … whatever. Mostly, the non-enterprise world was Perl, Python or PHP, with a smattering of weird stuff here and there.

Ruby hit it big with Rails around 2006, and you could not read a single tech blog or news aggregator without running into references to Rails or `gem install`. Everyone was cranking out code into the green-field Ruby ecosystem. The masses of tech punditry wrote terabytes of text about how Rails was going to infiltrate the enterprise soon, and how in-demand Ruby and especially Rails programmers (now called “ninjas” and “rockstars”) were.

It never happened, of course. Rails remains a popular and excellent choice for a wide array of applications on the web, but its largest users (e.g., Twitter) have since abandoned it. They did so for many reasons: some good, like performance, and some not-so-good, like …

In 2009 a weird fellow called Ryan Dahl released a weird little hack that mated the Chrome JS runtime with a low-level I/O API. The result was Node.js. Now you can run Javascript – more or less the same JS you ran in the browser – **outside** the browser. You can write web servers in it! You can have your *front end developers* work on the back-end apps, because they all understand Javascript!

Well, everyone dropped their in-progress Rails app like a hot rock and picked up Node, especially after npm 1.0 landed in 2011. Here was a brand-new green-field package-manager ecosystem for early adopters to stake a name and a claim.

You couldn’t read a single tech blog or news aggregator without running into references to ExpressJS or `npm install`. Everyone was cranking out code into the green-field Node.js ecosystem. The masses of tech punditry wrote terabytes of text about how Node.js was going to infiltrate the enterprise soon, and how in-demand Javascript and especially “full stack” programmers (adopting the mantle of “ninjas” and “rockstars”) were.

It never happened, of course. Node.js remains a popular and excellent choice for a wide array of applications on the web, but its largest users (e.g., TJ Holowaychuk) have since abandoned it. They did so for many reasons: some good, like performance, and some not-so-good, like …

In 2009 a bunch of programmers at Google decided they wanted a better way to write the C/C++ programs that drive the core of Google’s stack. The result was Go …

**And on and on and on**. You can play this stupid game all the way back to ENIAC.

If the language your tech team is pushing is the one on top of the blogs and news aggregators, there’s a non-zero chance they’re making the decision in part, if not wholly, because of perceived pressure to be “current”.

Make no mistake: everyone is guilty of this. Moreover, each of these “foo is better than bar” events **does** move the bar forward in some way. Not that Go is better than Node, which is better than Ruby, but each has a chance to address flaws inherent in the design of the platforms that came before it.

That said, I find that what is now often referred to as “Hacker News-driven development” is far more difficult and dangerous than just doing the hard work of fixing bugs. “But Foo has far greater support for concurrency!” is a great argument if your metrics show concurrency is a problem; it’s sloppy thinking if you have no users and just want a new toy.

Moreover, HN-driven development often results in big blind spots with the platforms you have. A great example is Python. A lot of people went from Python 2.x to Go, citing bugs, performance problems, or a need for greater concurrency, without ever testing out the asyncio stuff in Python, or checking to see if their pet bugs were fixed. Go was a shiny new toy with the intellectual thrill of learning a new thing; porting to Python 3 was just hard, possibly boring work. Who needs *that* shit?

Yes, I think Go is a bad language that ignores decades of programming-language research to solve a specific problem Google has. Google’s massive rep as a tech innovator makes everyone think it’s good: after all, Google uses it!

But the point of this rant is not to criticize Go; it’s to illustrate that “Go is going to solve all our problems!” is a bunch of bullshit, because the most likely scenario is that in 2-3 years the newest, hippest platform (an out-of-left-field Perl resurgence? Elm? INTERCAL?) will suddenly become the solution to all our problems.

Theory: tech doesn’t matter

Unpopular opinion: for the bulk of projects/applications, lower-level platform decisions don’t matter. 

Picking Go over Python won’t mean anything if your team is happier with Python, for example. And I likewise posit that “we switched to foo and saw huge gains!” is in part just the nerd-boner many teams get when they 1) have new toys and 2) get to play with the crap they keep reading about on HN.

Some choices will always matter, but those are often obvious. You can’t use Redis as an RDBMS, and all the enthusiasm in the world won’t fix that. But there are usually very few of those decisions in a project.

I don’t like Go

As long as I’m complaining …

I really don’t like Go. I mean, really, profoundly, strongly, whatever. I think it’s very bad and most of its proponents are so very, very wrong.

Here’s why.

I don’t write C programs. Go is often called things like “C for the 21st century”. Great! Cool! That’s awesome!

But I don’t write C programs. Why would I want a better C when I don’t need C to begin with?

To be sure: if I had to write a C program, I’d probably go straightaway to Go. And why not? It’s a better C, so picking it is obvious. I’d only pick C if the C program I needed to write is, like, a language interpreter or some embedded thing or something on a platform Go doesn’t exist on.

But I don’t have to write C programs. I get by just fine writing Python programs, and Javascript programs, and shell scripts, and whatever else random nonsense shows up in my TODO.

But fast, concurrent network servers! Go people are quick to point out. Yeah, yeah, yeah: I have a hand-motion for that. Your stupid app doesn’t have any users, and concurrency is not a simple magic incantation. You probably don’t even know where the bottlenecks in your app are, so your breathless yammering about concurrency is pointless. Blab about it at the next meetup; I don’t have time.

But strong typing!! Yeah, well. My life as a web programmer consists of ints and strings; I guess I am just lucky. Sometimes I group them into lists or maps, but really, ints and strings, dude. I don’t need typing enforced at compile time for the date field of a web form.

Now, a quick point: I write a lot of intranet-type apps. They are for specific users and not exposed to the filthy internet. That means I rarely have to worry about fuzzing my stuff or malicious input, and I can demand a specific browser if I want to employ some feature. I have it good. If I couldn’t get away with all that, I’d be more in favor of strong typing.

And as long as we’re complaining about strong typing, point the second: I’ve been doing this long enough that I can spot a fad from a million miles away. Five years ago we didn’t need strong typing, we just needed better tests and CI. Before that, we did (Java). Before that, we didn’t (Perl). And so on, and so on.

Go and its strong-typing proponents are just another fad. Strong typing, like TDD and Agile and a zillion other programming methodologies, is great. Wonderful. I’ve seen half a dozen of these come and go in my day (remember XP? We didn’t need tests OR strong typing back then!).

Telling me the only way to write correct programs is X gets a big BFD from me. I suck, you suck, we all suck.

I could go on, but dinner’s almost ready, so what better time to end this little rant? I don’t like Go. The end.