I do not understand PbtA at all.

I really, really want to understand Powered by the Apocalypse (PbtA) games, but I just don’t. I’m starting to think I’m either really dumb or so old that this New Fangled Way is beyond my old grognard/neckbeard skills.

Now, to be clear: the basic premise is easy to grasp. In “old school” RPGs, you are presented with a challenge: say, an orc guarding a door. The players and GM then shift into a semi-simulation of the fight. In the classic method, each side (in PbtA parlance) “makes moves”, or executes various rules systems, with the roll of the dice standing in for the randomness and chaos of the real world. GM rolls well, you get hit and take damage; you roll well, the orc dies (or you sneak by him or you con him into opening the door, or whatever).

In my day (rattles cane) if you – the GM – got a roll you didn’t like, well, you were probably rolling behind a screen, so you’d just fudge it. Maybe killing a PC right now was the wrong thing to do, or maybe the player did a really funny or interesting job pretending to fast-talk the orc; whatever. The point is, you made the call and went on with the game.

In PbtA, all that is … simplified? Streamlined? Systematized? All of the above? I get it: the PCs have moves that are analogous to the volume upon volume of rules in classic systems. “We are going to sneak past the orc”, they say, and they roll their “Sneak Past Stuff” move. If they succeed, they succeed; and if they fail, or partially succeed, then something appropriate happens.
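For what it’s worth, the underlying mechanic really is that simple. Here’s a minimal sketch of the standard PbtA resolution – 2d6 plus a stat, where 10+ is a full hit, 7–9 a partial hit, and 6 or less a miss. The move name comes from the example above; the +1 modifier is made up:

```perl
#!/usr/bin/env perl
# A minimal sketch of the core PbtA roll: 2d6 plus a stat.
# The +1 modifier is an invented example.
use strict;
use warnings;

sub make_move {
    my ($stat) = @_;
    my $roll = (int(rand 6) + 1) + (int(rand 6) + 1) + $stat;
    return $roll >= 10 ? 'full success'
         : $roll >= 7  ? 'partial success: you do it, at a cost'
         :               'miss: the GM makes a move against you';
}

print 'Sneak Past Stuff: ', make_move(1), "\n";
```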

This bit, I get. It’s nearly identical to the old-school way, just clarified and formalized. It works. I get it.

What I don’t get is the constant series of metagames that PbtA systems seem to want to introduce, to the point of having hundreds of pages of rules for … frankly, I don’t even know what.

Consider ‘Blades in the Dark’. It consists of a “free play” mode, a “score” mode, and a “downtime” mode. Uh, ok? It also has “flashback scenes” and all sorts of other strange modes of interacting, each involving different types of rolls and systems.

[Sidebar: I’m told that Blades in the Dark is a pretty “advanced” hack of PbtA, and possibly not ideal for a first play-through. “Just play DW”, they say. To which I say, I should not have to have a fundamental grounding in “Chutes and Ladders” before taking on Advanced Squad Leader.]

As I understand it, the modes of play are roughly like this:

  1. Free play: “We go talk to a dude about a door we heard we want to open.”
  2. Score: OK, roll to open the door. Roll to get what is behind the door. Roll to carry off things. Roll to escape.
  3. Downtime: roll to fence the goods. Roll to heal. Role-play all of this.

That needs … systems? And I’m leaving out flashbacks and numerous other pieces of minutiae I can’t even remember.

The list goes on, and on. There’s a bunch of abstractions I can’t wrap my head around. Combat in PbtA has a strange set of moving parts I can’t quite grasp. I get not detailing each swing of a sword or pull of a trigger, but at times it feels like a flip of a 3-sided coin (you take a hit! you don’t take a hit! you sort of do or don’t!).

I could keep rambling on, but I think you get the idea. Am I just over-thinking this? Am I too grounded in old-school D&D?

Things I wish were represented in RPGs but never are

You’re running low on water. You know there’s a source not too many miles ahead, but you have none now.

I am wearing everything I own and I’m still cold.

Why won’t this goddamn tinder catch.

This fucking axe is sharper than a razor and harder than a diamond, how did that hit just bounce off that goddamn tree with nary a nick.

Is it going to rain? Fuck it’s going to rain. But like, in an hour? Or 3?

I’m in great shape and fit, why are my thighs rubbing in an uncomfortable way.

Are we near the ridge? We should be. How much longer. Maybe an hour? Or 3?

My clothes are so wet from sweat, when the sun goes down, I will probably die of exposure. But FUCK I can’t get a fire started! (See above)

WHAT the FUCK was that NOISE?

Epilogue: well, that sucked

I got it working! I wrote a bunch of scripts to wrap Watchman and my workflow. It all behaved precisely according to plan!

Except for one thing: our pre-commit hooks.

We have a thing for code quality, and so we have a HUGE set of Git hooks. The biggest are our pre-commit hooks, which check a whole slew of things. From brace style to cyclomatic complexity, they really give the code a deep look.
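To give a flavor of what I mean – and this is a made-up, minimal stand-in, not our actual hook – a pre-commit check built on Perl::Critic (whose policies cover everything from layout, via Perl::Tidy, to cyclomatic complexity) might look something like this:

```perl
#!/usr/bin/env perl
# .git/hooks/pre-commit -- a minimal, hypothetical stand-in for our real hooks.
# Critique every staged Perl file; a non-zero exit aborts the commit.
use strict;
use warnings;
use Perl::Critic;

# Files staged for this commit (added/copied/modified), Perl files only
my @staged = grep { /\.p[lm]\z/ }
             split /\n/, `git diff --cached --name-only --diff-filter=ACM`;

my $critic = Perl::Critic->new(-severity => 3);
my $failed = 0;

for my $file (@staged) {
    my @violations = $critic->critique($file);
    next unless @violations;
    $failed = 1;
    print "$file:\n", @violations;
}

exit $failed;   # exit 1 == commit rejected
```

The point being: the checks are ordinary Perl, so they can pull in anything our application code does – which is exactly how some of them ended up depending on mod_perl.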

The problem is, our environment uses mod_perl, which doesn’t ship with macOS and doesn’t compile cleanly or easily – especially if you’re using Perlbrew to run an old-ass Perl.

Some of our commit checks rely on Apache2:: stuff to run.

Sadface. Others, more experienced than I, ran into similar issues and bailed, too.

So I gave up. For the time being, my workflow will remain shitty, impractical, and inefficient. There is probably a way. It’s just not low-hanging fruit.

Remote vs Local Dev: HALP!

For various complex reasons, it is currently not practical to run our application on a laptop. (Primarily, but not limited to: there isn’t a “subset” of the large database, the web-heads don’t have a simple configuration, and much of the subset of CPAN we use doesn’t always compile easily or quickly on Windows or OS X.)

SO! Everyone gets one or more development VMs upon which to do their development.

Which is great, except our current development platform is CentOS 5. So that awesome MacVim config you carefully cultivated at your last gig? Doesn’t work.

Most of the devs do one of three things:

  1. Run a Samba server on their devs, and mount it locally
  2. Run SSHFS over FUSE, and mount it locally
  3. Use an editor that works entirely remotely

UGH. I’ve been doing that, but it’s not without problems: I have to sync my configs everywhere, making sure to write stuff that works on both Mac OS X and Linux; running Git – especially really great tools like Sourcetree – isn’t feasible locally, so you’re limited to doing everything on the command line; and you’re constantly losing connections when you switch networks (or even when your machine sleeps during a long lunch). I lose local indexing, and I lose the awesome toolchain I’ve built up over the past few years.

So I have an idea. I remembered Facebook’s Watchman, a file-watching thingee. Why not just keep a local Git repo/working tree, and have it “ship” changes as they happen to my dev?

After all, the devs are configured to watch for file changes, and if they get unruly we have a quick command to bounce them. The amount of front-end I do is limited – I can basically watch .pm and .pl files only, and there is no build step once delivered to the dev. 
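Concretely, I’m picturing something like this – a rough sketch, with invented paths and hostnames – built on Watchman’s trigger mechanism, which runs a command of your choosing with the changed file names appended as arguments:

```perl
#!/usr/bin/env perl
# ship-to-dev.pl -- rough sketch of the "ship changes as they happen" idea.
# One-time setup from the shell (paths and names are invented):
#   watchman watch ~/src/app
#   watchman -- trigger ~/src/app ship-to-dev '*.pm' '*.pl' -- ~/bin/ship-to-dev.pl
# Watchman then runs this script from the watched root whenever a matching
# file changes, passing the changed file names as arguments.
use strict;
use warnings;

my $dev = 'mydev:src/app/';   # assumed dev-VM host and remote path

for my $file (@ARGV) {
    # -R preserves the file's relative path under the destination
    system('rsync', '-azR', $file, $dev) == 0
        or warn "failed to ship $file (status $?)\n";
}
```

The dev’s existing change-watcher should then pick up the rsync’d files exactly as if I’d edited them over there.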

Am I insane? Does this sound remotely plausible? Should I just develop in a simple, unadorned xterm and stop using nice, modern tools? What does everyone else do in this situation?

(Addendum: in theory this plan will work transparently over VPN too, without the latency gripes you’d get from a networked file system. I guess?)

White

If my recent job-hunting experience taught me one thing, it’s that I’m never again doing whiteboard coding to try to get a job. Whiteboard coding is beyond stupid, and companies that practice it – that use it as a core indicator of ability – are at best misguided, at worst broadly unconcerned with software quality.

There is an exception, I suppose: whiteboarding a block diagram of something – “design me an N-tier web service for foobar blah etc” – is a potentially useful exercise. It is entirely relevant to things programmers, devops, and others do every day. A conversation about lots of useful things can result from that.

But coding? Writing a reasonable facsimile of working code off the top of your head, without a single reference, without the ability to drop into a REPL or gin up a test case or 3? No man pages, no Stack Overflow, no quick convo in the Slack channel, no hallway meeting? That’s not how code gets written in the real world – so why is it what you’re testing in a candidate?

Give a take-home assignment that tests their actual ability to write code. Bring them in for a code review; discuss software quality and design. If you absolutely MUST make them do a live test, let them use their laptop and preferred environment. (You probably DON’T need to do this, though.)

I feel reasonably comfortable saying that the first thing I will ask a prospective employer is, “Do you require whiteboard coding in the interview?” An affirmative answer is a clear sign not to move forward.

The whiteboard code test: not even once.

Come and Go

Take your mind back to around 2005-2006. The world was very simple: the “enterprise” used a Java stack or a Windows stack, and the rest of the world used … whatever. Mostly, the non-enterprise world was Perl, Python or PHP, with a smattering of weird stuff here and there.

Ruby hit it big with Rails around 2006, and you could not read a single tech blog or news aggregator without references to Rails or `gem install`. Everyone was cranking out code into the green-field Ruby ecosystem. The masses of tech punditry wrote terabytes of text about how Rails was going to infiltrate the enterprise soon, and how in-demand Ruby and especially Rails programmers (now called “ninjas” and “rockstars”) were.

It never happened, of course. Rails remains a popular and excellent choice for a wide array of applications on the web, but its largest users (e.g., Twitter) have since abandoned it. They did so for many reasons: some good, like performance, and some not-so-good, like …

In 2009 a weird fellow called Ryan Dahl released a weird little hack that mated the Chrome JS runtime with a low-level I/O API. The result was Node.js. Now you could run JavaScript – more or less the same JS you ran in the browser – **outside** the browser. You could write web servers in it! You could have your *front-end developers* work on the back-end apps, because they all understand JavaScript!

Well, everyone dropped their in-progress Rails app like a hot rock and picked up Node, especially after npm hit 1.0 in 2011. Here was a brand-new, green-field package-manager ecosystem for early adopters to stake a name and a claim.

You couldn’t read a single tech blog or news aggregator without references to ExpressJS or `npm install`. Everyone was cranking out code into the green-field Node.js ecosystem. The masses of tech punditry wrote terabytes of text about how Node.js was going to infiltrate the enterprise soon, and how in-demand JavaScript and especially “full stack” programmers (assuming the mantle of “ninjas” and “rockstars”) were.

It never happened, of course. Node.js remains a popular and excellent choice for a wide array of applications on the web, but its largest users (e.g., TJ Holowaychuk) have since abandoned it. They did so for many reasons: some good, like performance, and some not-so-good, like …

In 2009 a bunch of programmers at Google decided they wanted a better way to write the C/C++ programs that drive the core of Google’s stack. The result was Go …

**And on and on and on**. You can play this stupid game all the way back to ENIAC.

If the language your tech team is pushing is the one on top of the blogs and news aggregators, there’s a non-zero chance they’re making the decision in part, if not wholly, because of perceived pressure to be “current”.

Make no mistake: everyone is guilty of this. Moreover, each of these “foo is better than bar” events **does** move the bar forward in some way. It’s not that Go is better than Node, which is better than Ruby; rather, each gets a chance to address flaws inherent in the design of its predecessors.

That said, I often find what is now often referred to as “Hacker News-driven development” to be far more difficult and dangerous than just doing the hard work of fixing bugs. “But Foo has far greater support for concurrency!” is a great argument if your metrics show concurrency is a problem; it’s sloppy thinking if you have no users and just want a new toy.

Moreover, HN-driven development often results in big blind spots with the platforms you have. A great example is Python. A lot of people went from Python 2.x to Go, citing bugs, performance problems, or a need for greater concurrency, without ever testing out the asyncio stuff in Python, or checking to see if their pet bugs were fixed. Go was a shiny new toy with the intellectual thrill of learning a new thing; porting to Python 3 was just hard, possibly boring work. Who needs *that* shit?

Yes, I think Go is a bad language that ignores decades of programming language research to solve a specific problem Google has. Google’s massive rep as a tech innovator makes everyone think it’s good: after all, Google uses it!

But the point of this rant is not to criticize Go; it’s to illustrate that “Go is going to solve all our problems!” is a bunch of bullshit, because the most likely scenario is that in 2-3 years the newest, hippest platform (an out-of-left-field Perl resurgence? Elm? INTERCAL?) will suddenly become the solution to all our problems.

Theory: tech doesn’t matter

Unpopular opinion: for the bulk of projects/applications, lower-level platform decisions don’t matter. 

Picking Go over Python won’t mean anything if your team is happier with Python, for example. And I likewise posit that the “we switched to foo and saw huge gains!” effect is in part just the nerd-boner many teams get when they 1) have new toys and 2) get to play with the crap they keep reading about on HN.

Some choices will always matter, but those are often obvious. You can’t use Redis as an RDBMS, and all the enthusiasm in the world won’t fix that. But there are usually very few of those decisions in a project.