If my recent job-hunting experience taught me one thing, it’s that I’m never again doing whiteboard coding to try to get a job. Whiteboard coding is beyond stupid, and companies that practice it – that use it as a core indicator of ability – are at best misguided, at worst broadly unconcerned with software quality.

There is an exception, I suppose: whiteboarding a block diagram of something – “design me an N-tier web service for foobar blah etc” – is a potentially useful exercise. It is entirely relevant to things programmers, devops, and others do every day. A conversation about lots of useful things can result from that.

But coding? Writing a reasonable facsimile of working code off the top of your head, without a single reference, without the ability to drop into a REPL or gin up a test case or three? No man pages, no Stack Overflow, no quick convo in the Slack channel, no hallway meeting? That’s not how code gets written in the real world. So why is that what you’re testing in a candidate?

Give a take-home assignment that tests their actual ability to write code. Bring them in for a code review and discuss software quality and design. If you absolutely MUST make them do a live test, let them use their laptop and preferred environment. (You probably DON’T need to do this, though.)

I feel reasonably comfortable saying that the first thing I will ask a prospective employer is, “Do you require whiteboard coding in the interview?” An affirmative answer is a clear sign not to move forward.

The whiteboard code test: not even once.

Come and Go

Take your mind back to around 2005-2006. The world was very simple: the “enterprise” used a Java stack or a Windows stack, and the rest of the world used … whatever. Mostly, the non-enterprise world was Perl, Python or PHP, with a smattering of weird stuff here and there.

Ruby hit it big with Rails around 2006, and you could not read a single tech blog or news aggregator without seeing references to Rails or `gem install`. Everyone was cranking out code into the green-field Ruby ecosystem. The masses of tech punditry wrote terabytes of text about how Rails was going to infiltrate the enterprise soon, and how in-demand Ruby and especially Rails programmers (now called “ninjas” and “rockstars”) were.

It never happened, of course. Rails remains a popular and excellent choice for a wide array of applications on the web, but its largest users (eg, Twitter) have since abandoned it. They did so for many reasons: some good, like performance, and some not-so-good, like …

In 2009 a weird fellow called Ryan Dahl released a weird little hack that mated the Chrome JS runtime with a low-level I/O API. The result was Node.js. Now you can run Javascript – more or less the same JS you ran in the browser – **outside** the browser. You can write web servers in it! You can have your *front end developers* work on the back-end apps, because they all understand Javascript!

Well, everyone dropped their in-progress Rails apps like a hot rock and picked up Node, especially after the 2011 release of npm. Here was a brand-new green-field package-manager ecosystem for early adopters to stake a name and a claim.

You couldn’t read a single tech blog or news aggregator without seeing references to ExpressJS or `npm install`. Everyone was cranking out code into the green-field Node.js ecosystem. The masses of tech punditry wrote terabytes of text about how Node.js was going to infiltrate the enterprise soon, and how in-demand Javascript and especially “full stack” programmers (adopting the mantle of “ninjas” and “rockstars”) were.

It never happened, of course. Node.js remains a popular and excellent choice for a wide array of applications on the web, but its largest users (eg, TJ Holowaychuk) have since abandoned it. They did so for many reasons: some good, like performance, and some not-so-good, like …

In 2009 a bunch of programmers at Google decided they wanted a better way to write the C/C++ programs that drive the core of Google’s stack. The result was Go …

**And on and on and on**. You can play this stupid game all the way back to ENIAC.

If the language your tech team is pushing is the one on top of the blogs and news aggregators, there’s a non-zero chance they are making the decision in part or wholly because of perceived pressure to be “current”.

Make no mistake: everyone is guilty of this. Moreover, each of these “foo is better than bar” events **does** move the bar forward in some way. It’s not that Go is better than Node which is better than Ruby, but each has a chance to address flaws inherent in the design of its predecessors.

That said, I often find what is now often referred to as “Hacker News-driven development” to be far more difficult and dangerous than just doing the hard work of fixing bugs. “But Foo has far greater support for concurrency!” is a great argument if your metrics show concurrency is a problem; it’s sloppy thinking if you have no users and just want a new toy.

Moreover, HN-driven development often results in big blind spots with the platforms you have. A great example is Python. A lot of people went from Python 2.x to Go, citing bugs, performance problems, or a need for greater concurrency, without ever testing out the asyncio stuff in Python, or checking to see if their pet bugs were fixed. Go was a shiny new toy with the intellectual thrill of learning a new thing; porting to Python 3 was just hard, possibly boring work. Who needs *that* shit?
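For what it’s worth, the concurrency story people jumped to Go for has been in the standard library since Python 3.4. Here’s a minimal sketch using stdlib asyncio only (the `fetch` task is a made-up stand-in for real I/O; `asyncio.run` requires 3.7+):

```python
import asyncio

async def fetch(name: str, delay: float) -> str:
    # Simulate a slow I/O-bound call (a network request, say).
    await asyncio.sleep(delay)
    return f"{name} done"

async def main() -> list:
    # Run three "requests" concurrently; total wall time is roughly
    # max(delay), not the sum -- the same win people cite for Go.
    return await asyncio.gather(
        fetch("a", 0.05), fetch("b", 0.05), fetch("c", 0.05)
    )

results = asyncio.run(main())
print(results)
```

`gather` preserves argument order, so the results come back deterministically even though the tasks overlap in time.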

Yes, I think Go is a bad language that ignores decades of programming language research to solve a specific problem Google has. Google’s massive rep as tech innovator makes everyone think it’s good: after all, Google uses it!

But the point of this rant is not to criticize Go; it’s to illustrate that “Go is going to solve all our problems!” is a bunch of bullshit, because the most likely scenario is that in 2-3 years the newest, hippest platform (an out-of-left-field Perl resurgence? Elm? INTERCAL?) will suddenly become the solution to all our problems.

Theory: tech doesn’t matter

Unpopular opinion: for the bulk of projects/applications, lower-level platform decisions don’t matter. 

Picking Go over Python won’t mean anything if your team is happier with Python, for example. And I likewise posit that “we switched to foo and saw huge gains!” is in part just the nerd-boner many teams get when they 1) have new toys and 2) get to play with the crap they keep reading about on HN.

Some choices will always matter, but those are often obvious. You can’t use Redis as an RDBMS, and all the enthusiasm in the world won’t fix that. But there are usually very few of those decisions in a project.

I don’t like Go

As long as I’m complaining …

I really don’t like Go. I mean, really, profoundly, strongly, whatever. I think it’s very bad and most of its proponents are so very, very wrong.

Here’s why.

I don’t write C programs. Go is often called things like “C for the 21st century”. Great! Cool! That’s awesome!

But I don’t write C programs. Why would I want a better C when I don’t need C to begin with?

To be sure: if I had to write a C program, I’d probably go straightaway to Go. And why not? It’s a better C, so picking it is obvious. I’d only pick C if the C program I needed to write is, like, a language interpreter or some embedded thing or something on a platform Go doesn’t exist on.

But I don’t have to write C programs. I get by just fine writing Python programs, and Javascript programs, and shell scripts, and whatever else random nonsense shows up in my TODO.

But fast, concurrent network servers! Go people are quick to point out. Yeah, yeah, yeah: I have a hand-motion for that. Your stupid app doesn’t have any users, and concurrency is not a simple magic incantation. You probably don’t even know where the bottlenecks in your app are, so your breathless yammering about concurrency is pointless. Blab about it at the next meetup; I don’t have time.
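Finding out where your app actually spends its time is a stdlib one-liner, not a platform migration. A sketch with `cProfile` (the toy “handler” and its hot spot are obviously made up):

```python
import cProfile
import io
import pstats

def slow_part() -> int:
    # Deliberately dumb CPU work standing in for the real hot spot.
    return sum(i * i for i in range(100_000))

def fast_part() -> int:
    return sum(range(100))

def handler() -> int:
    return slow_part() + fast_part()

profiler = cProfile.Profile()
profiler.enable()
result = handler()
profiler.disable()

# Dump the top entries by cumulative time; slow_part should dominate.
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
report = buf.getvalue()
print(report)
```

If the report doesn’t show your bottleneck is concurrency, “but Go has goroutines!” is not an argument.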

But strong typing!! Yeah, well. My life as a web programmer consists of ints and strings; I guess I am just lucky. Sometimes I group them into lists or maps, but really, ints and strings, dude. I don’t need typing enforced at compile-time for the date field of a web form.
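That date field, concretely: runtime validation covers it without a compiler in sight. A sketch (`parse_date_field` is a made-up helper; `date.fromisoformat` needs Python 3.7+):

```python
from datetime import date

def parse_date_field(raw: str) -> date:
    # Runtime validation of a web-form date field: bad input raises
    # ValueError, good input becomes a real date object. No
    # compile-time type system involved.
    return date.fromisoformat(raw)

d = parse_date_field("2016-02-14")
print(d.year, d.month, d.day)
```

A `try`/`except ValueError` around the call is the whole error-handling story for the form.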

Now, a quick point: I write a lot of intranet-type apps. They are for specific users and not exposed on the filthy internet. That means I rarely have to worry about fuzzing my stuff, malicious input, I can demand a specific browser if I want to employ some feature, etc. I have it good. If I couldn’t get away with all that, I’d be more in favor of strong typing.

And as long as we’re complaining about strong typing, point the second: I’ve been doing this long enough that I can spot a fad from a million miles away. 5 years ago we didn’t need strong typing, we just needed better tests and CI. Before that, we did (Java). Before that, we didn’t (Perl). And so on, and so on.

Go and the strong-typing revival are just another fad. Strong typing, like TDD and Agile and a zillion other programming methodologies, is great. Wonderful. I’ve seen half a dozen come and go in my day (remember XP? We didn’t need tests OR strong typing back then!).

Telling me the only way to write correct programs is X gets a big BFD from me. I suck, you suck, we all suck.

I could go on, but dinner’s almost ready, so what better time to end this little rant. I don’t like Go. The end.

Why I don’t like Emacs

For reference, see http://elephly.net/posts/2016-02-14-ilovefs-emacs.html

To hit a few points in no particular order:

All those modes and you allow 2. It is completely normal for me to be editing an HTML5 document that is in reality a Jinja2 template that might contain snippets of JavaScript, including new-fangled JS tools like Angular 2 and React.

Despite following dozens of tutorials about hacking together a dozen different major and minor modes and fiddling with hundreds of lines of elisp, I have never once gotten this to work correctly. God forbid one day I went back to something like PHP/Hacklang and that example above got yet another language thrown in.

This isn’t contrived, either. This works for me in Vim, this works for me in the Intellij family, this works for me in Komodo, this works for me in Atom, this works for me in Sublime. It works more-or-less out of the box in everything but Emacs.

I like my cigar but I take it out of my mouth every now and then. Just because you can write a client for some app in elisp doesn’t mean Emacs is the best at it.

So for example, I wouldn’t give up Tweetbot, which is a far better Twitter client than anything in Emacs, just so I could … I dunno, not command-tab over to Tweetbot? I’m unclear how that’s an onerous burden.

Similarly my email app is just … plain ol’ Gmail. It gets features ahead of IMAP clients, it’s pretty stable and predictable, and work pays for it (we have Apps). It’s one less thing to break. That gives me more time to do my job.

Pretty much everyone will admit `man` is horrible. I tend to read man pages in the browser.

And so on, and so on. I use the best app available for the task. I’m super glad Org Mode works for you. I tend to think it’s not the best tool around.


Do one thing well. I think Emacs – as a creation of GNU, steeped in Lisp Machine tradition – is in fact anti-Unix. It does lots and lots and lots of things. It’s in no way a Unix tool; it’s a Lisp Machine tool someone ported to Unix after the Lisp Machine renaissance faded.

Instead of doing one thing well, it tries to model itself after the MIT AI lab model of thinking, that begat the Lisp machines: be completely programmable.

That’s not a sin per se but to uphold it as somehow Unixy is. vi – and not Vim, which in this regard bears a certain amount of sin as well – is a “proper” Unix text editor. It barely does anything because you can just pipe your file through 86 different command line tools to get the output you want – compiled software, a web page, whatever.

That’s Unix.


I know when I’m beat. Back to the evil corporate arms of WordPress.


Your stupid TODO list app is not “world changing”. Stop saying that.

One of the classics in the stream of unending stupid that comprises the standard-issue Silicon Valley groupthink is that a product or service is “world changing”. No it is not. By the time your revolutionary service-as-a-platform-as-a-service micro-macro-blog mashup has started looking for Series A, everything world-changing has already happened. I assume that people say this dumb shit because, as they are standing on the shoulders of giants, they’re suffering from hypoxia.

Consider Twitter. Twitter is held up as yet another example in Silicon Valley lore of “small thing changes world”. The examples given are how the term “hashtag” has deeply permeated pop culture, or Twitter’s role in actual revolutions (the Iran student uprising, the Arab Spring, Libya). “Obviously”, the thinking goes, “if it gets used in something that literally changes the world, then it is by definition world-changing!”

Nope. You’re confusing things.

To continue to pick on Twitter, it is the end result of a very long, very un-sexy, relentless march of progress. To make Twitter happen, we had to have a lot of things in place: a supply chain that puts hand-held radios in everyone’s hands. They have to be built in places where governments force wages down and keep regulation low. Raw materials have to be moved across the planet in cargo containers and by air. These assets have to be obsessively tracked and managed; even a small loss of raw materials or components can disrupt production. Once produced, we need to get them into the hands of users with the capital to buy them. We have to maintain the radio and telephony networks. We need bandwidth, peering, data centers. Data centers mean innovation in cooling and other physical-plant stuff. And on, and on.

Without all that in place and working effectively, Twitter wouldn’t work. Twitter is a side effect of the rest of “modern information society” or whatever you want to call it. Twitter – cleverly, no argument there – exploits a tiny niche at the very tail end of a massive amount of work.

No one ever really wants to do a startup to improve supply-chain performance. Why would they? It takes years to understand what a supply chain even is. It takes years of hard work to figure out what’s wrong, and testing your ideas can take a long time. It’s very hard work. Better to do a TODO list.

Better and easier to make predatory license agreements, or talk about your “incredible journey” after you’ve been acquihired by Google.