Smart Disorganized (incoming)

March 02, 2015

Fog Creek – Interview with Salvatore Sanfilippo


In this series, we chat with developers about their passion for programming: how they got into it, what they like to work on, and how.

Today’s guest is Salvatore Sanfilippo, an Open-Source Software Developer at Pivotal, better known as the creator of Redis, the popular data structure server. He writes regularly about development on his blog.

Location: Catania, Sicily, Italy
Current Role: Developer at Pivotal

How did you get into software development?

My father was a computer enthusiast when I was a small child, so I started to write simple BASIC programs imitating him. My school had programming courses and it was great to experiment a bit with LOGO, but I definitely learned at home. The first computer I used to write small programs was a TI-99/4A; however, I wrote less-trivial programs only once I got a ZX Spectrum later. I got a bit better when I had my first MS-DOS computer, an Olivetti PC1, equipped with an 8086 clone called the NEC V30.

I remember that in my little town there were free courses about programming in the public library. One day I went there to follow a lesson and learned about binary search. It was a great moment for me – the first non-trivial algorithm I learned.
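The algorithm he mentions takes only a few lines; a minimal illustrative sketch in Python (purely for illustration, not code from Redis):

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2          # probe the middle element
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1              # discard the lower half
        else:
            hi = mid - 1              # discard the upper half
    return -1

print(binary_search([2, 3, 5, 7, 11, 13], 7))   # -> 3
```

Each probe halves the search space, which is what makes it non-trivial compared to a linear scan.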


Tell us a little about your current role

Pivotal is very kindly sponsoring me to write open-source software, specifically to work on Redis. So my typical day involves getting up at 7:15 am, taking my son and/or daughter to school, and then sitting in front of a computer from 8:00 am.

What I do then depends on what is urgent: handling the community, replying to messages, checking issues or pull requests on Github, fixing serious problems, or from time to time, designing and implementing new features.

I don’t have a fixed schedule. Since there aren’t many developers contributing to Redis, I’ve found that I can be most effective for the user base if I’m flexible. Moreover, when I feel tired or annoyed, I use the trick of working on something about Redis that is exciting for me at that moment. That way I likely end the day with something good, even if it was a “bad day”, like when you slept too little for some reason and need a bit more motivation.

Currently I’m working on Redis 3.0.0, which is in the release-candidate stage right now. This release brings clustering to Redis, and it was definitely challenging. It’s non-trivial to find a set of tradeoffs that makes Redis Cluster look like single-node Redis, from the point of view of the use cases you can model with it. In the end, I tried to overcome the limitations by sacrificing certain other aspects of the system, in favor of what looked like the fundamental design priorities.

I’m also working on a new open-source project – a distributed message queue. It is the first program I’ve written directly as a distributed system. I’m writing it because I see that many people use Redis as a message broker for delayed tasks, even though it was not designed for this specifically and there are other systems focusing on this task. So I asked myself: if Redis is not specifically designed for this, what makes people happy to use it compared to other systems? Maybe I can extract what is good about Redis and create a new system that is Redis-like but specifically designed for the task. So Disque was born (code not yet released).
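The delayed-task pattern he describes is, at its core, a priority queue keyed by delivery time. As an illustration only (this is not Disque’s design, which was unreleased at the time), a toy in-memory version:

```python
import heapq
import time

class DelayQueue:
    """Toy delayed-task queue: a task becomes visible once its due time passes."""

    def __init__(self):
        self._heap = []   # (due_time, task) pairs, smallest due_time first

    def enqueue(self, task, delay_seconds, now=None):
        now = time.time() if now is None else now
        heapq.heappush(self._heap, (now + delay_seconds, task))

    def dequeue_due(self, now=None):
        """Pop and return every task whose due time has arrived."""
        now = time.time() if now is None else now
        due = []
        while self._heap and self._heap[0][0] <= now:
            due.append(heapq.heappop(self._heap)[1])
        return due
```

With Redis itself, the same idea is commonly expressed with a sorted set scored by the delivery timestamp, polled for members whose score is below the current time.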

When are you at your happiest whilst coding?

My preferred moment is when I write a lot of code, maybe without even trying to compile at all for days. Adding new abstractions, structures, functions, with an idea about how to combine all this to create something. Initially everything is a bit of a blur and you don’t really know if it’s going to work. But, eventually every missing piece starts to be filled with code, and the system makes sense, compiles, and suddenly you see it working on your computer. Something magical happened, something new was created where there was nothing a few days ago. I believe this is an aspect to love about programming.


What is your dev environment?

I use a MacBook Air 11″ and a desktop Linux system that I use mostly via SSH. I have an external monitor and keyboard that I attach to the MBA when I’m at home. I work at home in a room that is half office and half gym. I code sitting, doing small “sprints” of 45 minutes to a maximum of 1.5 hours. Then I get up and either grab a coffee, exchange a few words with my wife, go to collect one of my children, or the like.

I don’t like music when I code, as I love the kind of music that needs pretty active listening, like Squarepusher, Venetian Snares, classical music, and so forth. I don’t stop too late, maybe 6 pm, but if I did not reach 8 hours of work, then I work again when my family goes to bed.

I use OS X on the MBA, with iTerm2 and the usual command line tools: Clang and Vim. On the Linux system, I use Ubuntu with GCC, Vim, and Valgrind. Vim and Valgrind are two fundamental tools I use every day.

From the point of view of desktop software, I use Google Chrome, Firefox, and Evernote to take a number of notes while I write code – kind of like a TODO list to avoid losing focus if I think about something the code needs.

I use Twitter a lot for work with the only Twitter client I actually like, and I wish it was more actively developed: YoruFukurou. For IRC, I use Limechat. For messaging, Telegram.

What are your favorite books about development?

A long time ago I enjoyed Structure and Interpretation of Computer Programs (SICP), but I never read it again. Books on algorithms were probably the ones that turned me from a spaghetti coder into something a bit better. I’m not a fan of Best Practices, so I don’t much like books like Design Patterns or The Pragmatic Programmer. Recently I re-read The Mythical Man-Month (TMMM) and The Design of Design; however, I find the latter less useful. I would love to read more actually detailed accounts of software project experiences, and TMMM is more like that.


What technologies are you currently trying out?

I want an excuse to try Rust and Swift in some non-trivial software project. I don’t see Go as something that is going to serve as a “better C”, so I’m waiting for something new.

I’m not a big fan of new things in IT when they have the smell of being mostly fashion-driven, without actually bringing something new to the table. So I tend to not use the new, cool stuff unless I see some obvious value.

When not coding, what do you like to do?

Spending time with my family, being a father, is something I do and enjoy. I also try to go out with my wife without the children when possible, to have some more relaxed fun together. I enjoy powerlifting and I also run from time to time. I like theatre – I used to watch many plays, and recently I got interested in it again and I’m attending an acting workshop.

What advice would you give to a younger version of yourself starting out in development?

Don’t fragment your forces into too many pieces. Focus on the intersection of things that you find valuable and that many people find valuable, since niches are a bit of a trap.


Thanks to Salvatore for taking the time to speak with us. Have someone you’d like to be a guest? Let us know @FogCreek.


Previous Interviews

Jon Skeet
Bob Nystrom
Brian Bondy
Jared Parsons

by Gareth Wilson at March 02, 2015 11:44 AM

John Udell

Tag teams: The collaborative power of social tagging

My first experiences with social tagging, in services like Flickr and Delicious, profoundly changed how I think about managing shared online resources.

The freedom of an open-ended folksonomy, versus a controlled taxonomy, was exhilarating. But folksonomies can be, and often are, a complete mess. What really got my attention was the power that flows from disciplined use of tags, governed by thoughtful conventions. Given a set of resources -- photos in the case of Flickr, URLs in the case of Delicious -- you could envision a set of queries that you wanted to perform and use tags to enable those queries.
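The tag-enabled queries he describes reduce to set intersection: a resource matches when it carries every requested tag. A small sketch with invented example data:

```python
# Map each resource (a URL, in the Delicious case) to its set of tags.
bookmarks = {
    "https://example.com/redis":  {"database", "nosql", "opensource"},
    "https://example.com/sqlite": {"database", "embedded", "opensource"},
    "https://example.com/flickr": {"photos", "tagging"},
}

def with_tags(index, *tags):
    """Return resources carrying every one of the given tags."""
    wanted = set(tags)
    return sorted(url for url, ts in index.items() if wanted <= ts)

print(with_tags(bookmarks, "database", "opensource"))
```

Disciplined tagging conventions matter precisely because queries like this only work when everyone spells the tags the same way.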


by Jon Udell at March 02, 2015 11:00 AM

Mark Bernstein



but it still must be said that his inflammatory and erroneous description of the situation is what caused all this nonsense in the first place. – Jimbo Wales, Wikipedia Chairman Emeritus

Select Commentary: The Guardian · Gawker · PandoDaily · The Mary Sue · Wil Wheaton · Der Standard · de Volkskrant · Dr. Clare Hooper · P. Z. Myers · FayerWayer · Think Progress · Stacey Mason · The Verge · Heise · De Verdieping Trouw · Prof. David Millard · Wired.de · KIRO-FM Seattle (starts at 10:00) ❧ TechNewsWorld · Washington Post · Prismatic · SocialText · Neues Deutschland · Vice · Europa (Madrid) · El Fichero · Bust · Daily Orange · Overland · ArCompany

Good cause: App Camp For Girls. (Donations to Wikipedia are counter-productive. Give instead to a charity that assists women in computing, or victims of online harassment, not to punish Wikipedia but to repair the damage. App Camp For Girls has already raised $1200 from former Wikipedia donors; do tell them why you’re giving.)

March 02, 2015 04:58 AM

March 01, 2015

Mark Bernstein

Garment of Shadows

Mary Russell finds herself in Morocco. She has misplaced her memory; she can’t quite remember who she is or how she finds herself in Marrakech. We know (though she does not) that she has also misplaced her husband, Mr. Sherlock Holmes. Intrigue and action in 1920s North Africa ensues, with a lovely portrait of Hubert Lyautey, the French resident-general, and of his expert and capable majordomo, Youssef.

March 01, 2015 06:24 PM

Squiggle Birds

Dave Gray’s Squiggle Birds, a five-minute exercise to convince people that, yes, they can draw well enough.

March 01, 2015 05:49 PM

February 28, 2015

Erlang Factory

Erlang User Conference: Call for Talks open until 10 March

The conference will take place on 9-10 June. It will be followed by one day of tutorials on 11 June and 3 days of expert training on 11-13 June. 

We are currently accepting talk submissions for the Erlang User Conference 2015. If you have an interesting project you are working on or would like to share your knowledge, please submit your talk here. The deadline is 10th of March.

February 28, 2015 02:17 AM

ONE DAY ONLY: 20% off all training courses at Erlang Factory SF Bay Area Starts 6 am - Ends 11.59 pm PST on 23 Jan

Erlang Express - Robert Virding 3-5 March
Erlang for Development Support - Tee Teoh 3-5 March
Cowboy Express - Loïc Hoguin 3-5 March
OTP Express - Robert Virding 10-12 March
Brewing Elixir - Tee Teoh 10-12 March
Riak- Nathan Aschbacher 10-12 March
Kazoo - James Aimonetti 10-12 March

February 28, 2015 02:17 AM

Erlang Factory SF Bay Area : 40 Very Early Bird tickets released on 16 December

On Monday 16 December 2013, after 12:00 PST we will release a batch of 40 tickets at the Very Early Bird rate of $890. 

The new and improved website for the Erlang Factory San Francisco Bay Area is coming out on Monday, and it will feature a first batch of over 25 speakers.

February 28, 2015 02:17 AM

February 27, 2015

Dave Winer

Problem with Scripting News in Firefox?

I was working with Doc Searls this afternoon, and saw how Scripting News looks in the version of Firefox he has running on his laptop. It looks awful. One tab is visible all scrunched up in a corner of the window.

I have the latest Firefox on my Mac and it looks fine. All the tabs are where they should be. If you're seeing the problem on your system and have any idea what the problem might be, please leave a comment below. It really bothers me that what Doc is seeing is so awful.

February 27, 2015 10:23 PM

Excuse the sales pitch

First, thank you for reading this blog.

Now I want to try to sell you on an idea.

The idea: Supporting the open web.

Everywhere you look things are getting siloized, but, call me an idiot, I keep making software that gives users the freedom to choose. If my software isn't the best for you, now or at any time in the future, you can switch to whatever you like.

I make it because I dream of a world where individuals have power over their own lives, and can help inform each other, and not be owned by companies who just want to sell them stuff they don't want or need.

I work hard. And I stay focused on what I do. But my ideas don't really get heard that much. And that's a shame, because if lots of people used my software, that would encourage more people to make software like it, and eventually we'd be back where we were not that long ago, with you in charge of your own web presence.

Anyway, I'm almost finished. I spent two years porting everything I care about to run in Node.js and in the browser. I don't plan to dig any more big holes. This is basically it.

What can you do? Well honestly, if you see something you think is empowering or useful here on my blog, please tell people about it. The tech press will not be covering it, so you can't find out about it that way. It would be a shame to have put all this effort into creating great exciting open software, only to have very few people ever hear about it.

Thanks again for reading my blog, and I hope to be making software for you and your friends for a long time to come.

PS: The place to watch, for the new stuff, the stuff that has the most potential of rewriting the rules of the web, is on my new liveblog. That's where I'm telling the story I think is so important.

February 27, 2015 04:51 PM

Mark Bernstein

Pardon The Dust

I’ve been busy as a beaver, modernizing the core classes that are the foundation of Tinderbox export templates. These have decent tests, but this weblog (especially its main page) is a key acceptance test; if you notice things that are not right, Email me. Thanks!

February 27, 2015 03:14 PM

Thumb on the Scale, Fingers in the Pie

Apparently, the Wikimedia Foundation has been caught red-handed, employing staff members to write puff pieces promoting its own philanthropy. Imagine that.

Meanwhile, the GamerGate people have petitioned to have me banished from Wikipedia because ArbCom forgot me, or because I keep looking at them funny, or because I’m really mean. Last week it was verse; this week, declamation.

Before this cunningly-contrived midnight trial in absentia concludes, perhaps I might review the choice that is offered here. On the one hand, you have an editor whose poor vocation as a knowledge seeker should be plain from his eight years of work here and his publications elsewhere. On the other, you have a barbarian horde of nameless trolls, openly colluding for months to exploit Wikipedia as part of a public relations campaign to threaten, shame, and punish women in computing.

“Next time she shows up at a conference we … give her a crippling injury that’s never going to fully heal … a good solid injury to the knees. I’d say a brain damage, but we don’t want to make it so she ends up too retarded to fear us.” -- Simon Parkin, “Zoe Quinn’s Depression Quest”, The New Yorker, 9 September 2014.

Wikipedia's official response has been ineffectual and infamous.

“I (and Wikipedia) neither support nor oppose [software developer Zoe] Quinn. Wikipedia is not a battleground.” – Jimmy Wales

Impartially to support or excuse a conspiracy notable only for threats of assault, rape, and murder, is to support those threats. Wikipedia can be a hobby or an entertainment, but for those against whom Wikipedia is weaponized it is neither. They cannot drop the stick and walk away; they can only submit to its repeated blows and hope that you will eventually raise your hand to restrain their assailants.

That’s the choice you have. But it’s not your choice alone: there are higher courts than yours, and in one tribunal you have already been taken to AN/I and sternly censured. With thought for Wikipedia's defenders and care for the damage Wikipedia has done, you can resolve to amend your behavior and return to productive membership in the community of ideas.

This is, of course, entirely consistent with -- and indeed mandated by -- Wikipedia's core principles. We are building an encyclopedia; we do not, and should not, employ that encyclopedia to attack blameless individuals, to intimidate people considering a potential career, or to improve the image of a so-called “movement.” Wikipedia is an encyclopedia, not a public-relations platform for the use of shadowy and shady causes. We are neutral, but that neutrality never extends to promoting falsehoods or excusing -- much less abetting -- criminal mischief. We follow sources; we never seek (as so many have been seeking on these pages) to "rebalance" them in light of an imaginary and universal conspiracy among the media. We seek consensus, which is incompatible with repeating the same failed proposals incessantly for months on end in the vain hope that something may have changed from the previous week, and with the fervent quest to sanction the five horsemen -- and me, and anyone else who stands in their way -- for defending the Wiki.

The problem is not insoluble or even difficult, but it does require resolve, hard work, and thorough sweeping. It’s time for you to choose.

February 27, 2015 03:11 PM

Fog Creek

A Developer’s Guide to Growth Hacking – Tech Talk


Given the media hype that surrounds the term ‘Growth Hacking’, you can be forgiven for dismissing the whole thing as another marketing buzzword. But what can get lost in the hubbub are some useful, development-inspired, working practices that can help a team focus on maximizing growth.

In this Tech Talk, Rob Sobers, Director of Inbound Marketing at Varonis, tells you all you need to know about Growth Hacking. Rob explains what Growth Hacking is and describes the processes key for it to be effective – from setting goals, to working through an experimentation cycle and how it works in practice.

Rob was formerly a Support Engineer here at Fog Creek, and is the creator of his own product, Munchkin Report. He writes on his blog about bootstrapping and startup marketing.


About Fog Creek Tech Talks

At Fog Creek, we have weekly Tech Talks from our own staff and invited guests. These are short, informal presentations on something of interest to those involved in software development. We try to share these with you whenever we can.


Content and Timings

  • What is Growth Hacking (0:00)
  • People (2:34)
  • Process (3:22)
  • Setting Goals (5:25)
  • Experimentation Cycle (6:12)
  • How It Works In Practice (12:03)



What is Growth Hacking

I was a developer, started out my career as a developer, kind of moved into the design space, and then did customer support here, and now I’m doing marketing. I’ve been doing marketing for the past, I don’t know, two and a half, almost three years. This phrase, ‘growth hacker’, kind of cropped up. I kind of let the phrase pass me by. I just didn’t discuss it. I didn’t call myself a growth hacker. I stayed completely out of it, mainly because of stuff like this.

It’s just overwhelming. Like Google ‘growth hacking’, you’ll want to throw up. What it really comes down to is that growth hacking is not at all about tactics. It’s not about tricks. It’s not about fooling your customers into buying your software or finding some secret lever to pull that’s hidden that’s going to unlock massive growth for your company. It’s really about science. It’s about the process. It’s about discipline. It’s about experimentation. Tactics are inputs to a greater system.

If someone came up to you as a StarCraft player and said, “What tactic should I use?” you would have a million questions: “Well, what race do you play? Who are you playing against? Who’s your opponent? What does he like to do? What race is he playing? Is it two vs. two or three vs. three?” There are so many different questions. So if someone comes up to me and says, “What tactics? What marketing channels should I use for my business?” you can’t answer it. The answer is not in the tactics.

So this is how Sean Ellis defines growth hacking. He says, “Growth hacking is experiment-driven marketing.” You walk into most marketing departments, and they’ve done a budget, and they sit in a room, and they decide how to divvy up that money across different channels. “Okay, we’ll buy some display ads. We’ll do some Google AdWords. We’ll invest in analyst relations,” but they’re doing it blind. Year after year, they’re not looking at the results, not looking at the data, and they’re not running experiments. So this is really the difference.

I took it one step further. I said growth hacking is experiment-driven marketing executed by people who don’t need permission or help to get things done, because I think growth hacking is a lot about the process. And it’s about culture, and embracing the idea of running a whole bunch of marketing experiments week over week. But if you have a team that is only idea-driven and tactic-driven, and they have to farm out all of the production to multiple other stakeholders in the business, like teams of devs or designers, then you’re not able to iterate. So to simplify it I just said, “Growth hacking equals people, people who have the requisite skills to get things done from start to finish, and process.”


So let’s talk about people. You don’t just wake up in the morning and say, “Let’s do some marketing.” You have to know what your goals are and then break them down into little pieces, and then attack based on that. So this is a system that was devised by Brian Balfour at HubSpot. I call it the Balfour method. A good way to measure a person, when you’re hiring them to be a growth hacker and run growth experiments, is to show them this chart and ask, “How far around the wheel can you get before you need to put something on somebody else’s to-do list?” Now granted, you’re not always going to be able to hire people who can do everything. I’ve seen it work where people can do bits and pieces, but it sure is nice to have people who can do design and development on a growth team.


So before you begin implementing a process at your company, what you want to do is establish a method for testing. And then you need analytics and reporting. I’ve seen a lot of companies really miss the boat with their analytics. They’ve got it too fragmented across multiple systems. The analytics for their website is too far detached from the analytics within their products. Because you don’t want to stop at the front-facing marketing site. It’s great to run A/B tests and experiment on your home page, and try to get more people to click through to your product page and your sign-up page, but there are also these deep product levers that you can experiment with: your onboarding process, your activation, and your referral flow.

So what you’re really looking for, and the reason why you establish a system and a method, is number one to establish a rhythm. At my company we were in a funk where we were just running an A/B test every now and then when we had spare time. It’s really one of the most high-value things we could be doing, yet we were neglecting to do it. We were working on other projects. The biggest thing we did was implement this process, which forces us to meet every Monday morning to discuss and lay out our experiments, really define what our goals are, and establish that rhythm.

Number two is learning, and that basically means all the results of your experiments should be cataloged so that you can feed them back into the loop. So if you learned a certain thing about putting, say, a customer testimonial on a sign-up page, and it increases your conversion by 2%, maybe you take a testimonial and put it somewhere else where it might have the same sort of impact. So you take those learnings and you reincorporate them, or you double down.

Autonomy, that goes back to teams. You really want your growth team to be able to autonomously make changes and run their experiments without a lot of overhead. And then accountability, you’re not going to succeed the majority of the time. In fact you’re going to fail most of the time with these experiments. But the important thing is that you keep learning and you’re looking at your batting average and you’re improving things.

Setting Goals

So Brian’s system has a macro level and a micro level. You set three levels of goals: one that you’re most likely to hit, so 90% of the time you’ll hit it; another goal which you’ll hit probably 50% of the time; and then a real reach goal which you’ll hit about 10% of the time. An example would be: let’s improve our activation rate by X%. This is our stated goal. Now for 30 to 60 days let’s go heads-down and run experiments until the 60 days are up, and we’ll look and see if we hit our OKRs, with checkpoints along the way. So now you zoom in and you experiment. This is the week-by-week basis. So every week you’re going through this cycle.

Experimentation Cycle

So there are really four key documents as part of this experimentation cycle. The first is the backlog. That’s where you catalog all your different ideas. Then you have a pipeline, which tells you what you’re going to run next, as well as what you’ve run in the past, so that somebody new on the team can come in, take a look, and see what you’ve done to get where you are today. Then there is your experiment doc, which serves as a sort of specification.

So when you’re getting ready to do a big test, like let’s say you’re going to re-engineer your referral flow, you’re going to outline all the different variations. You’re going to estimate your probability of success, and how you’re going to move that metric. It’s a lot like software development: you’re estimating how long something’s going to take, and you’re also estimating the impact. And then there are your playbooks, good for people to refer to.

So with Trello it actually works out really well. The brainstorm column here, the list here, is basically where anybody on the team can just dump links to different ideas, or write up a card saying, “Oh, we should try this.” It’s totally off the cuff: just clear out whatever ideas are in your head and dump them there. Then you can discuss them during your meeting where you decide which experiments are coming up this week.

The idea is that you actually want to go into the backlog. The pipeline holds the ones that I’m actually going to do soon: I’ll make a card and put it in the pipeline. Then, when I’m ready to design the experiment, I move it into the design phase and create the experiment doc. And then I set my hypothesis: “I’m going to do this. I think it’s going to have this impact. Here are the different pages on the site I’m going to change, or things within the product I’m going to change.” And then later in the doc, it has all of the learnings and the results.

So one key tip that Brian talks about is when you’re trying to improve a certain metric, rather than saying, “Okay, how can we improve conversion rate?” you think about the different steps in the process. It just sort of helps you break the problem into multiple chunks, and then you start thinking a little bit more appropriately. And this is actually where the tactics come into play when you’re brainstorming, because this is where you’d want to look to others for inspiration. If you’re interested in improving your referral flow, maybe use a couple of different products, or think about the products you use where you really thought the referral flow worked well, and then you use that as inspiration to improve yours. You don’t take it as prescription. You don’t try to apply it one-to-one, but you think about how it worked with their audience and then you try to transfer it over to how it would work with yours.

Prioritization: there are really three factors here. You want to look at the potential impact. You don’t necessarily know it, but you want to gauge the potential impact should you succeed with this experiment. Then the probability of success, and this can be based on previous experiments that were very close to this one. So like I mentioned earlier, the customer testimonial: you had a certain level of success with that in one part of your product or website, and you’re going to just reapply it elsewhere. You can probably set the probability to high, because you’ve seen it in action with your company and your product before.

But if you’re venturing into a new space, let’s say Facebook ads, and you’ve never run them for your product before: you don’t know what parameters to target, you don’t know anything about how the system works, the dayparting and all that, then you probably want to set the probability to low. And then obviously the resources: do I need a marketer? Do I need a designer, a developer, and how many hours of their time?
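The three factors he lists (impact, probability, resources) are often folded into a single score for ranking the backlog. A hedged sketch of one way to do it, with made-up weights and experiment names:

```python
def priority_score(impact, probability, effort_hours):
    """Rank experiments by expected impact per hour of effort.

    impact: rough 1-10 estimate of the upside if the experiment succeeds.
    probability: 0-1 chance of success, based on similar past experiments.
    effort_hours: estimated marketer/designer/developer time required.
    """
    return (impact * probability) / effort_hours

# Hypothetical backlog: a repeat of a proven tactic scores high probability,
# a first venture into a new channel scores low.
backlog = [
    ("testimonial on signup page", priority_score(4, 0.8, 2)),
    ("first Facebook ads test",    priority_score(8, 0.2, 10)),
    ("rework referral flow",       priority_score(9, 0.5, 40)),
]
for name, score in sorted(backlog, key=lambda item: -item[1]):
    print(f"{score:5.2f}  {name}")
```

The exact formula is arbitrary; the point is simply to make the impact/probability/effort trade-off explicit instead of arguing about it ad hoc.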

So once you move something into the pipeline, I like to have my card look like this. I have my category, my label. So this is something with activation, trying to increase our activation rate. And then I say, “If successful, this variable will increase by this amount because of these assumptions.” Then you talk with your team about those assumptions, and try to explain why. So the experiment doc, as I mentioned before, is sort of like your spec. Rather than implement the real thing upfront, if you can get away with just putting up a landing page and worrying about the behind-the-scenes process later, do that. Like if you’re thinking about changing your pricing: maybe change the pricing on the pricing page, and not do all the accounting and billing-code modifications just yet.

Implement: there’s really not much to say about that. The second-to-last step is to analyze. You want to check yourself as far as that impact: did you hit your target? Was it more successful than you thought, or less successful? And then most importantly, why? So really understand why the experiment worked. Was it because you did something that specifically keyed in on one of the emotions your audience has? Then maybe you carry that through to later experiments.
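For the analyze step, the question “did we hit the target, or is this noise?” is commonly answered with a two-proportion z-test. A minimal, dependency-free sketch (a standard statistical check, not something prescribed by the Balfour method; the numbers are invented):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates.

    |z| > 1.96 is roughly a 95% signal that the difference is real.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)        # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# e.g. control: 200/4000 sign-ups; variant with testimonial: 260/4000
z = two_proportion_z(200, 4000, 260, 4000)
print(f"z = {z:.2f}")   # |z| > 1.96 suggests the lift is unlikely to be noise
```

Running the experiment until a pre-committed sample size, rather than peeking until the number looks good, is what keeps the batting average honest.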

And then systemize. Another good example of systemizing actually comes from HubSpot: the idea of an inbound marketing assessment. It’s actually their number one lead-gen channel: they offer, for any company that wants it, to sit down one-on-one and do a full assessment of their website, their marketing program, et cetera. When they were doing these one-on-one discussions, those became their best leads, the most likely to convert.

So they made something called Website Grader, which you can find online, and it’s sort of like the top of the funnel for that marketing assessment. Someone’s like, “Ah, I don’t know if my website’s good at SEO, am I getting a lot of links?”, something like that. So they’ll plug it into the grader. It’ll go through and give them a grade. They’ll get a nice report, and then a sales rep in the territory that person lives in will now have a perfect lead-in to an inbound marketing assessment, which they know is a high-converting activity should someone actually get on the phone with their consultant. So it’s a good example of productising.

How It Works In Practice

So this is just sort of how the system works. So Monday morning we have our only meeting. It’s about an hour and a half, and we go through what we learned last week. We look at our goals, make sure we’re on track for our OKR, which is our Objective and Key Result. And then we look at what experiments we’re going to run this week, and then the rest of the week is all going through that rapid iteration of that cycle of brainstorming, implementing, testing, analyzing, et cetera.

So you kind of go through these periods of 30 to 90 days of pure heads-down focus, and then afterwards you zoom out and say, “How good am I at predicting the success of these experiments? Are we predicting that we’re going to make big impacts, or small impacts? Are our resource-allocation predictions accurate?” And then you want to always be improving on throughput. So if you were able to run 50 experiments during a 90-day period, in your next 90 days you want to be able to run 55 or 60. You always want to be improving.

by Gareth Wilson at February 27, 2015 11:33 AM

February 26, 2015

Fog Creek

4 Steps to Onboarding a New Support Team Hire

At Fog Creek, every now and then the Support team has the pleasure of onboarding a new member. We know what most of you are thinking “Wait! Did they just say ‘pleasure’?” Yes, yes we did. Team onboarding does not have to be an irksome obstacle in your day-to-day work – it’s a key milestone for your new hire’s long-term success and the process should be repeatable and reusable.

If you’ve ever been in support, you know there can be a lot of cached knowledge representing the status quo, and there is usually, and sometimes exclusively, a “show and tell” style of training. This is the fire drill of knowledge transfer, and it’s an arduous process for all concerned. Not only does it take longer for the new hire to get up to speed, but you also have at least one person no longer helping your customers. For anyone who has ever worked in a queue, we’re sure you can agree that when someone steps out, the team feels the impact immediately.

Our Support team mitigates this by using a well-documented onboarding process. And we do it without a giant paperweight… err training manual. Similar to how we onboard new hires to the company, we also leverage Trello to onboard our new Support team hires. The items on the board are organized and the new hire just works down the list.

The board separates out individual items and team-oriented items. This keeps the new person accountable for their tasks, and it keeps the team involved so that they don’t accidentally abandon them.

[Image: Support team onboarding new-hire Trello board]

1. Read Up on the Essentials

The first item on the board is titled “What to read on your first day”. This card links to a wiki page that talks about the things the new person needs to know before they can do any real work.

Next, is the “Support Glossary”. This is essential as they’re going to hear words, phrases, and acronyms galore. So scanning through this card helps them start to get a feel for the “lingo”.

With this done, it’s time to join the company chat and get a few nice “hellos” and introductions from other folks in the company. Primarily, this stage helps them to start assimilating the knowledge they’ll need to be successful in the role.

The assimilation process starts with briefly describing the Support team’s role and responsibilities within the organization. This covers our two main workflows: interrupts and queue-based. Then we move on to our guiding customer service principles.

After reading several more cards, which each link off to wiki pages, the new person moves them over to the ever-so-rewarding “Done” column. Starting to feel accomplished, they can start to get their hands dirty.

You may be wondering “couldn’t they just have one card and link to a wiki page with a list of articles?” Sure. But, that process tends to be more of a rabbit hole, and we want our Support team hires to have just the right amount of information in phases, and not dumped on them all at once.

2. Dogfood Until it Hurts

After reading for what probably feels like weeks (not really, a day maybe), the new person starts using our products. Since we dogfood our own products, this is a great way to discover and learn about them. They can later use this experience to relate to, and help new customers. They create production accounts, staging accounts, and start a series of configurations. This helps them get into the Support workflow.

3. Go Under the Hood

Configuring web application sites isn’t all that hard, so we up the challenge. The new hire starts creating any necessary virtual machines (VM). Each one is identified on separate cards on the board, naturally. These VMs aid the new Support member in troubleshooting customer environments by replicating them as best as they can.

Since Kiln and FogBugz sit on top of databases, the new person also starts to configure those systems and get familiar with the database schemas. This helps build an understanding of our products’ foundations.

Once they have what we call the “basics”, they can start tricking out their dev machine. This card links to another board with all the juicy details maintained by all devs in the company.

4. Get Immersed in the Workflow

There are several more cards which discuss processes and procedures. These include when to use external resources, where they are located, and how to use them.

A key part of Support is a robust workflow. The team helps the new person get immersed into the workflow by adding them to scripts, giving the repository permissions in Kiln, adding them to recurring team calendar events, and so on. Most importantly, they start to see how the Support team shares knowledge and work on some real customer cases where they will be helping our customers be amazingly happy!

We’ve found that using a lightweight but clearly defined process to onboard a new hire to our Support team is key to their efficiency and long-term success. It helps the new hire become self-sufficient, as well as know where they can go for help as they gain experience.

by Derrick Miller at February 26, 2015 11:22 AM

February 25, 2015

Giles Bowkett

Agile Is Overripe

Haters Welcome

I wrote a blog post criticizing Scrum, and a bunch of people read it. A lot of people seemed to be talking about it too. I started regularly seeing 50+ notifications when I signed into Twitter, which was a lot for me.

There weren't a lot of people defending Scrum. Most of the tweets looked like this:

Of the tweets which defended Scrum, they mostly looked like this example of the No True Scotsman fallacy:

I've seen this from people who are old enough to know better, including one Agile Manifesto co-author, so it's entirely possible there's a little war afoot in the world of Scrum, over how exactly to define the term. Sorry, Scrum hipsters, but if there is indeed such a war, you either are losing it, or (more probably) you already lost it, years ago. I'm going to use the term as it's commonly understood; if you have an issue with the default understanding of the term, I recommend you take it up with Google, Wikipedia, and so on and so forth. I don't care enough to differentiate between Scrum Lite and Scrum Classic, because they both taste like battery acid to me.

However, I did get one person - literally only one person - telling me that Scrum actually works, and that includes planning poker:

(As it happens, it's someone I know personally, and respect. Everyone should watch his 2009 CUSEC presentation, because it's deep and brilliant.)

Another critic ultimately led me to this blog post by Martin Fowler, written in 2006:

Drifting around the web I've heard a few comments about agile methods being imposed on a development team by upper management. Imposing a process on a team is completely opposed to the principles of agile software, and has been since its inception...

a team should choose its own process - one that suits the people and context in which they work. Imposing an agile process from the outside strips the team of the self-determination which is at the heart of agile thinking.

I'm hoping to find out more, later, about what it's like when you're on a Scrum team and it actually works. To be fair, not every Scrum experience I've had has been a nightmare of dysfunction; I just think the successes owe more to the teams involved than to the process. And regarding Fowler's blog post, a lot of the people who endorsed my post seemed to do so angrily. So I would guess that many, many of these "fuck yeah" tweets came from people who had Scrum imposed on them, rather than choosing it. And therefore I think both of these areas of criticism are worth listening to.

However, of all the criticisms of my blog post that I saw, literally every single one overlooked what is, in my opinion, my most important criticism of Scrum: that its worst aspects stem from flaws in the Agile Manifesto itself.

Quoting the original post:

I don't think highly of Scrum, but the problem here goes deeper. The Agile Manifesto is flawed too. Consider this core principle of Agile development: "business people and developers must work together."

Why are we supposed to think developers are not business people?


The Agile Manifesto might also be to blame for the Scrum standup. It states that "the most efficient and effective method of conveying information to and within a development team is face-to-face conversation." In fairness to the manifesto's authors, it was written in 2001, and at that time git log did not yet exist. However, in light of today's toolset for distributed collaboration, it's another completely implausible assertion...

In addition to defying logic and available evidence, both these Agile Manifesto principles encourage a kind of babysitting mentality.

Sorry, Agile, I am in fact both a business person, and a developer, at the same time. Since my business involves computers, being competent to use them only supercharges my business mojo. This is how I achieve the state of MAXIMUM OVERBUSINESS.

More seriously, I recently started a new job at a company called Panda Strike; our CEO convinced me that the real value in the Agile Manifesto was that it facilitated a change in business culture which was actually inevitable due to a technological shift which happened first.

Moore's Law Created Agile

Agile development replaced waterfall development, an era of big design up front. In waterfall development, you gather requirements, write a spec, get approval on the spec, build your software to match that spec, then throw it over a wall to QA, and only show it to your users once you're done. It's important to realize that big design up front powered a ton of incredible success stories, including putting astronauts on the moon, plus nearly everything in software before the late 80s or early 90s, with the possible exception of the Lisp machine.

I don't want to bring back that era, but to be fair, we lost some things in this paradigm shift. And I think it's pretty easy to imagine how rapid prototyping, iterative development, and YAGNI might all be inappropriate for putting astronauts on the moon. That kind of project wouldn't fit a "design as you go" mentality. It would look like something out of The Muppet Show, except people would die.

In the very early days of computing, you'd spend a lot of time working out your algorithm before turning it into a stack of punch cards, because you wouldn't get a lot of chances to run your code; any error was very expensive.

Big design up front made an enormous amount of sense when the machinery of computing was itself enormous also. But that machinery isn't enormous any more, and hasn't been enormous for a long time. According to someone who's done the math:

a tweaked Motorola Droid is capable of scoring 52 Mflop/s which is over 15 times faster than the 1979 Cray 1 CPU. Put another way, if you transported that mobile phone back to 1987 then it would be on par with the processors in one of the fastest computers in the world of the time, the ETA 10-E, and [those] had to be cooled by liquid nitrogen.

Like all benchmarks, however, you need to take this one with a pinch of salt... the underlying processors of our mobile phones are probably faster than these Java based tests imply.

In between the days of the Cray supercomputer and the modern landscape of mobile phones which can run synthesizers good enough for high-profile album releases and live performances, there was the dawn of the personal computer. As the technology got smaller, faster, and cheaper, Moore's Law rendered a whole lot of management practices obsolete. Development cycles of two entire years were common at the time, but new teams using new technology could churn out solutions in months rather than years. PowerBuilder developers launched a revolution underneath COBOL devs starting around 1991, in the same way Rails developers later dethroned Java starting around 2005, after it became possible to build simple web apps in minutes rather than months.

In our lifetimes, it may become possible for software-generating software to churn out new apps in seconds, rather than minutes, and if/when that occurs, the culture of the tech industry (which, by then, may be equal to the set of all industries) will need to change again. It's hard to see that far with accuracy, but as far as I know, there are basically just two ways a business culture can transform: evolution and persuasion. Evolution is where every business which ignores the new reality just fucking dies.

Persuasion is where you come up with a way to sell a new idea to your boss. This is pretty much what the Agile Manifesto was for. In the early days of Agile, the idea that your boss would force it on you was a contradiction in terms. Either you forced it on your boss, or it just didn't happen at all.

Obviously, times have changed. Quoting Dave Thomas, one of the Agile Manifesto's original authors:

The word "agile" has been subverted to the point where it is effectively meaningless, and what passes for an agile community seems to be largely an arena for consultants and vendors to hawk services and products.

So I think it is time to retire the word "Agile."

Epic Tangent: Ontology Is Overrated

One of the best tech talks I've ever heard, "Ontology Is Overrated" by Clay Shirky, covers a related topic. It's ancient in web terms, hailing from all the way back in 2005, when Flickr and del.icio.us were discovering the incredible power of tagging, something we now take for granted. The talk includes an interpretation of why Google crushed Yahoo, during the early days of Web search engines. A sea change in technology brought with it a philosophical sea change, which Yahoo ignored - even going so far as to re-establish obsolete limitations - and which Google exploited.

I'll summarize the talk, since text versions don't appear to be online any more. You can still read a summary, however, or download the original audio, which I definitely recommend. It's a talk which stuck with me for almost ten years, and I've heard and given many other talks during that time.

When you look at the Dewey decimal system, which librarians use for storing books on shelves, it looks like a top-down map of all ideas. But it fails very badly as a map of all ideas. Its late 19th-century roots often become visible.

Consider how the Dewey decimal system categorizes books on religion, in 2014:
  • 200 Religion
  • 210 Natural theology
  • 220 Bible
  • 230 Christian theology
  • 240 Christian moral & devotional theology
  • 250 Christian orders & local church
  • 260 Christian social theology
  • 270 Christian church history
  • 280 Christian denominations & sects
  • 290 Other & comparative religions
Asian religions get the number 299, and they have to share it with every tribal and/or indigenous religion in Australia, Africa, and the Americas, as well as Satanism, Discordianism, Rastafarianism, and Pastafarianism. Buddhism, however, shares the number 294 with every other religion which originated in India. So at best that's a number and a half, out of 100 available, for Asian religion, and all associated topics. Asia contains about 60% of the world's population.

As a map of all the ideas about religion, this is horribly distorted, but it's not actually a map of ideas about religion. It's really just a list of categories of physical books in the collections of American libraries.

Before Google existed, Yahoo first arose as a collection of links, and soon grew large enough to be unwieldy - at which point, Yahoo hired an ontologist and categorized its links into 14 top-level categories, creating in effect a Dewey decimal system for the web. But Yahoo innovated a little, bringing in an element of Unix. If you clicked on the top-level category "Entertainment," you'd get a "Books@" link, where the little "@" suffix served to indicate a symlink. Clicking that would land you in "Books and Literature," a subcategory of "Arts," because according to Yahoo, "Books" were not really a subcategory of "Entertainment."

Librarians use a similar workaround in their systems, namely the fractional decimals which indicate subcategories, so you can say (for example) that a book is about Asia, and about religion. These workarounds are inevitable, because (for example) books can be both literature and entertainment. Or, to be more general, categories are social fictions, and to put a book about Asian religion in the Asia category, rather than the religion category, is to say that its Asian-ness is more important than its religion-ness. The hierarchical nature of ontology means it always imposes the priorities of whichever authority or authorities created the hierarchy in the first place. But with a library, you have an excuse, because a physical book can only be in one place at a time. With web links, there's no excuse.

So, rather than applying this legacy physical-shelf-locating paradigm to a set of web pages, Google allowed you to simply search the entire web. You could never expect librarians to pre-construct a subcategory called "books which contain the words 'Minnesota' and 'obstreperous,'" but Google users in 2005 could work with exactly that subcategory any time they wanted. Flickr and del.icio.us took these ideas much further, creating ad hoc quasi-ontologies by allowing users to tag things however they wanted, and then aggregating these tags, and deriving insight from them.
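The mechanism behind both full-text search and tag aggregation can be sketched as an inverted index: rather than filing each page under one pre-built category, you index every term, and any conjunction of terms becomes an ad hoc category on demand. Here's a minimal sketch in JavaScript, with hypothetical page data (this is an illustration of the idea, not how Google or Flickr actually implemented it):

```javascript
// Minimal inverted-index sketch: each term maps to the set of page ids
// containing it. Any conjunction of terms ("minnesota" AND "obstreperous")
// becomes an ad hoc category, with no pre-built hierarchy required.
function buildIndex(pages) {
  const index = new Map();
  for (const [id, words] of Object.entries(pages)) {
    for (const word of words) {
      if (!index.has(word)) index.set(word, new Set());
      index.get(word).add(id);
    }
  }
  return index;
}

function search(index, terms) {
  // Intersect the posting sets for every term.
  const sets = terms.map((t) => index.get(t) || new Set());
  return [...sets.reduce((a, b) => new Set([...a].filter((id) => b.has(id))))];
}

// Hypothetical pages standing in for web documents:
const pages = {
  page1: ["minnesota", "obstreperous", "weather"],
  page2: ["minnesota", "lakes"],
  page3: ["obstreperous", "toddlers"],
};
const index = buildIndex(pages);
console.log(search(index, ["minnesota", "obstreperous"])); // -> [ 'page1' ]
```

No ontologist ever had to decide whether "minnesota" outranks "obstreperous"; the category exists only for the duration of the query.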

(Today, unfortunately, you might not get results containing both "Minnesota" and "obstreperous" if you searched Google for those words. Google's lost a tremendous amount of signal through its use of latent semantic indexing to detect synonyms, and to other, similar compromises. This diminishes the Google praise factor in Shirky's talk, but doesn't harm his overall argument in any important way. What does suggest a possible need for revision is the emergence of filter bubbles, where companies try to pre-emptively derive user-generated categories, and then confine you to them, based on what category of user they estimate you to be. Filter bubbles thus impose a new kind of crowd-sourced ontology, which holds serious dangers for democracy.)

Anyway, although this was a fantastic talk, the main point I want to make is that Google defeated Yahoo here by recognizing the whole concept of ontology for the unnecessary, inessential historical relic that it was. Google even briefly used the DMOZ project, an open-source categorization of everything on the web - yes, this actually existed, and it started life with the name Gnuhoo, because of course it did - but dumped DMOZ because nobody even used it when they could just search instead. Ontology is overrated, and Yahoo's failure to recognize that cost them an enormous market.

The Agile Manifesto existed because developers and consultants had begun to recognize that many ideas in tech management were unnecessary, inessential historical relics. Although it opposed these ideas, it didn't even argue that they should be thrown out entirely, just that they were overrated.

Remember, waterfall development reigned supreme. The Agile Manifesto did a great thing in improving working conditions for a lot of programmers, and in achieving new success stories that would have been impossible under the old paradigm. But I can't praise the Agile Manifesto for tearing down the status quo without also acknowledging that over time, it has become the new status quo, and we will probably have to tear it down too.

Synchrony Is The New Ontology

The most obvious flaw in the Agile Manifesto is the claim that face-to-face conversation is the best way for developers to communicate. It's just not true. There's a reason we write code onto screens, rather than transmitting it as oral history like an epic poem from before the invention of the written word. Face-to-face communication has a lot of virtues, and there are certainly times when it's necessary, but it's not designed to facilitate extremely detailed changes in extremely large code bases, and tools which are designed for that purpose are often superior for the task.

Likewise, I don't want to valorize a tired and harmful stereotype here, but there's a lot of development work where you can go days without needing to talk to anyone else for more than a few moments.

In many industries, companies just do not need to have synchrony or co-location any longer. This is an incredible development which will change the world forever. Do not expect the world of work to look the same in 20 years. It will not.

It's not just programming. Overpriced gourmet taco restaurants no longer need locations.

In 2001, when the Agile Manifesto was written, Linux was already a massive success story for remote work and asynchronous development. But it was just one such story, and somewhat anomalous. In 2014, nobody on the web is building a business without open source. And since just about every open source project runs on remote work and asynchronous development, there are very, very few new technology companies today which do not already depend on the effectiveness of both: these businesses would fall apart without their open source foundations, and those foundations were built with remote work and async dev.

The bizarre thing about most companies in this category, however, is that although they absolutely depend on the success of remote work and async dev, and although they absolutely and literally could not exist without the effectiveness of remote work and async dev, they nonetheless require their employees to all work in the same place at the same time.

Consider that GitHub's a distributed company where a lot of people work remote. Consider also that a lot of startups run development entirely through GitHub. This means a lot of CTOs will happily bet their companies on libraries and frameworks developed remotely, and a product which was developed remotely, yet they don't do remote dev when it comes to running their own companies.

Yahoo put ontology onto its web links simply because it never questioned the common assumption that if you want to navigate a collection of information, you do so by organizing that information into a hierarchy.

Why do tech companies have offices?

In this case, the Agile Manifesto just went stale. It's just a question of the passage of time. The apps, utilities, and devices we have for remote collaboration today are straight-up Star Trek shit by 2001 standards.

In 2001, when the Manifesto was written, you could argue against Linux as a model for development in general. Subversion was still new. Java (developed at a corporation, inside office buildings) was arguably superior to Perl, which was probably the best open source alternative at the time. There weren't profitable, successful companies built this way. You could call Linux a fluke. But we have profitable, successful, remote-oriented companies today, and legions of successful open source projects have validated the model as well.

A software development process that doesn't acknowledge this technological reality is just silly.

Two-year development cycles and big design up front were to 1990s programming as ontology was to 1990s web directories. They were ideas that had to die and Agile was right to clear them away. But that's what synchrony and co-location are today, and the Agile Manifesto advocates in favor of both.

And this synchrony thing isn't the only problem in the Agile Manifesto. I may blog in future about the other, deeper problems in the Manifesto; I already covered the "businesspeople vs. developers" problem in the Scrum post.

by Giles Bowkett at February 25, 2015 07:42 PM

Dave Winer

Comments on the Node Foundation

Eran Hammer posted a long piece yesterday about why he does not support a Node Foundation.

I am a relative newcomer to Node, having started developing in it a little over a year ago. I've shipped a number of products in Node. All my new server software is running in Node, most of it on Heroku. I love Node. Even though it's a pain in the ass in some ways, I've come to adore the pain; the problems are like crossword puzzles, and I feel a real sense of accomplishment when I figure them out.

The server component of my liveblog is running in Node, for example.

I am new to Node but I also have a lot of experience with the dynamics Hammer is talking about, in my work with RSS, XML-RPC and SOAP. What he says is right. When you get big companies in the loop, the motives change from what they were when it was just a bunch of ambitious engineers trying to build an open underpinning for the software they're working on. All of a sudden their strategies start determining which way the standard goes. That often means obfuscating simple technology, because if it's really simple, they won't be able to sell expensive consulting contracts. He was right to single out IBM. That's their main business. RSS hurt their publishing business because it turned something incomprehensible into something trivial to understand. Who needs to pay $500K per year for a consulting contract to advise them on such transparent technology? They lost business.

IBM, Sun and Microsoft, through the W3C, made SOAP utterly incomprehensible. Why? I assume because they wanted to be able to claim standards-compliance without having to deal with all that messy interop.

As I see it Node was born out of a very simple idea. Here's this great JavaScript interpreter. Wouldn't it be great to write server apps in it, in addition to code that runs in the browser? After that, a few libraries came along, that factored out things everyone had to do, almost like device drivers in a way. The filesystem, sending and receiving HTTP requests. Parsing various standard content types. Somehow there didn't end up being eight different versions of the core functionality. That's where the greatness of Node comes from. We may look back on this having been the golden age of Node.

There are reasons why, once a technology becomes popular, it's very hard to add new functionality. All the newcomers want to make a name for themselves by authoring one of the standard packages. Everyone has an idea how it should be done, and won't compromise. So what happens is very predictable, and NOT BAD. The environment stops growing. I saw that happening in RSS, as all the fighting over which way to rip up the pavement and start over took over the mailing lists. So when RSS 2.0 came out I froze it. No more innovation. That's it. It's finished. If you want to do new stuff, start a module (very much like the NPM packages of Node). Luckily, at that moment, I had the power to do that, as Joyent did with Node, when embarking on this ill-advised foundation track.

Now there are many things about Node culture that I don't understand, being a newbie, as I am. But based on what I know about technology evolution from other contexts, the lack of motion at Joyent wasn't a problem, it was realistic. It was what I, as a removed-from-the-fray developer want. I want this platform to stay what it is. I want to rock and roll in my software, not be broken every time someone decides we should hit the ball from one side of the plate, then the other, then back to the original. Nerd debates about technology never end, until someone puts their foot down and says, no more debates, it's done. And the confusion in those debates is always manipulated by the BigCo's who have motives that we'd be happier not really understanding. I know I would be. Nightmarish stuff.

I love Node. I want it to be solid, that's the most important thing to me. I'd love to see a list, in a very simple newbie-friendly language, that explains what it is that Node needs so desperately to justify both the fork, and the establishment of this foundation. Seems to me we might be pining for the good old days of last year before too long. ;-(

PS: Hat-tip to the io.js guys. In the RSS world, the forkers claimed the right to the name. At least you guys had the grace to start with a new name, so as not to cause the kind of confusion the RSS community had to deal with.

February 25, 2015 03:54 PM

Fog Creek

Help Work to Flow – Interview with Sam Laing and Karen Greaves




We’ve interviewed Sam Laing and Karen Greaves, Agile coaches and trainers at Growing Agile, in Cape Town, South Africa. Together they wrote ‘Help Work to Flow’, a book with more than 30 tips, techniques and games to improve your productivity. We cover how software developers can improve their productivity and manage interruptions, why feedback and visible progress are important to staying motivated and how teams can hold better meetings.

They write about Agile development techniques on their blog.


Content and Timings

  • Introduction (0:00)
  • Achieving Flow (2:06)
  • Importance of Immediate Feedback (3:07)
  • Visible Progress (4:27)
  • Managing Interruptions (5:42)
  • Recommended Resources (8:50)




Today we have Sam Laing and Karen Greaves who are Agile coaches and trainers at Growing Agile based in South Africa. They speak at conferences and write about Agile development. They’ve written 8 books between them, including their latest book, Help Work to Flow, part of the Growing Agile series. Sam and Karen, thank you so much for taking your time to join us today all the way from South Africa. Why don’t you say a bit about yourselves?

We both have worked in software our whole careers and we discovered Agile somewhere along the line about 8 years ago and figured out it was a much better way to work. Then in 2012, so just over 3 years ago, we decided that’s what we wanted to do, was help more people do this and figure out how to build better software. So we formed our own company, Growing Agile, it’s just the 2 of us and we pair work with everything we do, so you’ll always find us together.

Really what we do is just help people do this stuff better and use a lot of common sense. What’s been quite exciting in the last few years is now being business owners of our own small business. We’re getting to try lots of Agile techniques and things for ourselves and it’s lots of fun and a constant journey for us.

I wanted to touch on the book, Help Work to Flow, what made you want to write that?

What actually happened is we joined the meetup called the Cape Marketing meetup and it’s for small business owners to learn about marketing techniques and how to market their small businesses. In doing that we realized that a lot of them are very, very busy and think they need to hire people to help them with their admin. We ran a mini master class with them on Kanban on how to use that as a small business owner. They absolutely loved it. They’re still sending us pictures of their boards and how it’s helping them with their workflow. We are like, well actually we have a lot of these tips that could help other people so let’s just put them together into a book.

Achieving Flow

I want to touch on flow for a bit, it’s often elusive to developers. How can developers help themselves achieve flow?

The first thing that pops to mind is pairing. To help with flow, having a pair, this is my pair, helps a lot. When someone’s sitting next to you, it’s very difficult to get distracted by your emails and your phone calls and something else, because this person right here is keeping you on track and keeping you focused.

Another one would be to avoid yak shaving. I'm an ex-developer, I know exactly how this happens. You're writing a function for something and you need to figure out some method, and you go down Google, and then you go into some other chat room. Next thing you're on Stack Overflow answering questions other people have asked that have nothing to do with what you were doing. Again, to have a pair to call you on that yak shaving is first prize, but otherwise to recognize when you personally are yak shaving, bonus points.

Importance of Immediate Feedback

Enabling immediate feedback is said to be key to achieving flow. How can this be applied within the context of software development?

What we see lots of software teams do is, they all understand that code reviews are good, and if they're not doing pair programming then they should do code reviews. You even get teams where testers get their test cases reviewed, but often what we see is teams leave that kind of thing to the last minute: we've done everything, now let's just quickly do the review.

One of the things we teach teams is instead of thinking of it as a review, which is a big formal thing you do at the end, we use something that's just called "show me". I think we got it from Janet Gregory, who wrote the book on Agile Testing. Literally as soon as you're done with a small task, something that took you an hour, maybe 2, you grab someone else on your team, or someone working with you, and you show them for 5 minutes.

Like "Hey, could you just come and see what I did?" They quickly look at it, and you get immediate feedback. Does it meet their expectations? Are there any obvious issues? The great thing about reviews where you get to explain what you did is that sometimes you find the issues yourself. So definitely use that "show me" idea, versus making reviews a big thing. We've seen that make a radical change in feedback for teams.

Visible Progress

The book highlights the importance of clear goals and visible progress. What do you mean by visible progress and why is it important to flow?

When you physically write down what's top of mind, it kind of exits your brain, so you don't have to worry about those things anymore because you've written them down. Imagine a board in front of you with all these things that have been popping out of your head: when you move those stickies across to done, you get this sense of achievement.

You're automatically sending information out to those around you, and to yourself, on how much you've done versus how much you've still got to do. It helps you get into the flow of getting stuff done. It also helps other people realize how busy you are and whether they should or shouldn't add more to your workload.

It’s really interesting, earlier today our personal task board, which is a physical board that sits between us, was like overwhelmed with too much stuff. I was like, “I just feel like we’re not getting anything done.” Sam took all the stickies and she put them on another piece of paper and said, “Okay, here are the 8 we have to do today.” Just looking at a board with 8 things, I said, “Okay, I can do 8 things today.” Before I was like, “I can’t even start anything because there’s so much stuff here.” That really, really helped.

Managing Interruptions

Here at Fog Creek every developer has a private office, to help them sort of reduce interruptions. What other techniques can you recommend to help developers manage interruptions?

Firstly, you're quite lucky. Most of the teams we encounter here are using open-plan offices, and personal offices are quite rare. But one of the teams we worked with had a great idea, and it's in the book: it's called the point of contact for emergencies, pronounced POCE. In the team they agree who's going to be the interruptible person for the week, and they even put a sign on their doors: if you've got an urgent issue, speak to Adrian, Adrian is the POCE this week.

They rotate that within the team, so that person's going to have a lot of interruptions that week, but everyone else is interrupted a lot less. The rule is that person can then talk to team members if they can't solve the problem, but at least then team members are being interrupted by one person who's part of their team, and probably not as often as by people outside it. That's one idea you can use.

Another one is, if you do get interrupted, don't immediately act on the interruption. Add it to your task list, the list of work you think you're going to do that day, and prioritize it accordingly. Often if you get disrupted with "do this and do that and do that", you push your whole day out of sync, and those requests are usually not as important as other things.

We’ve touched on ways that an individual developer can help themselves but how can dev management go about encouraging flow across whole teams or organizations?

One of the biggest areas where we see waste is meetings. Often you have a lot of people involved in a meeting, and it's poorly facilitated or not facilitated at all. Sometimes the actual goal of the meeting is just to talk about what's going to happen in the next meeting, which is atrocious and such a waste of time.

For management to encourage having facilitators, to encourage people to get facilitation skills so that these meetings aren't such a waste of time, would already be a huge money and time saver. Also look at having meeting-free days, so that people aren't rushing from one meeting to the next, or only have one hour of work between 3 meetings. Rather have one day where you're completely immersed in meetings and not going to do any work, and then 2 days of no meetings, where you can focus.

I mean, we came up with that because we found with us being coaches and trainers, we’re often meeting new clients, giving them proposals or whatever. We go around quite a lot and sometimes we do onsite work and then we’re in the office. We just felt like we were getting nothing done. We did it the other way around, we came up with out-of-office days, so on 2 days a week we’ll be out of the office and everything has to fit into one of those 2 days if we’re meeting someone else. Then the other 3 days we had in the office and we actually got a lot of things done. That’s where we identified that and it still works really, really well for us.

Recommended Resources

What are some resources that you can recommend for people wanting to learn more about some of these types of techniques?

Really if you just Google productivity tips, you’ll find a whole bunch. The trick is that they’re not all going to work for you, so try a couple.

Even the ones in our book, not everything is going to work for you and it’s definitely not all going to work at the same time.

Our advice is find tips wherever you can, try them and then keep the ones that work.

Sam and Karen, thank you so much for joining us today.

Thank you.

Thanks very much.

by Gareth Wilson at February 25, 2015 11:52 AM

February 24, 2015

Dave Winer

My 'Narrate Your Work' page

I now finally have a Narrate Your Work public page.

This has been a goal of mine for many years.

I had a worknotes outline a few years back, but it wasn't like this.

It was published. This is just what I type as I type it.

I still have another level of project management in an outline that is not public, can't be, because it contains private information.

But I'm going to move more and more into the liveblog.

Also looking forward to actually liveblogging a live event with it. If I knew/cared more about the Oscars I would have done that. Maybe the NBA playoffs? We'll see.

February 24, 2015 10:33 PM

How to fix the Internet economy

Suppose you watched a movie illegally, because it was convenient to watch it at home, and you feel like a jerk for not paying the $15 for a seat in the theater.

There are lots of reasons not to go to a theater. Poor sound quality. People who bring infants to the theater, or talk about the movie loudly as if we were in their living room.

What if?

What if there was a way to pay for the movie, after-the-fact?

No movie chain is going to do this, so why not start a proxy for them?

  1. You'd log into a central site, if you're new, enter your credit card info

  2. Navigate to the page for the movie you just watched.

  3. Click the box, and click OK, and you've paid the fee.

I don't think it should be a voluntary amount. It should be the price of a ticket, either the average price, or the price at a theater local to you (think about this). This isn't charity, it's about convenience. You're not trying to avoid paying for the movie-watching experience, or negotiate a better price.

Where does the money go?

This is where it gets interesting.

At first, at least, no movie company is going to want this to exist. They might try to sue it out of existence. In the meantime, the money goes into escrow accounts, earning interest, to be paid to the rights-holder, when they demand it. Who is the rights holder? That's probably a large part of what will be litigated.

It would be like the lottery, the jackpot would keep growing, and the pressure would build (think about shareholders) to just take the money. And if that were to happen, we would have a whole new way of doing content on the Internet, for pay.


It's a way to bootstrap a new economy, one that will be useful in other contexts.

For example, I was just reminded that I could pay the New Yorker $1 per month to read all their articles. I would totally do this. If I didn't have to create an account, and give them all my info, and be subject to all the marketing these guys do. The price isn't just $1 a month. That's just what you pay so they can begin to aggressively upsell you. But I'd like to give them the money.

We need a middle-man here, some entity that doesn't belong to the vendors or the users. It cares equally for both. This would ultimately be good for the vendors, because the system is very inefficient.

February 24, 2015 09:30 PM

Question about JavaScript and XML

Update: I came up with a different solution, so this is no longer a priority for me.

I want to write a routine that emojifies an OPML file.

It takes OPML text as input, parses it, walks the whole structure, and emojifies the text, and then replaces the original text with the new text.

At the end of the traversal, it re-serializes the XML text, and returns it.

However, I get an error at the crucial point, the call to serializeToString.

Uncaught TypeError: Failed to execute 'serializeToString' on 'XMLSerializer': Invalid node value.
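
For context, a minimal sketch of the parse-walk-serialize flow (the function names and the emoji mapping here are illustrative, not the actual code from the post). In OPML the visible text lives in the `text` attribute of `<outline>` elements, so the walk rewrites attributes rather than text nodes:

```javascript
// Recursively rewrite the "text" attribute of every <outline> node.
// nodeType 1 is ELEMENT_NODE; this works on real DOM nodes in a browser.
function emojifyOutlines(node, fn) {
  if (node.nodeType === 1 && node.nodeName === "outline") {
    const t = node.getAttribute("text");
    if (t !== null) {
      node.setAttribute("text", fn(t));
    }
  }
  for (const child of node.childNodes || []) {
    emojifyOutlines(child, fn);
  }
}

// Illustrative substitution -- stand-in for whatever "emojify" does.
function emojify(text) {
  return text.replace(/:-\)/g, "\u{1F600}");
}

// In a browser the whole routine would look roughly like:
//   const doc = new DOMParser().parseFromString(opmlText, "text/xml");
//   emojifyOutlines(doc.documentElement, emojify);
//   return new XMLSerializer().serializeToString(doc);
// "Invalid node value" from serializeToString usually means something
// that is not an actual DOM Node was passed to the serializer or was
// inserted into the tree -- e.g. a plain string where a node belongs.
```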

February 24, 2015 07:57 PM

John Udell

GitHub for the rest of us

There's a reason why software developers live at the leading edges of an unevenly distributed future: Their work products have always been digital artifacts, and since the dawn of networks, their work processes have been connected.

The tools that enable software developers to work and the cultures that surround the use of those tools tend to find their way into the mainstream. It seems obvious, in retrospect, that email and instant messaging -- both used by developers before anybody else -- would have reached the masses. Those modes of communication were relevant to everyone.


It's less obvious that Git, the tool invented to coordinate the development of the Linux kernel, and GitHub, the tool-based culture that surrounds it, will be as widely relevant. Most people don't sling code for a living. But as the work products and processes of every profession are increasingly digitized, many of us will gravitate to tools designed to coordinate our work on shared digital artifacts. That's why Git and GitHub are finding their way into workflows that produce artifacts other than, or in addition to, code.


by Jon Udell at February 24, 2015 11:00 AM

February 23, 2015

Tim Ferriss


The Tim Ferriss Show - Glitch Mob

(Photo: Ralph Arvesen)

Justin Boreta is a founding member of The Glitch Mob. Their music has been featured in movies like Sin City II, Edge of Tomorrow, Captain America, and Spiderman.

In this post, we discuss The Glitch Mob’s path from unknown band to playing sold-out 90,000-person (!) arenas.  We delve into war stories, and go deep into creative process, including never-before-heard “drafts” of blockbuster tracks!  Even if you have zero interest in music, Justin discusses habits and strategies that can be applied to nearly anything.  Meditation?  Morning routines?  We cover it all.


The Glitch Mob’s last album, Love Death Immortality, debuted on the Billboard charts at #1 Electronic Album, #1 Indie Label, and #4 Overall Digital Album. This is particularly impressive because The Glitch Mob is an artist-owned group.  It’s a true self-made start-up.

This podcast is brought to you by Mizzen + Main. Mizzen + Main makes the only “dress” shirts I now travel with — fancy enough for important dinners but made from athletic, sweat-wicking material. No more ironing, no more steaming, no more hassle. Click here for the exact shirts I wear most often. Order one of their dress shirts this week and get a Henley shirt (around $60 retail) for free.  Just add the two you like here to the cart, then use code “TIM” at checkout.

This episode is also brought to you by 99Designs, the world’s largest marketplace of graphic designers. Did you know I used 99Designs to rapid prototype the cover for The 4-Hour Body? Here are some of the impressive results.

QUESTION(S) OF THE DAY: What music do you listen to when you work? When you really need to get in the zone? Please share in the comments.

Do you enjoy this podcast? If so, could you please leave a short review here? I read them, and they keep me going.

Scroll below for links and show notes…

Selected Links from the Episode

Website | Facebook | Twitter | Instagram | YouTube

Learn More about The Glitch Mob




Commercial Work

Show Notes (Time Stamps Approximate)

  • World-class attributes of Justin Boreta
  • The Grant Korgan story
  • Unique attributes of The Glitch Mob and the feeling of being on stage in front of 90,000+ people
  • Defining “indie” and “artist owned”
  • The makeup and evolution of The Glitch Mob team
  • Tools and software of The Glitch Mob
  • What exactly is “mastering”?
  • Deconstructing audio engineering software and Ableton
  • How to have your music featured in massive motion pictures
  • The story of the Sin City II trailer
  • Justin plays Animus Vox [approx 36:30]
  • The fourth member, Kevin, and his role in the success of the business
  • Developing the creative process as success comes into play
  • Soliciting feedback, Justin Boreta-style
  • Describing a day in the studio for The Glitch Mob
  • Commonalities of the most successful songs
  • The importance of traditional instrument skills when performing/producing music
  • Justin plays the never before heard 6th version of Our Demons, followed by the finished product [57:30]
  • A rapid learning program for music production
  • The draft version of Fortune Days, followed by the finished product [1:03:15]
  • How many separate tracks are running in a Glitch Mob song?
  • What percentage of samples are custom vs. off-the-shelf?
  • Current revenue streams for The Glitch Mob
  • Favorite pastry, pre-show meditation, defining success, and advice for his 20-year-old self
  • What EDM show should the uninitiated go to first, morning rituals, meditation and morning workouts
  • What is the best piece of advice you’ve ever received? [1:40:20]
  • Justin plays us out with Can’t Kill Us [1:48:45]

People Mentioned

by Ian Robinson at February 23, 2015 10:16 PM

John Udell

Literate programming is now a team sport

In the mid-1980s I worked for a company that squandered a goodly number of tax dollars on a software project for the Army. Do horrors spring to mind when you hear the phrase "milspec software"? It was even worse. My gig was milspec software documentation.

We produced some architectural docs, but our output was dwarfed by piles of that era's precursor to Javadoc: boilerplate extracted from source code. During one meeting, the head of the doc team thudded a two-foot-thick mound of the stuff onto the table and said: "This means nothing to anyone."


Meanwhile, we were following the adventures of Donald Knuth, who was developing an idea he called literate programming. A program, he said, was a story told in two languages: code and prose. You needed to be able to write both at the same time, and in a way that elevated the prose to equal status with the code. Ideas were explored in prose narration, at varying levels of abstraction, then gradually fleshed out in code. 


by Jon Udell at February 23, 2015 11:00 AM

February 22, 2015

Giles Bowkett

Rest In Peace, Carlo Flores

This is my third "rest in peace" blog post since Thanksgiving, and in a sense, it's the saddest.

My friend Carlo Flores has died. He lived in Los Angeles, and on Twitter, every hacker in Los Angeles was going nuts with grief a few days ago. None of them would say what happened. I had the terrible suspicion Carlo killed himself, and I eventually found out I was right:

I can't even imagine the pain he must have been in to seek a way out, knowing that it would make all of us miss him this way.

I love you Carlo. I'm going to rock for you today.

by Giles Bowkett at February 22, 2015 03:49 PM

Dave Winer

Good fences make good neighbors

This is really a very simple idea, but a hard one for a lot of people to grasp, probably due to the way our culture mystifies personal relationships. In the classic romance, the Other is only thought to really love you if they can anticipate your every need. If they can read your mind. This is actually an infantile version of love. A parent has to be able to discern the needs of an infant who doesn't have language to explain that his or her diaper needs changing, or has gas, or feels vulnerable and needs a hug, or whatever. It's a guessing game, but there's no choice. But once we develop language, you no longer have to guess what The Other is feeling, he or she can tell you.

There's a great scene in one of my favorite movies, As Good As It Gets, where Melvin is doing something very sweet for Verdell, the dog. An observer says "I want to be treated like that." What she's really saying is she misses being a baby. It's a fine feeling, because it really was, for many of us, great being an infant. But it's a bad basis for an adult relationship.

"Good fences make good neighbors" comes from a famous Robert Frost poem, and I think he's being ironic, but it's still true. If you want to really love someone, you have to always recognize that they are a separate person, and unless you ask, you do not know how they're feeling, or what they really mean, etc. The opposite is true. Unless you've said something, clearly, your husband or wife has no idea what you think. It's so boring to have to say everything, but that's how you stay sane and build trust.

Imagine a relationship as a circle, and draw a vertical line through it. On one side, write the other person's name, and write your name on the other side. You stay in your side, and they stay in theirs. You can touch, share, admire each other, tell jokes, share truths, but only from your side of the line. Once you put your presence in their body and start talking about what they think or see, you've just been invasive. It's a form of abuse, a violation, a psychic rape. Most relationships are complete sloshes, with people all over the place all the time, you never know who's where when. No wonder no one trusts each other! You can't trust someone if you never know when they'll show up inside your body, without permission.

Over the years, I've kept friendships with people who are good at this separation. It's how we get to be intimate, because it's safe to do so. The ones that have fallen away are those where the Other thinks they know what you really mean, even if it isn't anything you said or did. As Melvin said in As Good As It Gets, "this is exhausting." Keeping the Other on their side of the line, after a while, gets too much, and you choose to spend your time with other Others.

As Sting sang: "If you love someone set them free."

Same idea.

PS: I've been doing most of my writing last week on my Liveblog. I love it. At this point I think I will migrate my blog over there. Same old story on Scripting News, it's always in motion. For now if you want to keep up on my writing, you have to follow both blogs. There's a new tab on Scripting News with the full contents of the liveblog. So you don't have to travel very far to keep up. And of course the liveblog has an RSS feed.

PPS: This post is not about you.

February 22, 2015 02:59 PM

February 18, 2015

Dave Winer

Scripting News is a feed reader

I like to think my blog is interesting because of what I write here, but it's also interesting in how it works. And this is something, I think, anyone with a little experience in HTML can appreciate.

If you open most web pages of news sites, you'll see a combination of markup, things in <angle brackets> that tell the browser how to present stuff, and the text that makes up the page. Do a view-source on the home page of the New York Times for example.

If you do the same thing on the home page of Scripting News, you'll see something quite different. The content isn't there. Look more closely, near the top, there's a section of script code that says where it is:

var urlRss = "";
var urlRiver = "";
var urlLiveblog = "";
var urlCardFeed = "";
var urlFlickrFeed = "";
var urlAbout = "";

Each tab on the home page displays a feed, each from a different source of content: my main blog, liveblog, cards, Flickr photos, links, river, and an outline with information about the site.

In other words: Scripting News looks like a blog, but it's actually a feed reader.
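
Mechanically, each tab's script fetches its feed, parses it, and renders the items into the page. A rough sketch of what such a tab renderer could look like (the names and markup are illustrative; this is not Scripting News's actual code):

```javascript
// Turn parsed feed items into the HTML for one tab. Each item is
// {title, link, description} -- the shape you'd extract from an RSS
// <item> after parsing the feed with DOMParser.
function renderTab(items) {
  return items
    .map(it =>
      `<div class="feedItem"><a href="${it.link}">${it.title}</a>` +
      `<p>${it.description}</p></div>`)
    .join("\n");
}

// In the browser, wiring a tab to its feed would look roughly like:
//   fetch(urlRss)
//     .then(r => r.text())
//     .then(xml => {
//       const doc = new DOMParser().parseFromString(xml, "text/xml");
//       const items = Array.from(doc.querySelectorAll("item"), el => ({
//         title: el.querySelector("title").textContent,
//         link: el.querySelector("link").textContent,
//         description: el.querySelector("description").textContent
//       }));
//       document.getElementById("rssTab").innerHTML = renderTab(items);
//     });
```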

February 18, 2015 04:17 PM

Ian Bicking

A Product Journal: Building for a Demo

I’ve been trying to work through a post on technology choices, as I had it in my mind that we should rewrite substantial portions of the product. We’ve just upped the team size to two, adding Donovan Preston, and it’s an opportunity to share in some of these decisions. And get rid of code that was desperately expedient. The server is only 400ish lines, with some significant copy-and-paste, so we’re not losing any big investment.

Now I wonder if part of the danger of a rewrite isn’t the effort, but that it’s an excuse to go heads-down and starve your situational awareness.

In other news there has been a major resignation at Mozilla. I’d read into it largely what Johnathan implies in his post: things seem to be on a good track, so he’s comfortable leaving. But the VP of Firefox can’t leave without some significant organizational impact. Now is an important time for me to be situationally aware, and for the product itself to show situational awareness. The technical underpinnings aren’t that relevant at this moment.

So instead, if only for a few days, I want to move back into expedient demoable product mode. Now is the time to explain the product to other people in Mozilla.

The choices this implies feel weird at times. What is most important? Security bugs? Hardly! It needs to demonstrate some things to different stakeholders:

  1. There are some technical parts that require demonstration. Can we freeze the DOM and produce something usable? Only an existence proof is really convincing. Can we do a login system? Of course! So I build out the DOM freezing and fix bugs in it, but I’m preparing to build a login system where you type in your email address. I’m sure you wouldn’t lie so we’ll just believe you are who you say you are.

  2. But I want to get to the interesting questions. Do we require a login for this system? If not, what can an anonymous user do? I don’t have an answer, but I want to engage people in the question. I think one of the best outcomes of a demo is having people think about these questions, offer up solutions and criticisms. If the demo makes everyone really impressed with how smart I am that is very self-gratifying, but it does not engage people with the product, and I want to build engagement. To ask a good question I do need to build enough of the context to clarify the question. I at least need fake logins.

  3. I’ve been getting design/user experience help from Bram Pitoyo too, and now we have a number of interesting mockups. More than we can implement in short order. I’m trying to figure out how to integrate these mockups into the demo itself — as simple as “also look at this idea we have”. We should maintain a similar style (colors, basic layout), so that someone can look at a mockup and use all the context that I’ve introduced from the live demo.

  4. So far I’ve put no effort into onboarding. A person who picks up the tool may have no idea how it is supposed to be used. Or maybe they would figure it out: I haven’t even thought it through. Since I know how it works, and I’m doing the demo, that’s okay. My in-person narration is the onboarding experience. But even if I’m trying to explain the product internally, I should recognize I’m cutting myself off from an organic growth of interest.

  5. There are other stakeholders I keep forgetting about. I need to speak to the Mozilla Mission. I think I have a good story to tell there, but it’s not the conventional wisdom of what it means to embody the mission. I see this as a tool of direct outward-facing individual empowerment, not the mediated power of federation, not the opting-out power of privacy, not the committee-mediated and developer driven power of standards.

  6. Another stakeholder: people who care about the Firefox brand and marketing our products. Right now the tool lacks any branding, and it would be inappropriate to deploy this as a branded product right now. But I can demo a branded product. There may also be room to experiment with a call to action, and to start a discussion about what that would mean. I shouldn’t be afraid to do it really badly, because that starts the conversation, and I’d rather attract the people who think deeply about these things than try to solve them myself.

So I’m off now on another iteration of really scrappy coding, along with some strategic fakery.

by Ian Bicking at February 18, 2015 06:00 AM

February 17, 2015

Dave Winer

A new tab on Scripting News

This is so recursive, I can't tell you how confused I am, I can only imagine how confused others are. But it works, so here goes.

I just added a new tab to Scripting News, called Liveblog.

It is based on the RSS feed for my liveblog.

The liveblog, as its name suggests, is faster than Scripting News. It contains notes from my work, ideas that are too long for tweets, or that I want to come back to later.

The liveblog is more casual than Scripting News. Not sure what that says about Scripting News. I've always wanted it to be loose and friendly, but there you go. I am posting regularly to the liveblog now, so if you want to follow me, that should be on my home page too, it seems.

Hopefully at some point there will be a "singularity" event where it all collapses down to one thing. But for now, all my feeds meet up on

And yes, I plan to release the liveblog software. But it's a tricky bit, and I want to get it right before letting it out in the wild. It can be hard to change software after it's released, and I'm still learning a lot about this.

PS: It seems like today is a good day to take a screen shot of the Scripting News home page. Continuously updated since 1997.

PPS: Thanks to my friend Hugh MacLeod for the excellent cartoon in the right margin of this post, from the early days of blogging.

February 17, 2015 05:03 PM

John Udell

Automation for the people: The programmer's dilemma

"Use the left lane to take the Treasure Island exit," the voice on Luann's phone said. That didn't make sense. We've only lived in the Bay Area for a few months and had never driven across the Bay Bridge, but I'm still embarrassed to say I failed to override the robot. As we circled back around to continue across the bridge I thought I heard another voice. It sounded like Nick Carr saying, "I told you so!"

In "The Glass Cage," Carr considers the many ways in which our growing reliance on automation can erode our skill and judgement. As was true in the case of his earlier book, "The Shallows," which grew out of an Atlantic Monthly article with the sensational title "Is Google making us stupid?," "The Glass Cage" has been criticized as the antitechnology rant of a Luddite. 


by Jon Udell at February 17, 2015 11:00 AM

Dave Winer on iPhone 6

A picture named mywordOnIPhone6.png

I think it finally looks good on a phone.

February 17, 2015 12:15 AM

February 16, 2015

Dave Winer

The meaning of life

Yesterday I introduced two people on Facebook, people who I thought would very likely get off on each other's imaginations, humor, positive outlook. So of course I introduced them. I think there's real magic possible in these things.

There are billions of minds on our mote of dust suspended in a sunbeam. Very few of them ever get to engage. Maybe the secret of life is that there is a truth that can only be uncovered if two people connect and share something. They might not even know what it is. One person has a lock, the other a key.

Random things happen leading you to think of other things, and then you end up back where you started. Earlier in the day I had lost a huge file because I was editing it in a buggy app, still in development. Then, on Facebook, a question pops up. Maybe Google might have my lost OPML file in a cache somewhere. That led me on a train of thought back to 2002, when I published a post saying how cool it would be if Google would not only index OPML, but understand it. A very small amount of code would have been needed, but it isn't any code I could have written. They would have had to do it. So I did my best to explain why it would be a great idea. These things almost never happen, either no one is listening, or they think I'm too insignificant to matter (the fallacy of working at a huge company, they forget that people are the same size no matter where they work).

My whole career I've been asking for small favors from platform vendors, almost always being turned down, and then spending 5 or 10 years doing what they do just so I can put the teeny little bit of code in there that I wanted. Often the reason given is security, but it's usually not really that. It's being too busy to listen (I understand, me too, sometimes). Or a perception about the significance of the person doing the asking.

Maybe the two lovely people I introduced on Facebook, in collaboration, will figure out how to make something like this work. I think they're both the kind of people who would just say "WTF let's give it a try" if someone made a suggestion. I watch for that in people. It's the rarest quality, but they're the only people you can actually do stuff with!

BTW it would still be fucking awesome if Google would parse and display OPML. I have a great editor for it. And we need services to run at scale doing intelligent things with these structures. Oh the fun we could (still) have! I never give up.

PS: There were a few times people said yes to these kinds of things and incredible stuff did actually result. One was the NYT and permission to use their content for RSS. Another was Microsoft and XML-RPC, which led to the idea of websites having APIs. Something that's still shaking up the tech world today (though few people talk about it). NPR adopting podcasting is in the same class.

PPS: It happened in the inverse when Adam Curry tried repeatedly to get me to build a framework for podcasting in Frontier (long before it was called podcasting, btw). My mind was pretty closed to the idea, at first. But to both our credit, he persisted and eventually I heard what he was saying.

PPPS: A few days ago I wrote about awards for technology. Let's add another award. Best combination of ideas. Don't just reward people for being brilliant or brave, reward them for working with other people. The more diverse the interests, the more rewarding.

February 16, 2015 07:03 PM

February 13, 2015

Decyphering Glyph

According To...?

I believe that web browsers must start including the ultimate issuer in an always-visible user interface element.

You are viewing this website, hopefully securely.

We trust that the math in the cryptographic operations protects our data from prying eyes. However, trusting that the math says the content is authentic and secure is useless unless you know who your computer is talking to. The HTTPS/TLS system identifies your interlocutor by their domain name.

In other words, you trust that these words come from me because this domain is reasonably associated with me. If the lock in your web browser’s title bar sat next to some other name, presumably you would be more skeptical that the content was legitimate.

But... the cryptographic primitives require a trust root - somebody that you “already trust” - meaning someone that your browser already knows about at the time it makes the request - to tell you that this site is indeed mine. So you read these words as if they’re the world according to Glyph, but according to whom is it according to me?

If you click on some obscure buttons (in Safari and Firefox you click on the little lock; in Chrome you click on the lock, then “Connection”) you should see that my identity has been verified by “StartCom Class 1 Primary Intermediate Server CA”, which was in turn verified by “StartCom Certification Authority”.

But if you do this, it only tells you about this one time. You could click on a link, and the issuer might change. It might be different for just one script on the page, and there’s basically no way to find out. There are more than 50 different organizations which could tell your browser to trust that this content is from me, several of whom have already been compromised. If you’re concerned about government surveillance, this list includes the governments of Hong Kong, Japan, France, the Netherlands, and Turkey, as well as many multinational corporations vulnerable to secret warrants from the USA.
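The chain-walking the browser does can be sketched in a few lines. This is a toy model with invented names, ignoring signatures, expiry, and revocation; its only point is that any root in the trust store can vouch for any site.

```python
# Toy model of how a browser decides to trust a certificate: it walks
# the chain of issuers until it reaches ANY root in its trust store.
# All names here are hypothetical; real validation also checks
# signatures, expiry dates, and revocation.

TRUST_STORE = {"Root CA A", "Root CA B", "Some Government Root"}

# Each "certificate" records who it identifies and who vouched for it.
# Roots are self-signed, i.e. they vouch for themselves.
CHAIN_DB = {
    "example.com": "Intermediate CA",
    "Intermediate CA": "Root CA A",
    "Root CA A": "Root CA A",
    "evil.example": "Unknown Root",
    "Unknown Root": "Unknown Root",
}

def ultimate_issuer(subject):
    """Follow issuer links until reaching a self-signed root."""
    seen = set()
    while subject not in seen:
        seen.add(subject)
        issuer = CHAIN_DB[subject]
        if issuer == subject:
            return issuer
        subject = issuer
    raise ValueError("issuer loop")

def browser_trusts(subject):
    return ultimate_issuer(subject) in TRUST_STORE
```

The user never sees which of the fifty-odd roots actually vouched; `browser_trusts` returns the same bare boolean lock icon either way.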

Sometimes it’s perfectly valid to trust these issuers. If I’m visiting a website describing some social services provided to French citizens, it would of course be reasonable for that to be trusted according to the government of France. But if you’re reading an article on my website about secure communications technology, probably it shouldn’t be brought to you by the China Internet Network Information Center.

Information security is all about the user having some expectation and then a suite of technology ensuring that that expectation is correctly met. If the user’s expectation of the system’s behavior is incorrect, then all the technological marvels in the world making sure that behavior is faithfully executed will not help their actual security at all. Without knowing the issuer though, it’s not clear to me what the user’s expectation is supposed to be about the lock icon.

The security authority system suffers from being a market for silver bullets. Secure websites are effectively resellers of the security offered to them by their certificate issuers; however, customers are practically unable to even see the trademark - the issuer name - of the certificate authority ultimately responsible for the integrity and confidentiality of their communications, so they have no information at all. The website itself also has next to no information, because certificate authorities are under no regulatory obligation to disclose or verify their security practices.

Without seeing the issuer, there’s no way for “issuer reputation” to be a selling point, which means there’s no market motivation for issuers to do a really good job securing their infrastructure. There’s no way for average users to notice if they are the victims of a targeted surveillance attack.

So please, browser vendors, consider making this information available to the general public so we can all start making informed decisions about who to trust.

by Glyph at February 13, 2015 08:52 PM

Dave Winer

Doors and extra bedrooms

Explaining yesterday, to Doc, in a tweet: "Ev built a nice house but didn't put a door on it. So I built one so I could have a door." This happens a lot in software, it turns out. Then I tweeted "This happened with Matt Mullenweg's house too. I needed a door, he wouldn't add it, so I had to build the whole house just to get the door."

That was necessarily abbreviated to fit in 140 chars. More accurately, I needed an extra bedroom for each post so the source code for the post could be stored alongside the rendered version.

Why the source code?

So what really happened without all the metaphors?

In the case of WordPress, let's say you want to make a great blog post editor, but you don't want to have to write a whole blogging system, or you want to let people use it with WordPress which is incredibly popular.

So the user creates a post, saves it, we render it, then send it to the blogging platform, WordPress. The author makes some changes, and it's still good, we just tell WordPress to update the post. But what happens if two weeks from now, the editor is long-closed, or the user is on another system, and they need to update the post? No can do. Because I don't have the source code for the post.

I can get the rendered version from WordPress, but we were working at a higher level. If we had had a place to put the source, alongside the rendered version, we would have been in business.

I asked, nicely I hope

I asked that WordPress allow me to store a bit of data along with the post. They did consider the idea, but explained that this might create security issues, so they couldn't do it. So I built a whole CMS, so I could have the editor I wanted. This post is written using that CMS.

The missing front-door on Medium

Re Ev's product, Medium -- I wanted a front door so I could hook their great rendering engine into my writing environment. I really shouldn't be doing what I'm doing; they should just provide an easy way for other tools to hook in.

What I have is 90 percent of the effect people are looking for, and it's good for the web. An ecosystem could develop around it. I know Ev doesn't believe in software ecosystems. Fine. Let's see if he's right.

I know there are great CSS guys out there who don't work for him. It's open source, so anything can happen. And maybe writers will help this part of the web stay free from Silicon Valley siloization. I know it's a dream, but I'm a dreamer.

February 13, 2015 06:18 PM

It's time to finish Twitter

Tweets are objects

Twitter calls them tweets, but in programming terms, they are objects.

Objects have attributes. If an object has a picture attached to it, that should not be represented as part of the text attribute of the object. There should be a separate attribute for each picture. Yet, totally for historic reasons, that's how it works in Twitter.

Note that in the API they are objects. It's just in the UI that they aren't. That seems wrong.
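The distinction reads clearly in code. A hedged sketch, with invented field names rather than Twitter's actual API schema:

```python
from dataclasses import dataclass, field

# Illustrative only: a tweet-as-object with media and links as
# separate attributes, versus the status quo where a URL is
# smashed into the text itself.

@dataclass
class Tweet:
    text: str                                   # just the words
    images: list = field(default_factory=list)  # pictures as attributes
    links: list = field(default_factory=list)   # hyperlinks as attributes

# Attribute-style: the text stays clean for readers.
good = Tweet(text="Sunset over the bay",
             images=["https://example.com/pic.jpg"])

# Status-quo style: a dangling URL pollutes the text attribute.
bad = Tweet(text="Sunset over the bay https://t.co/abc123")
```

With the first form, a client can render the image inline and never show the reader a raw URL at all.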

Hyperlinks are mature technology, let's use them

When Twitter was invented we already had conventions for embedding addresses in text. They are called hyperlinks. I think Twitter's designers should decide if they want to have hyperlinks, and if so they should support them the way the web does. There was no good reason to make it work differently. I guess at the time Twitter had problems scaling their servers, so that's where their attention was focused. But those days are behind us.

Neaten up tweets, get rid of the dangling URLs

Seriously, the onboarding process of Twitter is made more difficult and confusing for people because of all those URLs dangling all over the place. When showing a non-technical user Twitter you have to tell them to ignore those things. Or try to explain why some tweets have two URLs and what they mean. It's 2015. It's almost ten years. Wouldn't it be great to finish this product, really, before it turns ten?

Make images an attribute (actually in some cases I think they are). Or allow text to be hyperlinked. Even better: do both.

140 character limit? Give it up

And while you're at it, let us put more than 140 chars in a tweet. Yes I know the drill, it's supposed to be better if people are forced to be concise. To which I say "Have you looked at your timeline recently?" There are lots of ways to not be concise, and people are using all of them. Give it up. It's over. And btw there are plenty of important, good ideas that require more than 140.

Hashtags are intimidating, they could be easy, even fun

Another thing: It should be easy to ask Twitter what a given hashtag means. If necessary use Mechanical Turk to implement this. In this area, Twitter is mysterious and difficult for newbies and experts alike. These are easy fixes. At some point Twitter stopped improving. Use some of the resources you have to fix them! It's long overdue.

Hashtags are not some obscure technical artifact, as they were when Twitter started. They are now part of human language. Look at ads on buses if you don't believe me. I think baseballs have hashtags on them now!

If Twitter made hashtags easy people would cheer hooray! when Dick Costolo walked into a room. (Not that they don't already.) Hey you could even make the definitions funny.

February 13, 2015 04:16 PM

The Dude hates The Eagles

This kind of shit happens all the time.

February 13, 2015 03:52 PM

David Carr

We met on October 16 last year at a theater district bar in NYC.

It was a getting-to-know-you meeting. Lively conversation, even though our points of view were very far apart. I liked him. It's so sad that he's gone.

Whenever a Carr column popped up in my river I read it from beginning to end.

February 13, 2015 03:50 AM

February 12, 2015

Dave Winer

Something fun I whipped up

I have a new writing tool I'm working on and wanted it to be easy to create beautiful essay pages from it, like Medium. But Medium doesn't have an API. So I made an essay-viewer that has one.

Now, I'm not a great CSS hacker. That's obvious. But all that's needed is a framework to get started. That's the idea.

I'm going to keep iterating over it. I really like the way it feels. And if you have any ideas on how to improve it, please feel free to fork.

How to make an essay of your own

  1. It's not pretty.

  2. You have to type in a technical language called JSON. It makes sense and it's pretty simple, but it's also exacting. If you misplace a quote or a comma it will fail. If you've been thinking you want to learn to code, this would be a very easy way to start. (Another way of saying that programming isn't easy.)

  3. You must have a way of storing a file in a public place with an http:// address. I like to use the Public folder in Dropbox.

  4. Start with one of the example JSON files, for example this one.

  5. Edit it, and upload it. Suppose the new address is this:

  6. You can get the essay-viewer to display it with this URL:

That's really all there is to it!
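Step 2's warning about exacting syntax is easy to demonstrate. The essay schema below is hypothetical (the post doesn't show the real field names); the point is how a single stray comma makes parsing fail:

```python
import json

# A hypothetical minimal essay file -- these field names are made up,
# not the viewer's actual schema.
valid = '{"title": "My Essay", "paragraphs": ["First.", "Second."]}'
essay = json.loads(valid)  # parses fine

# JSON is exacting: one trailing comma and parsing fails outright.
broken = '{"title": "My Essay", "paragraphs": ["First.", "Second.",]}'
try:
    json.loads(broken)
    parsed_ok = True
except json.JSONDecodeError:
    parsed_ok = False
```

A validator like this is also a handy sanity check before uploading the file to your public folder.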

An example that works

February 12, 2015 06:58 PM

No one hears the anti-vaxxers

Note: Small spoiler re Boyhood movie at the very end.

I read a story on Joey de Villa's blog yesterday about an over-55 community in Florida where people have wild consensual sex all the time. Being over 55 myself I have to say it sounded interesting.

I'm getting ready for something different. I like programming, and I'll keep doing it for sure, as long as my right arm holds up (I'm having pretty radical RSI pain right now). But I'm not trying to change the world, or make a billion dollars. I have no expectation that I can change the world, and no great ideas on how to do it either. And I don't need more money than I have saved, I think. The stock market been berry berry good to me. (You have to be of a certain age to get that joke.)

Anyway the retirement community story said something. They've thought of everything by now. There isn't room to be innovative in lifestyle anymore. Back when I was a kid, that's what we thought we were doing. Creating something new in life. But the hippies are either gone or very old. Did you see what Joni Mitchell says about Bob Dylan? She looks like my grandmother! There's just a cranky old person being cranky. No the hippies didn't really break through in lifestyle. We all got old anyway.

Here's the thing: it's got to be even worse for young people with kids today. That feeling that everything is all figured out. There isn't any need for me to think up anything creative to do with my life. If I try, I just find out that someone thought of it long ago, and it has a price tag. So you look for little ways you can be different. Unfortunately not vaccinating kids turned out to be one of them. It could have been not eating cheese, something harmless. But it was vaccinating kids.

I thought Obama understood this, btw, and had ideas for how to involve all of us in making the world work better. If the US can elect a black president, can't we lead the world in making a difference, as people? Nah, he didn't get any big ideas until Year 6 of his presidency, when it's almost too late. And his big ideas didn't turn out to be very great either. Just that he could be an asshole to the Republicans and they might relate to that, appreciate that, respect that. It was true back at the beginning and he should have been doing that all along. And it would have been nice if each of us had a way to feel our life had meaning, that we were making a difference, and being creative, heard and understood -- by someone, anyone. I think that's the real crisis of our times, in the first world at least.

The mom in Boyhood said it best: "I just thought there would be more."

February 12, 2015 02:27 AM

February 10, 2015

Reinventing Business

Money is an Abstraction of Time

My friend Nancy sent me this in an email, and allowed me to post it here:
Money is basically an abstraction of time.  Time is real (to our physical, corporeal selves), money is imaginary.  Sort of a 'willing suspension of disbelief' construct that makes the material side of modern life function.  But at a certain point, money, like Newtonian physics, breaks down.  Because everyone knows that on a personal level, time is a relentless pursuer.  Material stuff is evanescent. Remember back when Adobe put all of the engineers' names on the splash screen of a project?  The business psychology of that was that ownership was important to creators.  But really, underlying that idea is something that looks like a straight across trade: time for time.  Chris told me recently that there is a saying that you don't die until after the last time your name is spoken. Offering a creative person money in exchange for being buried alive in a cubicle isn't any kind of trade.  It's more like indentured servitude.  Pile a bunch of non-disclosure/non-compete agreements onto that and it looks more like slavery...
I also see money as a social agreement. It is a way to transport value, to trade the value I produce for the value you produce. For this reason I find the ways that bankers and stock market traders come up with to steal this value particularly reprehensible; it only takes value out of the system without adding anything (caveat: not everything bankers and stock traders do falls under the "stealing" category, of course; many of them provide real value).

by (Bruce Eckel) at February 10, 2015 10:29 PM

Dave Winer

We could have an open, user-controlled, ad-free Facebook

The other day my dear friend NakedJen was waking up to the power Facebook has because we use their system. She saw an endorsement by her friend, in the right margin on Facebook, of a product. It had her picture on it. She wondered if her friend had been paid for the endorsement, or even consulted. While I don't know for sure, I think the answer is "neither." Facebook has the right to do that. I'm sure it's in the user agreement. Which we all agree to, or we wouldn't be using Facebook.

The conversation continued.

I told her that I had stayed off Facebook for years because I didn't want to appear to endorse this system. But eventually the battle was lost, and my holding out wasn't accomplishing anything other than cutting me off from a social phenomenon. If I wanted to develop software in a post-Facebook world, if I didn't understand Facebook, my software would be missing an important historic precedent. Facebook exists, and nothing I can do can change that. So I might as well join the party, and I did, and no regrets.

Thing is, we've already ceded this kind of power to Google with video via YouTube. I heard a report on NPR on Sunday that was very depressing, an interview with a musician saying that YouTube had given her an agreement, take it or leave it, that said either you sign everything over to us, or you can't be part of YouTube. I didn't get the full story, just the gist. She said she thought the Internet was going to free us from the music industry. But it didn't do that. The music industry has rebooted, on the Internet. The money just flows to different bank accounts now.

You can see the process of ceding control happening right now, as essayists post their stories on Medium instead of their own blogs. There seems to be an assumption that you get more flow if you do this. I kind of doubt it. But even if there were more flow, you're ultimately forcing all of us to accept a deal that's probably going to be as bad as or worse than the one Facebook and YouTube have given us. Why? Because the lawyers and entrepreneurs of tech are learning, and they're getting better at grabbing, and users are not acting in their self-interest, any more than they were when Facebook and YouTube were taking over.

But even today it's not too late. Because economically and technically, we could reproduce what we have on Facebook on open systems, where everyone controls their own space, without signing over the kinds of rights that make users feel used. We saw some of the enthusiasm when Diaspora launched a few years ago, but they were college students, and weren't realistic about how to bootstrap such a system. You might say But Zuck was a college student when he booted up Facebook. But that system got to grow slowly, and their mistakes weren't exposed so quickly because they were small when they happened. If you wanted to boot up something that would do what Facebook does, today, you'd have to be prepared for a much bigger user community, almost immediately.

But what is Facebook, really? Where is the value in it, and is that so hard to reproduce? Seems to me it's basically a discussion board with a network data structure called "the graph." It's FriendFeed 2.0 (the current Facebook was designed by the creator of FriendFeed). There doesn't appear to be any rocket science in there. Maybe there is and I'm missing something. (There's no mystery to a graph. When I was a math undergrad I studied graph theory and wrote software that processed graphs. Long before there was a Facebook.)
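For what it's worth, a minimal version of "the graph" really is undergrad material. A sketch with invented names:

```python
from collections import defaultdict

# A social graph is just nodes (people) and symmetric edges
# (friendships), stored here as adjacency sets.
friends = defaultdict(set)

def befriend(a, b):
    # Friendship on Facebook is mutual, so add the edge both ways.
    friends[a].add(b)
    friends[b].add(a)

befriend("alice", "bob")
befriend("bob", "carol")
befriend("alice", "carol")
befriend("carol", "dave")

def mutual(a, b):
    """People friends with both a and b -- the 'mutual friends' box."""
    return friends[a] & friends[b]
```

Everything else (feeds, likes, comments) hangs off this structure, but the graph itself is a few lines of set operations.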

Economically, there are huge economic resources that can be marshalled by users pooling their money. This isn't speculative, the money does flow. And I don't think each Facebook user consumes all that much in the way of computing and storage resources. $100 a year perhaps? Would you be willing to pay that to control your online destiny? You probably pay half that each month to your ISP.

It seems to me that all that's needed is the will to do it. By a few developers, and by a few users, to get a bootstrap started.

I'm not advocating anything. This isn't a proposal of any kind. But I thought about this the other day and asked myself the question -- is it possible? And I decided it is possible. So I thought, being a blogger and a developer and a user, as I am, that I should say that.

February 10, 2015 03:05 PM

February 09, 2015

Tim Ferriss



Matt Mullenweg has been named one of PC World’s Top 50 People on the Web, a “30 under 30” honoree, and one of Business Week’s 25 Most Influential People on the Web.

In this episode, I attempt to get him drunk on tequila and make him curse.

Matt is most associated with a tool that powers more than 22% of the entire web: WordPress. Even if you aren’t into tech, there are many pages of “holy shit!” tips and resources in this episode.

Matt is a phenom of hyper-productivity and does A LOT with very little. But how? This conversation shares his best tools and tricks. From polyphasic sleep to Dvorak and looping music for flow, there’s something for everyone.

Last but not least, Matt is also the CEO of Automattic, which is valued at $1-billion+ and has a fully distributed team of 300+ employees around the world. I’m honored to be an advisor, and I’ve seen how they use incredibly unorthodox methods for jaw-dropping results.

But… he started off as a BBQ-chomping Texas boy with no aspirations of empire building. How on earth did he get here? Just listen and find out. It’s one hell of a story.

TF-ItunesButton TF-StitcherButton


This episode is sponsored by Onnit. I have used Onnit products for years. If you look in my kitchen or in my garage you will find Alpha BRAIN, chewable melatonin (for resetting my clock while traveling), kettlebells, maces, battle ropes, and steel clubs. It sounds like a torture chamber, and it basically is. A torture chamber for self-improvement! Ah, the lovely pain. To see a list of my favorite pills, potions, and heavy tools, click here.

This podcast is also brought to you by 99Designs, the world’s largest marketplace of graphic designers. Did you know I used 99Designs to rapid prototype the cover for The 4-Hour Body? Here are some of the impressive results. Click this link and get a free $99 upgrade. Give it a test run.

QUESTION(S) OF THE DAY: What’s the best productivity tip or tool you’ve implemented in the last year? Please let me know in the comments.

Scroll below for links and show notes…


Selected Links from the Episode

Show Notes

  • How WordPress started | The origin story
  • Defining “open source”
  • How WordPress beat their competition and how to beat the complicate-to-profit business model
  • The long-term outlook and core product characteristics that have empowered the growth of WordPress
  • Describing Automattic, and how it was founded with a purpose to kill spam
  • Experiments in polyphasic sleep, girlfriend complexities, and Dvorak typing
  • How Automattic differs from the average tech startup, and the challenges of a distributed workforce
  • Thoughts on where to draw the transparency line when running an open-source company
  • Delving into the secret benefits of tequila
  • Matt Mullenweg’s useful laptop and smartphone apps
  • Turning it around on Tim: Intermittent fasting and distilled water fasting?
  • Overworking vices, creating “de-loading” phases and saying “no” to meetings
  • Why we don’t care about the color of the bike shed
  • Musical skills that support coding and other leadership skills
  • Why Matt listens to familiar songs on loop when working
  • Hiring tips: Auditions at Automattic, why use them, and how they work
  • Matt’s view on top-grading
  • Most gifted books
  • Learning to love running
  • Answering Twitter questions: Bootstrapping vs. seed money if starting in 2015, picking a badass suit and last great purchase for less than $100
  • Packing tips
  • The story of losing an investor’s check (nearly a $400,000 mistake)
  • The story behind eating 104 Chicken McNuggets
  • First person to come to mind when you think “successful”?
  • Suggested investing books
  • The role WordPress will play in online content outside the browser (mobile apps, API, etc.) in the near future
  • Books and resources for the 20-year old entrepreneur looking to start a company
  • Stranded on a desert island? Albums and what else?
  • Advice for your 20-year old self?

People Mentioned

by Ian Robinson at February 09, 2015 10:22 PM

Dave Winer

Honoring developers and products

This began as an outline on my liveblog, but I felt the idea deserved more attention.

The Crunchies are a product of what I call the VC-based tech industry. But that's not the only tech industry. There's so much more going on here than bankers and advertising. I started making a list of awards I'd like to see, here it is, with some ideas.

Open format and protocol of the year

  • Past honorees would include HTTP and BitCoin, as examples.
  • Hackathons please include these among your commercial sponsors' APIs. It's important to build around open formats and protocols too.

Hall of Fame

  • People or products that influenced all that came after. We have a lot of catching up to do here. For me, the big ones are the C programming language, the PDP-11 machine architecture, Unix, Visicalc, 1-2-3, the Macintosh and IBM PC, Mosaic, Flickr, Twitter.
  • Like sporting halls of fame, we should also include leaders who made a difference, and journalists who covered our work intelligently and with care.

Giving back

  • Giving money to hospitals is great. But let's honor technologists who got rich and gave back to the ecosystem that produced the technology that made their work possible. This would be pure philanthropy, not embrace and extend. There is very little of this, but awarding it would create incentives for there to be more.

User trust

  • Which company gave its users the freedom to switch? The bigger the trust, the more we want to honor you.

Best commercial API

  • We hardly ever compare them. Last year I discovered there was a huge difference between Twitter's and Facebook's APIs.

February 09, 2015 06:05 PM

John Udell

Wiki creator reinvents collaboration, again

Almost 19 years ago, when I was Byte magazine's executive editor and Web developer, I received the following email:

From: Ward Cunningham
Subject: wiki
Date: May 22, 1996 at 11:23:10 AM PDT
To: Jon Udell
Jon -- So what do you think of wiki? I put it up a year and a half ago 
when HTML authoring tools were in short supply. I think it has held up 
pretty well, though I suppose its days are numbered. Still, there is 
something about it we call WikiNature that is in short supply on the net 
and in computers in general. Regards. -- Ward

What I thought when I saw that wiki was that maybe I should stay quiet about it. The Portland Pattern Repository was a collaborative effort to define the practices and principles that inform the agile software movement. And the wiki Ward mentioned -- the world's first! -- was the petri dish in which that culture grew. It was wide open. Anyone could edit a page. Those who did conversed eloquently and distilled a set of memorable patterns:


by Jon Udell at February 09, 2015 11:00 AM

Blue Sky on Mars

This Uncanny Valley of Voice Recognition

Ever since Star Trek first aired, we've held unreasonable expectations of computers. Not only do we expect them to work for free, but we expect them to listen to all of our problems.

Hey Siri, what’s the temp?
Calling “mom”.
No. Cancel.

Siri, cancel. Stop. Siri stop. Siri eat a dick.
Hi mom! Uh, I’ve missed you!

We’ve reached the Uncanny Valley of voice recognition, and everyone except these fucking computers knows what I’m talking about.


The Uncanny Valley is a term that originated from the computer animation industry. In 1992, while finishing A Bug’s Life, Pixar had to build a digital valley for Buzz Lightyear to drive his Ford® F-150™ pickup through on the way to the hospital so he could get a vasectomy. Pixar’s staff found that the valley looked uncanny, meaning it looked good, but not perfect. They ended up illustrating a crate of Campbell’s® Tomato Soup™ in the corner to make it feel a bit more canny.

The concept of The Uncanny Valley was born. If an emulation doesn’t quite match up to how a behavior works in the real world, humans can easily pick up on this discrepancy and they’ll get pissed off and start an angry hashtag on Twitter.

This never worked until now

Look, we all grew up with those ads in our Yahoo! Internet Life magazine subscriptions for that Dragon's Den voice recognition software thing that never fucking worked but they still inevitably got testimonials from lawyers in blue power shirts who beamed through their dorky microphone headsets while they happily dictated like a robot the winning litigation strategy for their client who nonetheless got caught soliciting a federal agent for counterfeit laundry detergent and also some sexual favors but they’ll get off because they “have a fraternity brother in Justice who can pull some strings”.

Yahoo! Internet Life

We laughed at that back then because we had some real dope inventions at the time: like, keyboards and shit. Have you tried one? You can put words on a screen real fast compared to flapping words through your oral meatflaps. It’s also beneficial while you’re writing your steamy 50 Shades of Grey fan fiction in your public library. I mean, to make it less creepy you could try whispering the words to your computer instead of loudly dictating them, but are you sure that doesn’t make it more creepy?

Voice recognition used to be a gimmick.

Things done changed

The last year or two have seen some real mass-market changes, though. Apple has Siri, Google has OK Google or something, Amazon has an entire standalone device in Echo, and Microsoft is pushing for voice control in Xbox One with Kinect.

The difference is that these things finally do some cool stuff. We’re not dictating litigation strategy to our secretaries; we’re interacting with our devices in real ways. It kinda blew my mind that I can walk into my living room, say “Xbox on”, and my Xbox turns on, my TV gets switched on, the input gets changed over to my Apple TV, and it’s all ready to watch by the time I reach my chair.

Voice intelligence

The problem is we’re at this uncanny valley. We want to talk to our devices like humans, but they still act like toddlers wearing headphones who only speak Portuguese or something.

If I’m playing music in the background, my Xbox has a tough time identifying what I’m saying. It’s not a mistake a human would readily make.

If you ever end up in bed with someone (congrats!) with both your iPhones plugged into the wall and one of you wakes up and asks “Hey Siri what’s the weather like today”, you now have two iPhones — in addition to any iPads laying around — dutifully responding at the same time. iPhones understand words, but they don’t understand you. It’s not a mistake a human would readily make.

When voice recognition works, it’s great, but when you have to repeat yourself or it just doesn’t understand you, the level of frustration feels much higher than other software. Why can’t you understand me, you dumb robot?

Words as UI

Part of this frustration is that the voice user interface itself is less standardized than the desktop or mobile device UI you're used to. Even the basic terminology can feel pretty inconsistent if you’re jumping back and forth between platforms.

Siri aims to be completely conversational: Do you think the freshman Congressman from California’s Twelfth deserved to sit on HUAC, and how did that impact his future relationship with J. Edgar?

Xbox One is basically an oral command line interface, of the form:

Xbox <verb> (direct object)

For example: Xbox, go to Settings. But this is not the case if it’s in “active listening” mode, in which case you drop the “Xbox” and address it directly (go to Settings). But you can’t really converse with it, because it’s functionally less capable than Siri or Google Now. The context switching is a little frustrating. On the other hand, since it's so cut-and-dried, there's less of an uncanny valley here, because I don't personify my Xbox as much as I do Siri; my Xbox just responds to commands. Funny how a different voice UI here results in a totally different experience.
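The Xbox grammar is simple enough to sketch. Here's a hypothetical illustration of the "wake word plus verb plus object" pattern described above; this is not Microsoft's implementation, and the function name and result shape are invented:

```javascript
// Hypothetical sketch of the "Xbox <verb> (direct object)" grammar above.
// Not Microsoft's code: the function name and result shape are made up.
function parseVoiceCommand(utterance, activeListening = false) {
  const words = utterance.trim().split(/\s+/);
  if (!activeListening) {
    // Normal mode: commands must be addressed with the "Xbox" wake word.
    if (words[0].toLowerCase() !== "xbox") return null;
    words.shift();
  }
  if (words.length === 0 || words[0] === "") return null;
  // First word is the verb, the rest is the direct object.
  return { verb: words[0].toLowerCase(), object: words.slice(1).join(" ") };
}

parseVoiceCommand("Xbox go to Settings");  // { verb: "go", object: "to Settings" }
parseVoiceCommand("go to Settings", true); // active listening: no wake word needed
parseVoiceCommand("turn it up");           // null: not addressed to the Xbox
```

The point of the sketch is the mode switch: the same utterance parses differently depending on whether the device is actively listening, which is exactly the context switching that gets frustrating.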

Amazon Echo

Amazon Echo’s UI is similar to Siri’s conversational form, although you’re almost always going to invoke it by saying Alexa (whereas you can bypass Hey Siri by holding down the Home button and talking normally — a beneficial side effect of having the device in-hand).

There are good reasons for all these inconsistencies. Xbox, for example, benefits from clear, directed dialogue because there are fewer functions you need once you’re sitting in front of a TV. But it’s these inconsistencies that are frustrating as you jump back and forth between devices. And we’re only going to scale this up, particularly at home, because again, when this all works it’s awesome. I want to control entire workflows in my home by voice: a single “hey, I’m heading out” might turn down my thermostat, turn off my lights, and check that my oven’s turned off.

It took decades before computing settled on the standard concepts of the GUI: the desktop metaphor, overlapping windows, scrollbars, and so on. Hopefully voice UI catches up and standardizes, too.

it’s gon b creepy

Voice recognition, if it ever crosses the chasm of the uncanny valley, is going to have to get much smarter. And I don’t mean preprogrammed jokes about where to hide a dead body, but our voice assistants are going to have to start learning about us. Building a relationship with us. Knowing us.

And that’s going to be creepy. Especially if we don't trust who's on the other end of the line. Maybe.

Her (2013)

I think the reason people liked Her (2013) so much was that it didn’t seem all that creepy. It seemed like you were gaining a friend. And it’s going to be weird at first, since it’s going to need to be always-on, always listening, and always learning from you. But if we can ever jump past this uncanny valley, that’s where we’ll basically have built AI, for all intents and purposes, and we’re going to have a friend following us around. And it’s going to make life better for us.

Well, depending on which science fiction you watch. We could all end up depressed or die, too. Hug a fan of Black Mirror or Transcendence (2014) today.

We're probably still a long, long way from crossing the Valley to our utter doom or sublime utopia, since computers are hard and voice recognition is apparently really hard. (Or at least that's what I assume; I just do fake computer science like User.find() so I wouldn't know, myself.) We're going to have to deal with this uncanny valley in the meantime. That's a little frustrating, but hey, at least we don't have to wear boom microphone headsets anymore.

Exciting stuff is afoot.

February 09, 2015 12:00 AM

February 08, 2015

Dave Winer

Help with Twitter metadata?

Here's a link to my new liveblog software (still very much in development).

Try viewing that link in Twitter's card validator.

I see an error that isn't helping me figure out what's wrong.

ERROR: FetchError:exceeded 4.seconds to Portal.Pink-constructor-safecore while waiting for a response for the request, including retries (if applicable) (Card error)

If you have experience with Twitter card metadata, do you have any ideas about how I can get this working? I'd really like to have links look as good in Twitter as they do in Facebook. (That's a requirement, not a like.)

Thanks in advance!

I'm trying to think but nothing happens!

Update -- found the problem

Marco Fabbri confirmed the metadata was valid by storing it on another server and it validated. He said to check if I was getting a call from twitterbot. His theory was my Heroku server was blacklisted because of another app running on the same physical machine. This theory turned out not to be correct, but it led me to the fix.

  1. I was getting a request from Twitterbot, for /robots.txt.

  2. My little server app wasn't doing anything special for that path, so it treated Twitterbot like a regular user: it tried to fetch its reader app, OPML file, etc., and got lost.

  3. Twitter got a timeout for the /robots.txt call.

  4. It barfed.

The fix.

  1. Add a special case for "/robots.txt" and return a 404.

Result -- twitter love!

February 08, 2015 06:28 PM

February 07, 2015

Dave Winer

The unedited voice of a person

People use blogs primarily to discuss one question -- what is a blog? The discussion will continue as long as there are blogs.

It's no different from other media, all they ever talk about is what they are. We got dinged by the NY Times because all bloggers talked about at the DNC was other bloggers. But what were they busy doing -- talking about other reporters, except when they were talking about bloggers -- talking about bloggers.

Nothing wrong with it.

In the early days we joked that they were watching us watch them watch us watch them. And so on.

In 2003, when I was beginning my stint as a fellow at Berkman Center, since I was going to be doing stuff with blogs, I felt it necessary to start by explaining what makes a blog a blog, and I concluded it wasn't so much the form, although most blogs seem to follow a similar form, nor was it the content, rather it was the voice.

If it was one voice, unedited, not determined by group-think -- then it was a blog, no matter what form it took. If it was the result of group-think, with lots of ass-covering and offense avoiding, then it's not. Things like spelling and grammatical errors were okay, in fact they helped convince one that it was unedited. (Dogma 2000 expressed this very concisely.)

Do comments make it a blog? Does the lack of comments make it not a blog? Well actually, my opinion is different from many, but it still is my opinion that it does not follow that a blog must have comments, in fact, to the extent that comments interfere with the natural expression of the unedited voice of an individual, comments may act to make something not a blog.

We already had mail lists before we had blogs. The whole notion that blogs should evolve to become mail lists seems to waste the blogs. Comments are very much mail-list-like things. A few voices can drown out all others. The cool thing about blogs is that while they may be quiet, and it may be hard to find what you're looking for, at least you can say what you think without being shouted down. This makes it possible for unpopular ideas to be expressed. And if you know history, the most important ideas often are the unpopular ones.

Me, I like diversity of opinion. I learn from the extremes. You think evolution is a liberal plot? Okay, I disagree, but I think you should have the right to say it, and further you should have a place to say it. You think global warming is a lie? Speak your mind brother. You thought the war in Iraq was a bad idea? Thank god you had a place you could say that. That's what's important about blogs, not that people can comment on your ideas. As long as they can start their own blog, there will be no shortage of places to comment. What there is always a shortage of, however, is courage to say the exceptional thing, to be an individual, to stand up for your beliefs, even if they aren't popular.

I sat next to Steven Levy the other night at dinner in NY. He volunteered that in his whole career he had never written a word that wasn't approved of by someone else, until he started a blog. I applaud him for crossing the line. I give him a lot of credit for writing without a safety net. It really is different. Comments wouldn't make the difference, what makes the difference is standing alone, with your ideas out there, with no one else to fault for those ideas. They are your responsibility, and yours alone.

For me, the big rush came when I started publishing DaveNet essays in late 1994. I would revise and edit, for an hour maybe more, before hitting the Send button. Once I did that, there was no turning back. The idea was out there, with my name on it. All the disclaimers (I called the essays "Amusing rants from Dave Winer's desktop") wouldn't help, if the ideas were bad, they were mine. But if they were good, they were mine too. That's what makes something blog-like, imho.

Note: This is a re-run of a post from 2007. On-topic in light of the "blogging is dead" debate. This is what a blog is, imho. DW

February 07, 2015 04:34 PM

Semantic Programming


In the last month, I've been catching up with where HTML5/CSS/JavaScript have gotten to, and wow, how did I miss the introduction of contenteditable="true" !!

I was reluctantly re-looking at the web as the provider of the graphical front-end after realising that it really did best fit my three main criteria for the UI: 1) independently threaded client side rendering; 2) wide adoption across many client devices; and 3) general expressiveness of possible UIs.

However, I was doing this reluctantly, as there are many aspects of the original 'post-back' model of web interactions that I have grown to detest, and I have generally felt that the JavaScript 'solutions' to fix this were like a sticky-tape-and-string hack on top of an out-of-date model for UI construction.

But most of all I dreaded the imagined process of building a credible WYSIWYG editing experience. 

Somehow I assumed that all of the tools that achieve this on the web today (Google Docs etc) were doing some kind of complex, proprietary rendering and overlays approach using JavaScript to create the impression of dynamically editing a document. I had imagined all sorts of horrors like JavaScript timers drawing the caret and removing it to give the impression of it blinking, etc.

Not for a moment did it occur to me that 'they' might have built WYSIWYG editing into the web! I started looking at all sorts of WYSIWYG plugins and libraries, unpicking the code in each to learn the complex magic that underpinned all this imagined wizardry, until I realised that all the (recent!) editors of this kind are relatively simple and rest on contenteditable="true".

And, most importantly, this feature has been widely adopted and supported and so it really is a credible route to go for building an online content editor. So, now, despite the web's other imperfections, my attitude has switched 180 degrees and I am really enjoying building a prototype web based Semprola editor!

Indeed, I now cannot imagine the horror of building the UI any other way!

by Oli at February 07, 2015 02:50 PM

February 06, 2015


Why Synereo?

A new decentralized, distributed social network is emerging, and naturally people are curious. Gideon Rosenblatt asks a key question: Why Synereo instead of Facebook?

i provide an answer here in six basic points that cover architecture and information flow consequences for resiliency, autonomy, and privacy, as well as important aspects of the user experience, user compensation and the attention economy.

TL;DR: you are the network, not the product.

1) A distributed, decentralized architecture is more resilient against certain kinds of attacks: a hacker compromising a centralized service and scooping up all the credit card data from a single database, or a government shutting down a service providing information counter to the incumbent narrative. Clearly, this is not the architecture that Facebook enjoys or promotes. It can't, because its revenue model wouldn't work well in this setting.

2) Above this architecture, Synereo provides a qualitative notion of identity. This is not about identity as a token, because that notion of identity doesn't serve individuals who have a rich, internal, multifaceted presence. What is needed is to be able to reveal enough about oneself to participate legitimately in a conversation without necessarily revealing information that is either sensitive or irrelevant. Consider a self-governance or participatory democracy process like the budget games. Were these conducted online, people might be less likely to participate if the games revealed information about their political views that made them the target of unwanted attention. In the US such issues have included healthcare reform and gay marriage. On the other hand, a government representative needs assurances that the online participants are actually registered voters in their districts and not bots. Synereo allows users to reveal enough identity-related information to participate in a variety of processes without necessarily providing personally identifiable information. If someone is pursuing employment from an employer that is ok with them working remotely, then the employment application process doesn't have to reveal the candidate's residential address. Again, this approach doesn't fit with Facebook's revenue model.

3) Synereo makes strong guarantees about never letting Synereo code see user data in the clear (unless users give explicit permission). Despite these guarantees, Synereo provides a sophisticated search mechanism that allows users to search content throughout the slice of the network visible to them. Facebook just rolled out a graph search. Many people are very concerned about what this means for privacy. If you would like to know more about how Synereo achieves this, awesome! Please contact me and i will personally explain the mechanism in as much detail as you would like. Beyond this, Synereo is based on a mathematical model of decentralized and distributed computing that allows for the specification and enforcement of information flow policies. The language for these policies is exceedingly rich -- considerably richer than friends, friends-of-friends, etc. These policies can be autonomously developed and assembled into larger policies. The policies can be checked for desirable properties. This provides the basis for smart contracts that allow for just-in-time assembly of services from subcontractors. In other words, Synereo can help groups assemble temporarily to form an organization to complete a task or provide a service. This level of sophistication just doesn't exist in Facebook. It doesn't even exist in Ethereum. The Synereo white paper describes an example of information flow policies as relates to public health and public safety. We believe Synereo's feature set makes it an ideal mechanism for participatory self-governance.

4) Synereo provides the attention economy model. This allows participants in the network to begin to get some of the reward for their participation. It's not just that the monetization of attention deployed in social networks is commonly estimated in the tens of billions of USD. Nor that Facebook users never see a penny of that. It's also that, as a result of the shift in distribution mechanism, the creative classes are under unbearable pressure. When a guitarist of the stature of David Torn gets 8USD from Pandora for hundreds of thousands of plays of one of his tunes, making a living as a guitarist becomes nearly untenable. This is not a one-time occurrence. Read Zoe Keating's article about how YouTube is treating her as they roll out their new music service. This same scenario is playing out in journalism, photography, etc. The creative classes are under siege. The attention economy provides a direct way for creative people to earn compensation for their creative outpouring. Facebook simply doesn't have such a model, and it would directly undermine their existing revenue model.

5) Synereo provides users with a new level of control over what shows up in their feed. At this level of detail we can say that Synereo identifies two basic types of entities, ports and filters. A port can either be a source of information (like one of your friends from whom you receive posts, or an RSS feed to which you are subscribed, or a mailing list to which you are subscribed, or a device on the Internet of Things, or ...) or a sink for information (like one of your friends, or an RSS feed to which you publish, or ...). A filter is something applied to the stream of information flowing into or out of a port to further refine what goes into or comes out of the stream. Notice that when you combine a filter with a port you get a new port. This kind of 'algebra' of information flow components gives users a new toolkit for controlling what they see in their feed. It also has fairly natural UI metaphors that have a long history of success as a means of conveying the presentation of streams of events. This leads us to the last point.
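The "combining a filter with a port gives a new port" algebra can be sketched in a few lines of JavaScript. This is an illustration of the idea as described, not Synereo's implementation; every name here is invented.

```javascript
// Illustration of the port/filter algebra described above (invented names,
// not Synereo code). A source port is modeled as a function that delivers
// items to a subscriber; a filter is a predicate over items.
function makePort(items) {
  return (subscriber) => items.forEach(subscriber);
}

// Attaching a filter to a port yields a new port, so the combination
// can itself be filtered again: the "algebra" in the point above.
function attachFilter(port, predicate) {
  return (subscriber) =>
    port((item) => {
      if (predicate(item)) subscriber(item);
    });
}

const friendFeed = makePort([
  { from: "alice", topic: "music" },
  { from: "alice", topic: "politics" },
  { from: "bob", topic: "music" },
]);

// A refined feed is just another port.
const musicOnly = attachFilter(friendFeed, (post) => post.topic === "music");
musicOnly((post) => console.log(post.from)); // logs "alice" then "bob"
```

Because a filtered port has the same shape as a raw port, filters compose freely, which is what makes the toolkit feel like an algebra rather than a settings page.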

6) Synereo provides new UI experiences that allow users to see more about the dynamics of information flow. As a simple example, Facebook organizes the posts of all your friends into a single interleaved stream. If you were to uninterleave this into several streams synced by timestamp, you would see how your friends' posts were related to each other in time. Picture in your mind's eye 5 timelines running left to right, one timeline for each of 5 friends. Each timeline provides an iconic representation of that friend's posts. When posts stack up in a near-vertical line, it means that your friends are posting at the same time. This gives you a way to see into the temporal dynamics of the communications of your group of friends. The reason i chose 5 is because there are 5 lines in a single musical staff. Just as the vertical stacking of notes on a staff means play these notes at the same time, i.e. play a chord, chords in the 'score' representation of timelines allow people to see how people's actions are related in time. Any 6-year-old can learn to read the temporal dynamics off of a single staff. A good conductor can read and hear in her mind 7 staves. So the surface of the human mind's inborn capacity for processing temporal dynamics is not even being scratched, let alone the depths plumbed. Facebook does not actually have a vested interest in users being able to utilize this capacity. They have a vested interest in directing attention in a way that maximizes return for their investors.

by leithaus at February 06, 2015 04:18 PM

February 05, 2015

Dave Winer

Why nodeStorage is a big deal

This is the story of nodeStorage.

In April last year I decided it was time for me to get my Twitter act together in my new JavaScript-based work environment. Back when I was working primarily in Frontier, and before the great breakup with Twitter and app developers, I had a pretty easy Twitter programming interface. I wanted the same thing for apps written in JavaScript in the browser.

It took a total of about two months from beginning to end to get it all working and to get a few apps built on top of it to prove that I had a complete interface.

Then I got interested in Facebook, and realized I'd have to do the same thing for it, and when I started I figured it would take about two months, the same amount of time I had spent on Twitter. Nope. It took two days. That's because Facebook had written a special library for browser-based JavaScript apps that hides all the details of connecting with Facebook from the browser.

This has value

At that point I realized that what I had in my glue for Twitter had value on its own. There was no other Node.js package that was as complete or easy. So I spent some time cleaning it up and adding S3-based storage (all apps need storage), and last month I released it as MIT-licensed open source.

That's nodeStorage.

Why it's a big deal

  1. It makes Twitter as easy to program in browser-based JavaScript as Facebook.

  2. It adds storage, which even Facebook doesn't offer.

It takes the Twitter API, which was significantly less easy to use than Facebook's, gives it parity, and adds an essential feature, storage, making app development on top of the Twitter API incredibly easy and, most important, complete for app-building.

Now, I understand some people feel burned by Twitter, and don't want to risk building on its API, but nodeStorage takes a lot of the risk out of it. And I don't think today's Twitter is as concerned about app developers as the earlier version was.

Anyway, that's the story! If you're looking for an easy way to get started with the Twitter API and you can deploy a Node.js app, then nodeStorage is probably what you're looking for.

February 05, 2015 11:04 PM

February 04, 2015

Greg Linden

Quick links

What has caught my attention lately:
  • "Ads are often annoying ... [and] the practice of running annoying ads can cost more money than it earns" ([1] [2] [3])

  • Robot plays beer pong, but the real story is the clever bean bag robotic gripper using the "jamming phase transition of granular materials" ([1] [2] [3])

  • Good list of features a modern phone should have but does not ([1])

  • "At this point, Apple is basically an iPhone company with a few other side businesses ... The iPhone accounted for ... a staggering 69 percent ... of Apple's revenue." ([1])

  • "We were not building the phone for the customer — we were building it for Jeff [Bezos]" ([1] [2])

  • "One of the biggest problems in organizations is that the meeting is a tool that is used to diffuse responsibility" ([1] [2])

  • Pew poll on how opinions of US scientists differ from the US population, and public's perceptions of scientists ([1])

  • Pair a "brash, young scientist" with a "wiser, older scientist" to maximize innovation ([1] [2] [3])

  • Google Earth Pro is now free, lets you get high res stills and movies of anywhere on the planet ([1] [2])

  • People told a placebo was "expensive" had twice the improvement as measured by physical tests and brain scans ([1])

  • Blind men successfully train themselves to "see" using echolocation, and brain scans determine that they are using the otherwise unused visual centers of their brains to do so ([1] [2] [3] [4] [5])

  • Rather than modeling crowds with attraction and repulsion between agents, only avoiding anticipated collisions behaves closer to real humans ([1])

  • Xkcd comic: "I can't wait for the day when all my stupid computer knowledge is obsolete" ([1])

  • Xkcd What If: "Getting to space is easy. The problem is staying there." ([1])

by Greg Linden at February 04, 2015 06:29 PM

Lambda the Ultimate

A theory of changes for higher-order languages — incrementalizing λ-calculi by static differentiation

The project Incremental λ-Calculus is just starting (compared to more mature approaches like self-adjusting computation), with a first publication last year.

A theory of changes for higher-order languages — incrementalizing λ-calculi by static differentiation
Paolo Giarrusso, Yufei Cai, Tillmann Rendel, and Klaus Ostermann. 2014

If the result of an expensive computation is invalidated by a small change to the input, the old result should be updated incrementally instead of reexecuting the whole computation. We incrementalize programs through their derivative. A derivative maps changes in the program’s input directly to changes in the program’s output, without reexecuting the original program. We present a program transformation taking programs to their derivatives, which is fully static and automatic, supports first-class functions, and produces derivatives amenable to standard optimization.

We prove the program transformation correct in Agda for a family of simply-typed λ-calculi, parameterized by base types and primitives. A precise interface specifies what is required to incrementalize the chosen primitives.

We investigate performance by a case study: we implement the program transformation in Scala as a plugin, and improve the performance of a nontrivial program by orders of magnitude.

I like the nice dependent types: a key idea of this work is that the "diffs" possible from a value v do not live in some common type diff(T), but rather in a value-dependent type diff(v). Intuitively, the empty list and a non-empty list have fairly different types of possible changes. This makes change-merging and change-producing operations total, and allow to give them a nice operational theory. Good design, through types.

(The program transformation seems related to the program-level parametricity transformation. Parametricity abstracts over equality justifications; differentiation abstracts over small differences.)
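A toy example makes the derivative idea concrete. This is a hand-written JavaScript analogue, not the paper's Agda formalization or its automatic transformation: the derivative of a function maps a change to the input directly to a change in the output.

```javascript
// Hand-written analogue of the paper's idea (the real work derives these
// automatically): a derivative maps input changes straight to output changes.
const sum = (xs) => xs.reduce((acc, x) => acc + x, 0);

// A change to a bag of numbers: some elements added, some removed.
// dsum is the derivative of sum: it never re-walks the original input.
const dsum = (change) => sum(change.added) - sum(change.removed);

const xs = [1, 2, 3, 4];
const base = sum(xs);                        // computed once: 10
const change = { added: [5], removed: [2] };
const updated = base + dsum(change);         // 13, same as sum([1, 3, 4, 5])
```

The value-dependent change types discussed above would rule out nonsense changes (say, removing an element the bag doesn't contain), which this untyped sketch happily accepts.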

February 04, 2015 10:00 AM

February 03, 2015

Dave Winer

I have time in SF today

There wasn't much response to this post, so I won't be doing the office hours this afternoon in the city. Thanks to those who did respond. I'm always happy to look at products created by people who read this site, so please feel free to send me links via email. Thanks!

I'm thinking of holding informal "office hours" at a coffee place south of market this afternoon. If you have an interest, we could talk about your development project (esp if it's JavaScript) or talk about open formats and protocols, or my various projects, or other tech stuff. Thinking mainly of Scripting News readers. And please no trolls. If you have an interest, send me an email --

February 03, 2015 04:54 PM

February 02, 2015

Blaine Buxton

Why Smalltalk is the productivity king

I've been thinking about why I'm so much more productive in Smalltalk than in any other language, because I'm curious whether some of it could be brought to other languages. So, what makes Smalltalk so special?

  • Incremental compilation. There is no cognitive drift. Compilation happens at the method level when one saves the code. It's automatically linked in and can be executed. Smalltalkers enjoy programming in the debugger and changing code in a running program. The concept of having to restart an entire application is foreign to a Smalltalker. The application is always alive and running. In other languages, you code while the application is not running. Programming a live application is an amazing experience. I'm shocked that it's hard to find a language that supports it. Java running the OSGi framework is the only example I can think of. But, one still has to compile a bundle (which is larger than a method).
  • Stored application state. Smalltalkers call it the image. At any point in time, you can save the entire state of the application even with a debugger open. I've saved my image at the end of the day so that I could be at that exact moment the next morning. I've also used it to share a problem that I'm having with another developer so they can see the exact state. It takes less than a second to bring up an image. It has the current running state and compiled code. One never spends time waiting for compiles or applications to start up.
  • Self contained. All of the source code is accessible inside the image. One can change any of it at any time. Every tool is written in Smalltalk. If one doesn't like how something works, one can change it. If one wants to add a feature, one is empowered to. Since everything is open, one can even change how the language works. It's programming without constraints. The original refactoring tools were written in Smalltalk.
  • Freedom from files. Smalltalk stores its code in its own database. The shackles of the file system make compilation and version control trickier. Smalltalk can incrementally index code to make searches quick and efficient. The structure of the code is not forced to fit into the file system mold.
Now, the question one has to ask is why don't we have these features in languages that we use now? Personally, I would love to be able to keep my application running and change the code as it runs. I just want to end productivity lost to application restarts and compilations.
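JavaScript can at least approximate one slice of this: redefining code in a running program. A rough sketch, nothing like a full image (which also persists the running state itself), but it shows live instances picking up new behavior without a restart:

```javascript
// Rough JavaScript analogue of changing code in a running program:
// replace a method on the prototype and every live instance sees the
// new behavior immediately, no restart. (A far cry from a Smalltalk
// image, which also saves and restores the running state itself.)
class Greeter {
  greet(name) {
    return "Hello, " + name;
  }
}

const live = new Greeter(); // a "running" instance
live.greet("Ada");          // "Hello, Ada"

// Redefine the method while the instance is alive:
Greeter.prototype.greet = function (name) {
  return "Ciao, " + name;
};

live.greet("Ada");          // "Ciao, Ada": the live object changed behavior
```

This works because method lookup happens at call time through the prototype chain, which is a small taste of the late binding that makes Smalltalk's programming-in-the-debugger workflow possible.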

by Blaine Buxton at February 02, 2015 09:21 PM

Dave Winer

To Seahawks fans re 12th Man

Seahawks fans -- remember the 12th man concept?

That means there is no "they."

Maybe it was a dumb call. It's a test. Do you own it, or complain like a mere fan? If you're on board, there's always next year.

Spoken as a Mets/Knicks fan, with no illusions about what that means.

February 02, 2015 03:20 PM

Tim Ferriss



In this episode, I interview the one and only Arnold Schwarzenegger… at his kitchen table.

First off, he wants to invite you to LA to blow sh*t up with him in person. Seriously. Here’s how.

In our conversation, we dig into lessons learned, routines, favorite books, and much more, including many stories I’ve never heard anywhere else.  I’m also giving away amazing goodies for this episode, so be sure to read this entire post.

As a starting point, we cover:

  • The Art of Psychological Warfare, and How Arnold Uses It to Win
  • How Twins Became His Most Lucrative Movie (?!?)
  • Mailing Cow Balls to Politicians
  • How Arnold Made Millions — Fresh Off The Boat — BEFORE His Acting Career Took Off
  • How Arnold Used Meditation For One Year To Reset His Brain
  • And Much More…




1) A signed copy of Arnold’s autobiography, Total Recall, personalized for you by Arnold himself.

2) A roundtrip ticket anywhere in the world Continental flies, $1,000 USD cold hard cash, or a long dinner with me in SF (and a flight from anywhere in the domestic US).  Pick one of the three.

To get both 1 and 2, all you have to do is this:

1) Promote the hell out of this episode this week, driving clicks to the iTunes page (ideal) for my podcast, this direct streaming link, or this blog post. If helpful, the shorter link forwards to this blog post.

2) Leave a comment on this post telling me what you did (including anything quantifiable), no later than this Friday, Feb 6, at 6pm PT. Comments must be submitted by 6pm PT. It’s OK if they’re in moderation and don’t appear live before 6pm. Note: You must include #arnoldpod at the top of your comment to be considered! 

3) Within 7 days hence, I and my panel of magic elves will select the winner: he or she who describes in their comment how they drove the most downloads/listens.

4) That’s it! Remember: Deadline is 6pm PT this Friday, Feb 6.  No extensions.

5) Of course, void where prohibited, no purchase required, you must be over 21, no minotaurs, etc.


This episode is sponsored by Onnit. I have used Onnit products for years. If you look in my kitchen or in my garage you will find Alpha BRAIN, chewable melatonin (for resetting my clock while traveling), kettlebells, maces, battle ropes, and steel clubs. It sounds like a torture chamber, and it basically is. A torture chamber for self-improvement! Ah, the lovely pain. To see a list of my favorite pills, potions, and heavy tools, click here.

This podcast is also brought to you by 99Designs, the world’s largest marketplace of graphic designers. Did you know I used 99Designs to rapid-prototype the cover for The 4-Hour Body? Here are some of the impressive results. Click this link and get a free $99 upgrade. Give it a test run.

Scroll below for links and show notes…


Selected Links from the Episode

Sample People Mentioned


by Ian Robinson at February 02, 2015 10:20 AM

February 01, 2015

Giles Bowkett

Why Panda Strike Wrote the Fastest JSON Schema Validator for Node.js

Update: Another project is even faster!

After reading this post, you will know:

Because this is a very long blog post, I've followed the GitHub README convention of making every header a link.

Those who do not understand HTTP are doomed to re-implement it on top of itself

Not everybody understands HTTP correctly. For instance, consider the /chunked_upload endpoint in the Dropbox API:

Uploads large files to Dropbox in multiple chunks. Also has the ability to resume if the upload is interrupted. This allows for uploads larger than the /files_put maximum of 150 MB.

Since this is an alternative to /files_put, you might wonder what the deal is with /files_put.

Uploads a file using PUT semantics. Note that this call goes to instead of

The preferred HTTP method for this call is PUT. For compatibility with browser environments, the POST HTTP method is also recognized.

To be fair to Dropbox, "for compatibility with browser environments" refers to the fact that, of the people I previously mentioned - the ones who do not understand HTTP - many have day jobs where they implement the three major browsers. I think "for compatibility with browser environments" also refers to the related fact that the three major browsers often implement HTTP incorrectly. Over the past 20 years, many people have noticed that their lives would be less stressful if the people who implemented the major browsers understood the standards they were implementing.

Consider HTTP Basic Auth. It's good enough for the GitHub API. Tons of people are perfectly happy to use it on the back end. But nobody uses it on the front end, because browsers built a totally unnecessary restriction into the model - namely, a hideous and unprofessional user experience. Consequently, people have been manually rebuilding their own branded, styled, and usable equivalents to Basic Auth for almost every app, ever since the Web began.

By pushing authentication towards the front end and away from an otherwise perfectly viable aspect of the fundamental protocol, browser vendors encouraged PHP developers to handle cryptographic issues, and discouraged HTTP server developers from doing so. This was perhaps not the most responsible move they could have made. Also, the total dollar value of the effort expended to re-implement HTTP Basic Auth on top of HTTP, in countless instances, over the course of twenty years, is probably an immense amount of money.

Returning to Dropbox, consider this part here again:

Uploads large files to Dropbox in multiple chunks. Also has the ability to resume if the upload is interrupted. This allows for uploads larger than the /files_put maximum of 150 MB.

Compare that to the Accept-Ranges header, from the HTTP spec:

One use case for this header is a chunked upload. Your server tells you the acceptable range of bytes to send along, your client sends the appropriate range of bytes, and you thereby chunk your upload.
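To make that concrete, here is a minimal sketch of that exchange, expressed with standard HTTP headers. The function name `buildChunkHeaders` is mine, purely for illustration; the header names are the standard ones:

```javascript
// Server response advertising that it accepts byte-range operations:
const serverHeaders = { 'Accept-Ranges': 'bytes' };

// Client describes which slice of the file this request carries. This is the
// bookkeeping Dropbox performs in a JSON payload, moved into headers instead.
function buildChunkHeaders(offset, chunkLength, totalLength) {
  const last = offset + chunkLength - 1;
  return {
    'Content-Length': String(chunkLength),
    'Content-Range': `bytes ${offset}-${last}/${totalLength}`,
  };
}

console.log(buildChunkHeaders(0, 4194304, 157286400));
// { 'Content-Length': '4194304', 'Content-Range': 'bytes 0-4194303/157286400' }
```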

Dropbox decided to take exactly this approach, with the caveat that the Dropbox API communicates an acceptable range of bytes using a JSON payload instead of an HTTP header.

Typical usage:

  • Send a PUT request to /chunked_upload with the first chunk of the file without setting upload_id, and receive an upload_id in return.
  • Repeatedly PUT subsequent chunks using the upload_id to identify the upload in progress and an offset representing the number of bytes transferred so far.
  • After each chunk has been uploaded, the server returns a new offset representing the total amount transferred.
  • After the last chunk, POST to /commit_chunked_upload to complete the upload.
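The steps above can be sketched as a client-side loop. This is illustrative only: `putChunk` stands in for the real network call to /chunked_upload, and here it just simulates the server's bookkeeping (an upload_id plus the new offset):

```javascript
let nextUploadId = 1;
const uploads = new Map();

// Simulated server endpoint: accepts a chunk, returns upload_id and the
// total number of bytes received so far.
function putChunk(chunk, uploadId) {
  if (uploadId === undefined) {
    uploadId = String(nextUploadId++);
    uploads.set(uploadId, 0);
  }
  const received = uploads.get(uploadId) + chunk.length;
  uploads.set(uploadId, received);
  return { upload_id: uploadId, offset: received };
}

function chunkedUpload(file, chunkSize) {
  let uploadId;
  let offset = 0;
  while (offset < file.length) {
    const chunk = file.slice(offset, offset + chunkSize);
    const res = putChunk(chunk, uploadId);
    uploadId = res.upload_id;
    offset = res.offset; // trust the server's count, not our own
  }
  return { upload_id: uploadId, bytes: offset }; // then POST /commit_chunked_upload
}

console.log(chunkedUpload('x'.repeat(10), 4)); // { upload_id: '1', bytes: 10 }
```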

Google Maps does something similar with its API. It differs from the Dropbox approach in that, instead of an endpoint, it uses a CGI query parameter. But Google Maps went a little further than Dropbox here. They decided that ignoring a perfectly good HTTP header was not good enough, and instead went so far as to invent new HTTP headers which serve the exact same purpose:

To initiate a resumable upload, make a POST or PUT request to the method's /upload URI, including an uploadType=resumable parameter:


For this initiating request, the body is either empty or it contains the metadata only; you'll transfer the actual contents of the file you want to upload in subsequent requests.

Use the following HTTP headers with the initial request:

  • X-Upload-Content-Type. Set to the media MIME type of the upload data to be transferred in subsequent requests.
  • X-Upload-Content-Length. Set to the number of bytes of upload data to be transferred in subsequent requests. If the length is unknown at the time of this request, you can omit this header.
  • Content-Length. Set to the number of bytes provided in the body of this initial request. Not required if you are using chunked transfer encoding.
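Assembling the initiating request's headers from those rules might look like this sketch; the function name is mine, while the header names come from the quoted docs:

```javascript
function initiatingHeaders(mimeType, uploadBytes, metadataBody) {
  const headers = {
    // Describes the metadata body of this initial request:
    'Content-Length': String(Buffer.byteLength(metadataBody)),
    // Describes the upload that will follow in subsequent requests:
    'X-Upload-Content-Type': mimeType,
  };
  // Omitted when the final length is unknown at initiation time.
  if (uploadBytes !== undefined) {
    headers['X-Upload-Content-Length'] = String(uploadBytes);
  }
  return headers;
}

console.log(initiatingHeaders('image/jpeg', 2000000, '{"name":"photo.jpg"}'));
```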

It's possible that the engineers at Google and Dropbox know some limitation of Accept-Ranges that I don't. They're great companies, of course. But it's also possible they just don't know what they're doing, and that's my assumption here. If you've ever been to Silicon Valley and met some of these people, you're probably already assuming the same thing. Hiring great engineers is very difficult, even for companies like Google and Dropbox. Netflix holds terrific scaling challenges and its engineers are still only human.

Anyway, combine this with the decades-running example of HTTP Basic Auth, and it becomes painfully obvious that those who do not understand HTTP are doomed to re-implement it on top of itself.

If you're a developer who understands HTTP, you've probably seen many similar examples already. If not, trust me: they're out there. And this widespread propagation of HTTP-illiterate APIs imposes unnecessary and expensive problems in scaling, maintenance, and technical debt.

One example: you should version with the Accept header, not your URI, because:

Tying your clients into a pre-set understanding of URIs tightly couples the client implementation to the server; in practice, this makes your interface fragile, because any change can inadvertently break things, and people tend to like to change URIs over time.
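A minimal sketch of Accept-header versioning; the vendor media type `application/vnd.example.v2+json` is a made-up example, not from any real API:

```javascript
function apiVersion(acceptHeader) {
  const match = /application\/vnd\.example\.v(\d+)\+json/.exec(acceptHeader || '');
  return match ? Number(match[1]) : 1; // default to v1 for plain clients
}

// The URI stays stable; only the negotiated representation changes:
console.log(apiVersion('application/vnd.example.v2+json')); // 2
console.log(apiVersion('application/json'));                // 1
```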

But this opens up some broader questions about APIs, so let's take a step back for a second.

APIs and JSON Schema

If you're working on a modern web app, with the usual message queues and microservices, you're working on a distributed system.

Not long ago, a company had a bug in their app, which was a modern web app, with the usual message queues and microservices. In other words, they had a bug in their distributed system. Attempts to debug the issue turned into meetings to figure out how to debug the issue. The meetings grew bigger and bigger, bringing in more and more developers, until somebody finally discovered that one microservice was passing invalid data to another microservice.

So a Panda Strike developer told this company about JSON Schema.

Distributed systems often use schemas to prevent small bugs in data transmission from metastasizing into paralyzing mysteries or dangerous security failures. The Rails and Rubygems YAML bugs of 2013 provide a particularly alarming example of how badly things can go wrong when a distributed system's input is not type-safe. Rails used an attr_accessible/attr_protected system for most of its existence - at least as early as 2005 - but switched to its new "strong parameters" system with the release of Rails 4 in 2013.

Here's some "strong parameters" code:

This line in particular stands out as an odd choice for a line of code in a controller:

params.require(:email).permit(:first_name, :last_name, :shoe_size)

With verbs like require and permit, this is basically a half-assed, bolted-on implementation of a schema. It's a document, written in Ruby for some insane reason, located in a controller file for some even more insane reason, which articulates what data's required, and what data's permitted. That's a schema. attr_accessible and attr_protected served a similar purpose more crudely - the one defining a whitelist, the other a blacklist.

In Rails 3, you defined your schema with attr_accessible, which lived in the model. In Rails 4, you use "strong parameters," which go in the controller. (In fact, I believe most Rails developers today define their schema in Ruby twice - via "strong parameters," for input, and via ActiveModel::Serializer, for output.) When you see people struggling to figure out where to shoehorn some functionality into their system, it usually means they haven't figured out what that functionality is.

But we know it's a schema. So we can make more educated decisions about where to put it. In my opinion, whether you're using Rails or any other technology, you should solve this problem by providing a schema for your API, using the JSON Schema standard. Don't put schema-based input-filtering in your controller or your model, because data which fails to conform to the schema should never even reach application code in the first place.

There's a good reason that schemas have been part of distributed systems for decades. A schema formalizes your API, making life much easier for your API consumers - which realistically includes not only all your client developers, but also you yourself, and all your company's developers as well.

JSON Schema is great for this. JSON Schema provides a thorough and extensible vocabulary for defining the data your API can use. With it, any developer can very easily determine if their data's legit, without first swamping your servers in useless requests. JSON Schema's on draft 4, and draft 5 is being discussed. From draft 3 onwards, there's an automated test suite which anyone can use to validate their validators; JSON Schema is in fact itself a JSON schema which complies with JSON Schema.

Here's a trivial JSON Schema schema in CSON, which is just CoffeeScript JSON:
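A trivial schema of that sort, sketched here as a plain JavaScript object (CSON is just CoffeeScript's notation for the same structure); the field names are illustrative:

```javascript
// A trivial draft-4-style schema: an object requiring an email string.
const userSchema = {
  type: 'object',
  required: ['email'],
  properties: {
    email: { type: 'string' },
    shoe_size: { type: 'number' },
  },
};

// A toy check of the kind a real validator generalizes:
function roughlyValid(doc) {
  return userSchema.required.every((k) => k in doc) &&
    Object.keys(doc).every((k) => {
      const prop = userSchema.properties[k];
      return prop !== undefined && typeof doc[k] === prop.type;
    });
}

console.log(roughlyValid({ email: 'a@b.com', shoe_size: 42 })); // true
console.log(roughlyValid({ shoe_size: 42 }));                   // false
```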

One really astonishing benefit of JSON Schema is that it makes it possible to create libraries which auto-generate API clients from JSON Schema definitions. Panda Strike has one such library, called Patchboard, which we've had terrific results with, and which I hope to blog about in future. Heroku also has a similar technology, written in Ruby, although their documentation contains a funny error:

We’ve also seen interest in this toolchain from API developers outside of Heroku, for example [reference customer]. We’d love to see more external adoption of this toolkit and welcome discussion and feedback about it.

That's an actual quote. Typos aside, JSON Schema makes life easier for ops at scale, both in Panda Strike's experience, and apparently in Heroku's experience as well.

JSON Schema vs Joi's proprietary format

However, although JSON Schema's got an active developer and user community, Walmart Labs has also had significant results with their Joi project, which leverages the benefits of an API schema, but defines that schema in JavaScript rather than JSON. Here's an example:

As part of the Hapi framework, Joi apparently powered 2013 Black Friday traffic for Walmart very successfully.

Hapi was able to handle all of Walmart mobile Black Friday traffic with about 10 CPU cores and 28Gb RAM (of course we used more but they were sitting idle at 0.75% load most of the time). This is mind blowing traffic going through VERY little resources.

(The Joi developers haven't explicitly stated what year this was, but my guess is 2013, because this quote was available before Black Friday this past year. Likewise, we don't know exactly how many requests they're talking about here, but it's pretty reasonable to assume "mind-blowing traffic" means a lot of traffic. And it's pretty reasonable to assume they were happy with Joi on Black Friday 2014 as well.)

I love this success story because it validates the general strategy of schema validation with APIs. But at the same time, Joi's developers aren't fans of JSON Schema.

On json-schema - we don't like it. It is hard to read, write, and maintain. It also doesn't support some of the relationships joi supports. We have no intention of supporting it. However, hapi will soon allow you to use whatever you want.

At Panda Strike, we haven't really had these problems, and JSON Schema has a couple advantages that Joi's custom format lacks.

The most important advantage: multi-language support. JSON's universality is quickly making it the default data language for HTTP, which is the default data transport for more or less everything in the world built after 1995. Defining your API schema in JSON means you can consume and validate it in any language you wish.

It might even be fair to leave off the "JS" and call it ON Schema, because in practice, JSON Schema validators will often allow you to pass them an object in their native languages. Here's a Ruby example:

This was not JSON; this was Ruby. In this example, you still have to use strings, but it'd be easy to circumvent that, in the classic Rails way, with the ActiveSupport library. Similar Python examples also exist. If you've built something with Python and JSON Schema, and you decide to rebuild in Ruby, you won't have to port the schema.

Crazy example, but it's equally true for Clojure, Go, or Node.js. And it's not at all difficult to imagine that a company might port services from Python or Ruby to Clojure, Go, or Node, especially if speed's essential for those services. At a certain point in a project's lifecycle, it's actually quite common to isolate some very specific piece of your system for a performance boost, and to rewrite some important slice of your app as a microservice, with a new focus on speed and scalability. Because of this, it makes a lot of sense to decouple an API's schema from the implementation language for any particular service which uses the API.

JSON Schema's universality makes it portable in a way that Joi's pure JavaScript schemas cannot achieve. (This is also true for the half-implemented pure-Ruby schemas buried inside Rails's "strong parameters" system.)

Another fun use case for JSON Schema: describing valid config files for any service written in any language. This might be annoying for those of you who prefer writing your config files in Ruby, or Clojure, or whatever language you prefer, but it has a lot of practical utility. The most obvious argument for JSON Schema is that it's a standard, which has a lot of inherent benefits, but the free bonus prize is that it's built on top of an essentially universal data description language.

And one final quibble with Joi: it throws some random, miscellaneous text munging into the mix, which doesn't make perfect sense as part of a schema validation and definition library.

JSCK: Fast as fuck

If it seems like I'm picking on Joi, there's a reason. Panda Strike's written a very fast JSON Schema validator, and in terms of performance, Joi is its only serious competitor.

Discussing a blog post on Cosmic Realms which benchmarked JSON Schema validators and found Joi to be too slow, a member of the Joi community said this:

Joi is actually a lot faster, from what I can tell, than any json schema validator. I question the above blog's benchmark and wonder if they were creating the joi schema as part of the iteration (which would be slower than creating it as setup).

The benchmark in question did make exactly that mistake in the case of JSV, one of the earliest JSON Schema validators for Node.js. I know this because Panda Strike built another of the very earliest JSON Schema validators for Node. It's called JSCK, and we've been benchmarking JSCK against every other Node.js JSON Schema validator we can find. Not only is it easily the fastest option available, in some cases it is faster by multiple orders of magnitude.

We initially thought that JSV was one of these cases, but we double-checked to be sure, and it turns out that the JSV README encourages the mistake of re-creating the schema on every iteration, as opposed to only during setup. We had thought JSCK was about 10,000 times faster than JSV, but when we corrected for this, we found that JSCK was only about 100 times faster.

(I filed a pull request to make the JSV README clearer, to prevent similar misunderstandings, but the project appears to be abandoned.)

So, indeed, the Cosmic Realms benchmarks do under-represent JSV's speed in this way, which means it's possible they under-represent Joi's speed in the same way also. I'm not actually sure. I hope to investigate in future, and I go into some relevant numbers further down in this blog post.

However, this statement seems very unlikely to me:

Joi is actually a lot faster, from what I can tell, than any json schema validator.

It is not impossible that Joi might turn out to be a few fractions of a millisecond faster than JSCK, under certain conditions, but Joi is almost definitely not "a lot faster" than JSCK.

Let's look at this in more detail.

JSCK benchmarks

The Cosmic Realms benchmarks use a trivial example schema; our benchmarks for JSCK use a trivial schema too, but we also use a more medium-complexity schema, and a very complex schema with nesting and other subtleties. We used a multi-schema benchmarking strategy to make the data more meaningful.

I'm going to show you these benchmarks, but first, here's the short version: JSCK is the fastest JSON Schema validator for Node.js - for both draft 3 and draft 4 of the spec, and for all three levels of complexity that I just mentioned.

Here's the long version. It's a matrix of libraries and schemas. We present the maximum, minimum, and median number of validations per second, for each library, against each schema, with the schemas organized by their complexity and JSON Schema draft. We also calculate the relative speed of each library, which basically means how many times slower than JSCK a given library is. For instance, in the chart below, json-gate is 3.4x to 3.9x slower than JSCK.
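The "relative speed" column can be computed as each library's median validations-per-second divided into JSCK's. The numbers below are invented for illustration, not results from the actual benchmarks:

```javascript
// 3.4 here would mean "3.4x slower than JSCK".
function relativeSpeed(jsckMedian, otherMedian) {
  return jsckMedian / otherMedian;
}

console.log(relativeSpeed(34000, 10000).toFixed(1)); // '3.4'
```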

The jayschema results are an outlier, but JSCK is basically faster than anything.

When Panda Strike first created JSCK, few other JSON Schema validation libraries existed for Node.js. Given how many alternatives have appeared since, it's pretty exciting to see that JSCK remains the fastest option.

However, if you're also considering Joi, my best guess is that, for trivial schemas, Joi is about the same speed as JSCK, which is obviously pretty damn fast. I can't currently say anything about its relative performance on complex schemas, but I can say that much.

Here's why. There's a project called enjoi which automatically converts trivial JSON Schemas to Joi's format. It ships with benchmarks against tv4. The benchmarks run a trivial schema, and this is how they look on my box:

tv4 vs joi benchmark:

tv4: 22732 operations/second. (0.0439918ms)
joi: 48115 operations/second. (0.0207834ms)

For a trivial draft 4 schema, Joi is more than twice as fast as tv4. Our benchmarks show that for trivial draft 4 schemas, JSCK is also more than twice as fast as tv4. So, until I've done further investigation, I'm happy to say they look to be roughly the same speed.

However, JSCK's speed advantage over tv4 increases to 5x with a more complex schema. As far as I can tell, nobody's done the work to translate a complex JSON Schema into Joi's format and benchmark the results. So there's no conclusive answer yet for the question of how Joi's speed holds up against greater complexity.

Also, of course, these specific results are dependent on the implementation details of enjoi's schema translation, and if you make any comparison between Joi and a JSON Schema validator, you should remember there's an apples-to-oranges factor.

Nonetheless, JSCK is very easily the fastest JSON Schema validator for Node.js, and although Joi might be able to keep up in terms of performance, a) it might not, and b) either way, its format locks you into a specific language, whereas JSON Schema gives you wide portability and an extraordinary diversity of options.

We are therefore very proud to recommend that you use JSCK if you want fast JSON Schema validation in Node.js.

I'm doing a presentation about JSCK at ForwardJS in early February. Check it out if you're in San Francisco.

by Giles Bowkett ( at February 01, 2015 04:33 PM

January 30, 2015

Giles Bowkett

ForwardJS Next Week: JSCK Talk, Free Tickets

I'll be giving a talk on JSCK at ForwardJS next week. And I have two free tickets to give away — first come, first served.

I'm looking forward to this conf a lot, mostly because there's a class on functional programming in JavaScript. I know there are skeptics, but I recently read Fogus's book about that and it was pretty great.

by Giles Bowkett ( at January 30, 2015 10:50 AM

January 29, 2015

Ian Bicking

Encouraging Positive Engagement

In my last post on management I talked about a Manager Tools series, and summarized it as:

The message in these podcasts is: it is your responsibility as a manager to support the company’s decisions. Not just to execute on them, but to support them, to communicate that support, and if you disagree then you must hide that disagreement in the service of the company. You can disagree up — though even that is fraught with danger — but you can’t disagree down. You must hold yourself apart from your team, putting a wall between you and your team. To your team you are the company, not a peer.

I’m not endorsing that approach, but I’m also not sure they are wrong. In the comments on the post and on Hacker News that idea got a lot of pushback, including from people who followed up and listened to the original podcasts. Listening to those podcasts made me feel very uncomfortable, and I wrote that post immediately after listening to the podcasts.

I shared a particular instance where I felt I had to apply this principle. But thinking about this more, and talking about it with my reports, I have a better feeling for how I want to approach this question.

I think the “always be honest” approach that was widely advocated is terribly simplistic. Honesty doesn’t mean saying “hi, how are you doing? That shirt is incredibly ugly.” You might have thoughts, but it isn’t dishonest to hold your tongue. Each of us already considers what we say and how we say it. As a manager, and in a position of leadership, your words have greater impact. It is wise to put in a bit more consideration, especially around certain topics.

That said, I don’t think I need to agree with every choice that the company makes. I don’t have to offer up disagreement, but I do get asked, and should answer honestly. It is my responsibility to help my reports engage positively with the larger institution. That’s a constant: even if everything is totally fucked up, it’s still the right thing to engage positively with circumstances. Otherwise you should leave. But that’s ultimatum-talk, most of the impact is in the margins: engaging more positively in all your actions.

In my position I can sabotage this engagement. What I might see as simple “disagreement” has the potential to undermine whatever good may come out of a decision, and so I have to be careful. For instance, it’s easy in disagreement to telegraph (even unintentionally) a belief that a policy should be ignored, or that feet-dragging is politically advantageous for the team, or that the team should sandbag.

So what if something happens that I really disagree with? Until I’ve thought it through I should probably keep my mouth shut. This requires a degree of humility (first, heal thyself). I have to figure out how I can engage positively with these new circumstances. This might be a lonely exercise, sandwiched above by a decision I disagree with and below by reports I must withhold myself from. But I have to work through this – people treat opinions as though they are immutable, as though it is dishonest or even duplicitous if you do not stick with your first reaction. There is an arrogance in this (of course in management you also have to cultivate sufficient arrogance to tell people what to do). And so it is a real challenge to find the humility to genuinely change your mind about something, or change your perspective. But I don’t think a manager has to completely align themselves with company decisions, they don’t have to paste a smile on and say “everything is great!” The manager has to do good work in a new situation, and that means helping your reports do good work. Pasted on smiles are superfluous.

by Ian Bicking at January 29, 2015 06:00 AM

January 28, 2015

Reinventing Business

Slide-Deck Summary of "Reinventing Organizations"

Here's a very nice overview of the ideas from Frederic Laloux's Reinventing Organizations. I'm about one third of the way through the book and am savoring it. It's been rewiring my brain, and now it's even harder to watch traditional industrial-age organizations in action.

by (Bruce Eckel) at January 28, 2015 05:42 PM

January 27, 2015

Giles Bowkett

Superhero Comics About Modern Tech

There are two great superhero comics which explore social media and NSA surveillance in interesting ways.

I really think you should read these comics, if you're a programmer. Programming gives you incredible power to shape the way the world is changing, as code takes over nearly everything. But both the culture around programming, and the educations which typically shape programmers' perspectives, emphasize technical details at the expense of subjects like ethics, history, anthropology, and psychology, leading to incredibly obvious and idiotic mistakes with terrible consequences. With great power comes great responsibility, but at Google and Facebook, with great free snacks come great opportunities for utterly unnecessary douchebaggery.

A lot of people in the music industry talk about Google as evil. I don’t think they are evil. I think they, like other tech companies, are just idealistic in a way that works best for them... The people who work at Google, Facebook, etc can’t imagine how everything they make is not, like, totally awesome. If it’s not awesome for you it’s because you just don’t understand it yet and you’ll come around. They can’t imagine scenarios outside their reality and that is how they inadvertently unleash things like the algorithmic cruelty of Facebook’s yearly review (which showed me a picture I had posted after a doctor told me my husband had 6-8 weeks to live).

Fiction exists to explore issues like these, and in particular, fantastical fiction like sci-fi and superhero comics is extremely useful for exploring the impact of new technologies on a society. This is one of the major reasons fiction exists and has value, and these comics are doing an important job very effectively. (There's sci-fi to recommend here as well, but a lot of the people who were writing sci-fi about these topics seem to have almost given up.)

So here these comics are.

In 2013 and 2014, Peter Parker was dead. (Not really dead, just superhero dead.) The megalomaniac genius Otto Octavius, aka Dr. Octopus, was on the verge of dying from terminal injuries racked up during his career as a supervillain. So he tricked Peter Parker into swapping bodies with him, so that Parker died in Octavius's body and Octavius lived on inside Parker's. But in so doing, he acquired all of Parker's memories, and saw why Parker dedicated his life to being a hero. Octavius then chose to follow his example, but to do so with greater competence and intelligence, becoming the Superior Spider-Man.

The resulting comic book series was amazing. It's some of the best stuff I've ever seen in a whole lifetime reading comics.

Given that his competence and intelligence were indeed both superior, Octavius did actually do a much better job of being Spider-Man than Spider-Man himself had ever done, in some respects. (Likewise, as Peter Parker, he swiftly obtained a doctorate, launched a successful tech startup, and turned Parker's messy love life into something much simpler and healthier.) But given that Octavius was a megalomaniac asshole with no reservations about murdering people, he did a much worse job, in other respects.

In the comics, the Superior Spider-Man assassinates prominent criminals, blankets the entire city in a surveillance network composed of creepy little eight-legged camera robots, taps into every communications network in all of New York, and uses giant robots to completely flatten a crime-ridden area, killing every inhabitant. (He also rants constantly, in hilariously overblown terms, like the verbose and condescending supervillain he was for his entire previous lifetime.)

Along the way, Octavius meets "supervillains" who are merely pranksters -- kids who hit the mayor with a cream pie so they can tweet about it -- and he nearly kills them.

As every superhero does, of course, Peter Parker eventually comes back from the dead and saves the day. But during the course of the series' nearly two-year run, The Superior Spider-Man did an absolutely amazing job of illustrating how terrible it can be for a city to have a protector with incredible power and no ethical boundaries. Anybody who works for the NSA should read these comics before quitting their terrible careers in shame.

DC Comics, meanwhile, has rebooted Batgirl and made social media a major element of her life.

I only just started reading this series, and it's a fairly new reboot, but as far as I can tell, these new Batgirl comics are comics about social media which just happen to feature a superhero (or superheroine?) as their protagonist. She promotes her own activities on an Instagram-like site, uses it to track down criminals, faces impostors trying to leverage her fame for their own ends, and meets dates in her civilian life as Barbara Gordon through Hooq, a fictional Tinder equivalent.

The most important difference between these two series is that one ran for two years and is now over, while the other is just getting started. But here's my impression so far. Where Superior Spider-Man tackled robotics, ubiquitous surveillance, and an unethical "guardian" of law and order, Batgirl seems to be about the weird cultural changes that social media are creating.

Peter Parker's a photographer who works for a newspaper, and Clark Kent's a reporter, but this is their legacy as cultural icons created many decades ago. Nobody thinks of journalism as a logical career for a hero any more. Batgirl's a hipster in her civilian life, she beats up douchebag DJs, and I think she might work at a tech startup, but maybe she's in grad school. There's a fun contrast here; while the Superior Spider-Man's alter ego "Peter Parker," really Otto Octavius, basically represents the conflict between how Google and the NSA see themselves vs. how they look to everyone else — super genius vs. supervillain — Barbara Gordon looks a lot more like real life, or at least, the real life of people outside of that nightmarish power complex.

by Giles Bowkett ( at January 27, 2015 06:02 PM

Ian Bicking

A Product Journal: To MVP Or Not To MVP

I’m going to try to journal the process of a new product that I’m developing in Mozilla Cloud Services. My previous post was The Tech Demo, and the first in the series is Conception.

The Minimal Viable Product

The Minimal Viable Product is a popular product development approach at Mozilla, and judging from Hacker News it is popular everywhere (but that is a wildly inaccurate way to judge common practice).

The idea is that you build the smallest thing that could be useful, and you ship it. The idea isn’t to make a great product, but to make something so you can learn in the field. A couple definitions:

The Minimum Viable Product (MVP) is a key lean startup concept popularized by Eric Ries. The basic idea is to maximize validated learning for the least amount of effort. After all, why waste effort building out a product without first testing if it’s worth it.

– from How I built my Minimum Viable Product (emphasis in original)

I like this phrase “validated learning.” Another definition:

A core component of Lean Startup methodology is the build-measure-learn feedback loop. The first step is figuring out the problem that needs to be solved and then developing a minimum viable product (MVP) to begin the process of learning as quickly as possible. Once the MVP is established, a startup can work on tuning the engine. This will involve measurement and learning and must include actionable metrics that can demonstrate cause and effect.

– Lean Startup Methodology (emphasis added)

I don’t like this model at all: “once the MVP is established, a startup can work on tuning the engine.” You tune something that works the way you want it to but isn’t powerful or efficient or fast enough. You’ve established almost nothing when you’ve created an MVP; no aspect of the product is validated, and it would be premature to tune. But I see this antipattern happen frequently: get an MVP out quickly, often shutting down critically engaged deliberation in order to Just Get It Shipped, then use that product as the model for further incremental improvements. Just Get It Shipped is okay, and incrementally improving products is okay, but together they are boring and uncreative.

There’s another broad discussion to be had another time about how to enable positive and constructive critical engagement around a project. It’s not easy, but that’s where learning happens, and the purpose of the MVP is to learn, not to produce. In contrast I find myself impressed by the sheer willfulness of the Half-Life development process, which apparently involved months of six-hour design meetings, four days a week, producing large and detailed design documents. Maybe I’m impressed because it sounds so exhausting, a feat of endurance. And perhaps it implies that waterfall can work if you invest in it properly.

Plan plan plan

I have a certain respect for this development pattern that Dijkstra describes:

Q: In practice it often appears that pressures of production reward clever programming over good programming: how are we progressing in making the case that good programming is also cost effective?

A: Well, it has been said over and over again that the tremendous cost of programming is caused by the fact that it is done by cheap labor, which makes it very expensive, and secondly that people rush into coding. One of the things people learn in colleges nowadays is to think first; that makes the development more cost effective. I know of at least one software house in France, and there may be more because this story is already a number of years old, where it is a firm rule of the house, that for whatever software they are committed to deliver, coding is not allowed to start before seventy percent of the scheduled time has elapsed. So if after nine months a project team reports to their boss that they want to start coding, he will ask: “Are you sure there is nothing else to do?” If they say yes, they will be told that the product will ship in three months. That company is highly successful.

– from Interview Prof. Dr. Edsger W. Dijkstra, Austin, 04–03–1985

Or, a warning from a page full of these kinds of quotes: “Weeks of programming can save you hours of planning.” The planning process Dijkstra describes is intriguing; it says something like: if you spend two weeks making a plan for how you’ll complete a project in two weeks, then it is an appropriate investment to spend another week of planning to save half a week of programming. Or, if you spend a month planning for a month of programming, then you haven’t invested enough in planning to justify that programming work – to ensure the quality, to plan the order of approach, to understand the pieces that fit together, to ensure the foundation is correct, to ensure the staffing is appropriate, and so on.

I believe “Waterfall Design” gets much of its negative connotation from a lack of good design. A Waterfall process requires the design to be very, very good. With Waterfall the design is too important to leave to the experts, to let the architect arrange technical components, the program manager arrange schedules, the database architect design the storage, and so on. It’s anti-collaborative, disengaged. It relies on intuition and common sense, and those are not powerful enough. I’ll quote Dijkstra again:

The usual way in which we plan today for tomorrow is in yesterday’s vocabulary. We do so, because we try to get away with the concepts we are familiar with and that have acquired their meanings in our past experience. Of course, the words and the concepts don’t quite fit because our future differs from our past, but then we stretch them a little bit. Linguists are quite familiar with the phenomenon that the meanings of words evolve over time, but also know that this is a slow and gradual process.

It is the most common way of trying to cope with novelty: by means of metaphors and analogies we try to link the new to the old, the novel to the familiar. Under sufficiently slow and gradual change, it works reasonably well; in the case of a sharp discontinuity, however, the method breaks down: though we may glorify it with the name “common sense”, our past experience is no longer relevant, the analogies become too shallow, and the metaphors become more misleading than illuminating. This is the situation that is characteristic for the “radical” novelty.

Coping with radical novelty requires an orthogonal method. One must consider one’s own past, the experiences collected, and the habits formed in it as an unfortunate accident of history, and one has to approach the radical novelty with a blank mind, consciously refusing to try to link it with what is already familiar, because the familiar is hopelessly inadequate. One has, with initially a kind of split personality, to come to grips with a radical novelty as a dissociated topic in its own right. Coming to grips with a radical novelty amounts to creating and learning a new foreign language that can not be translated into one’s mother tongue. (Any one who has learned quantum mechanics knows what I am talking about.) Needless to say, adjusting to radical novelties is not a very popular activity, for it requires hard work. For the same reason, the radical novelties themselves are unwelcome.

– from EWD 1036, On the cruelty of really teaching computing science


All this praise of planning implies you know what you are trying to make. Unlikely!

Coding can be a form of planning. You can’t research how interactions feel without having an actual interaction to look at. You can’t figure out how feasible some techniques are without trying them. Planning without collaborative creativity is dull, planning without research is just documenting someone’s intuition.

The danger is that when you are planning with code, it feels like execution. You can plan to throw one away to put yourself in the right state of mind, but I think it is better to simply be clear and transparent about why you are writing the code you are writing. Transparent because the danger isn’t just that you confuse your coding with execution, but that anyone else is likely to confuse the two as well.

So code up a storm to learn, code up something usable so people will use it and then you can learn from that too.

My own conclusion…

I’m not making an MVP. I’m not going to make a maximum viable product either – rather, the next step in the project is not to make a viable product. The next stage is research and learning. Code is going to be part of that. Dogfooding will be part of it too, because I believe that’s important for learning. I fear thinking in terms of “MVP” would let us lose sight of the why behind this iteration – it is a dangerous abstraction during a period of product definition.

Also, if you’ve gotten this far, you’ll see I’m not creating minimal viable blog posts. Sorry about that.

by Ian Bicking at January 27, 2015 06:00 AM

January 25, 2015


MrDoob Approves – A Javascript CodeStyle Editor+Validator+Formatter Project

Near the close of the year 2014, I had an idea (while in the shower): write a little webpage which gives you the answer to “does mrdoob approve your code style”.

Screenshot 2015-01-13 09.28.24

Often times three.js gets decent pull requests that require a little more formatting to match the project’s code style. I myself have been found guilty many times of not adhering to the code style, but in the past, when there were no written guidelines on what it was, mrdoob and alteredq would reformat the code on their own.

Today we have slightly better documentation on contributing and code style, but code style offences still happen pretty regularly, I guess.

Therefore the idea was to simply use a browserified build of node-jscs together with the Mr.doob’s Code Style™ (MDCS) preset (developed earlier in the year), and make it super accessible on a website. One thought was to buy a domain name like, along the style of some “questionable” domains, e.g.

  1. (html5 features in browser)
  2. (popular webservices)
  3. (almost anything)

So that’s how the name and GitHub project “mrdoobapproves” came about, and I tweeted about it shortly after.

But I thought there was room for improvement from this initial idea, so I created a GitHub repository and applied some of the “Open open source” approach I learned from Mikeal Rogers.

Gero3, another awesome three.js contributor (who had previously contributed the mdcs preset to node-jscs), hopped on board, and in just a couple of days (over the new year especially) we merged 20 pull requests, adding more awesome features like autoformatting, and released version 1.0 –

So I guess that’s the long story short. I could possibly talk more about the history, but if you’re really interested you could piece things together reading,, and I could also go into why people would love or hate Mr.doob’s Code Style™, but for now I would say if you read too much dense code, trying out MDCS might be a fresh change for you.

There is also more to say about the implementation of this project, but for now I’ll just note it is built with codemirror and node-jscs (which uses esprima), which are really, really awesome libraries. There are also slight codemirror plugin additions, and auto-formatting is based on gero’s branch of node-jscs, since auto-formatting is coming to node-jscs.
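As an aside, the same check the website performs can be run locally with node-jscs itself. A minimal sketch of a project `.jscsrc` that selects the mdcs preset shipped with node-jscs (assuming node-jscs is installed, e.g. via `npm install -g jscs`) would be:

```json
{
    "preset": "mdcs"
}
```

With that file at the project root, running `jscs yourfile.js` reports any deviations from Mr.doob’s Code Style™ before you open a pull request.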

For those who are interested in JS code formatting, be sure to check out:

  • jsbeautifier – Almost defacto online JavaScript beautifier
  • JSNice – Statistical renaming, Type inference and Deobfuscation
  • jsfmt – another tool for renaming + reformatting JavaScript using esprima and esformatter.

In conclusion, I think this has been a really interesting project for me, and great thanks to Gero3, who has been a great help. I think this is also an example that when a topic (code styling) is usually contentious among programmers, rather than complaining or debating too much, it is more useful to build tools to fix things and focus on the things which matter.

So what’s next? It’s nice to see some usage of this tool on three.js, and perhaps one additional improvement is to hook up three.js with Travis for code style checking.
Also, if anyone’s interested in improving this tool, check out the enhancement list and feel free to create new issues and pull requests.

Finally, in case anyone missed the links, checkout

1. The demo
2. Github project
3. The 1.0 release

by Zz85 at January 25, 2015 02:44 PM

Decyphering Glyph

Security as Stencil

Image Credit: Horia Varlan

On the Internet, it’s important to secure all of your communications.

There are a number of applications which purport to give you “secure chat”, “secure email”, or “secure phone calls”.

The problem with these applications is that they advertise their presence. Since “insecure chat”, “insecure email” and “insecure phone calls” all have a particular, detectable signature, an interested observer may easily detect your supposedly “secure” communication. Not only that, but the places that you go to obtain them are suspicious in their own right. In order to visit Whisper Systems, you have to be looking for “secure” communications.

This allows the adversary to use “security” technologies such as encryption as a sort of stencil, to outline and highlight the communication that they really want to be capturing. In the case of the NSA, this dumps anyone who would like to have a serious private conversation with a friend into the same bucket, from the perspective of the authorities, as a conspiracy of psychopaths trying to commit mass murder.

The Snowden documents already demonstrate that the NSA does exactly this; if you send a normal email, they will probably lose interest and ignore it after a little while, whereas if you send a “secure” email, they will store it forever and keep trying to crack it to see what you’re hiding.

If you’re running a supposedly innocuous online service or writing a supposedly harmless application, the hassle associated with setting up TLS certificates and encryption keys may seem like a pointless distraction. It isn’t.

For one thing, if you have anywhere that user-created content enters your service, you don’t know what they are going to be using it to communicate. Maybe you’re just writing an online game, but users will use your game for something as personal as courtship. Can we agree that the state security services shouldn’t be involved in that? Even if you were specifically writing an app for dating, you might not anticipate that the police will use it to show up and arrest your users so that they will be savagely beaten in jail.

The technology problems that “secure” services are working on are all important. But we can’t simply develop a good “secure” technology, consider it a niche product, and leave it at that. Those of us who are software development professionals need to build security into every product, because users expect it. Users expect it because we are, in a million implicit ways, telling them that they have it. If we put a “share with your friend!” button into a user interface, that’s a claim: we’re claiming that the data the user indicates is being shared only with their friend. Would we want to put in a button that says “share with your friend, and with us, and with the state security apparatus, and with any criminal who can break in and steal our database!”? Obviously not. So let’s stop making the “share with your friend!” button actually do that.

Those of us who understand the importance of security and are in the business of creating secure software must, therefore, take on the Sisyphean task of not only creating good security, but of competing with the insecure software on its own turf, so that people actually use it. “Slightly worse to use than your regular email program, but secure” is not good enough. (Not to mention the fact that existing security solutions are more than “slightly” worse to use). Secure stuff has to be as good as or better than its insecure competitors.

I know that this is a monumental undertaking. I have personally tried and failed to do something like this more than once. As the Rabbi Tarfon put it, though:

It is not incumbent upon you to complete the work, but neither are you at liberty to desist from it.

by Glyph at January 25, 2015 12:16 AM

January 23, 2015

Blue Sky on Mars

The Correct Floor Plan For Your Startup

There’s been a lot of discussion lately about whether particular office floor plans are detrimental to your startup’s success.

Since I moonlight as an Investigative Journalist, I decided to analyze three popular approaches that you can take:

Open Floor Plan

An “open floor plan office” derives from the Polish phrase meaning, “fuck the employees let’s shove them all in a pile and let them sort it out”. This plan, popularized by Henry Ford, became pivotal in the 1950’s to America’s continued technological advantage over the Soviet Union due to the USSR’s overwhelming surplus of walls. No matter what approach the Politburo tried, the only way they could feasibly deal with this surplus was to create fields of cubicles, leading to the infamous open floor plan gap that would dictate foreign policy for decades to come.

In an open floor plan, teams of developers tend to be regurgitated onto endless miles of tables, which are pushed together so that management can easily corral them into meetings (but not close enough such that developers unionize).

To defend themselves, the immune systems of these developers naturally evolved extensions of their earlobes into growths that we now call “headphones”. This led many developers to claim that techno music “really helped them concentrate and code”, which is a fate easier to swallow than sitting and talking about something called “360 degree feedback” in a meeting room with your manager.

The open floor plan grew in fame in recent years because, according to Peter Thiel, “everyone wanted to be like that one guy in The Social Network trailer and yell MARRRRRRK! and throw shit on Mark Zuckerberg’s desk”.

Closed Floor Plan

The “closed floor plan office” came directly from those misled years spent in those Soviet grain farms which grew walls that formed the basis of their cubicle society from 1952-1991.

Though detrimental towards Team Bonding, cubicles eventually were adopted in the American workplace by Enron executives in 1994, who sought a more potent way to demoralize its workplace through the use of emotionally-crippling loneliness and isolation. After a few months of this approach, Enron’s handlers found they didn’t even need to lock the doors to cubicles anymore; employees would naturally sit at their desk until the end of business at 10PM. (Though oft-attempted, the economical and lucrative bathroom/cubicle combination was found to be unfeasible until new laws were passed by Congress in 2008 through rider on an extension of the Patriot Act.)

Still, closed floor plans are generally frowned upon by startups primarily due to the death of a prominent FORTRAN programmer who had a heart attack in his private office while drinking his eighteenth Mountain Dew™ that day during the first dotcom bubble. His body wasn’t discovered until the second bubble.

The Remote Office

A relatively late contender, the remote office became wildly popular with the introduction of Skype calling, which let you listen in with crystal-clear quality on the exact sound of the splash made while your coworker was on the toilet discussing thin client strategies during your 9am standup (well, sit-down).

The remote office would have similar soul-crushing loneliness of the closed floor plan office with the exception that now it is no longer necessary to wear pants. With that single achievement, remote work has been scientifically proven to be accessible, approachable, and advantageous to millions of remote workers who happen to be found in the same timezone as the main office.

Luckily, there’s always one asshole who is literally across the entire world and one of you always has to wake up way fucking early in the morning to talk to them and oh just kidding it’s always going to be you because the other dude is “a little hungover and still rolling a little from this crazy paris hilton dj set last night” to do it in his morning today so could you just wake up in eight hours thanks i’d really appreciate it!


So, as you can see, all of these office structures have considerable benefits and drawbacks.

After studying the problem for a while, I’ve come up with two possible solutions.

Abolish work. I’m not sure it makes sense to do work anymore, and I’m starting to think that people are happier just like, not doing it.

Use a mixture. Maybe — just maybe — the long, passionate discussion and debate about this is indicative that different people have different tastes. Some people dig open offices, some hate them, and some hate offices of all kinds (or have existing obligations at home that preclude them from going into an office every day). So it seems to me that a good approach might be to have open spaces, closed spaces, and a healthy remote environment, and let people choose. People’s tastes change over time, after all, and some days they might want isolation, some days they might not. A little flexibility goes a long way.

That said, the real best-of-all-possible-worlds option is, of course, to have one gigantic bathroom with beds in it so that people never have to leave the workplace. Traditionally it’s appropriate to buy employees an expensive watch after a few decades at the company, but I’ve always found it more prudent to give them TWO watches (one for each wrist!) during their on-boarding so that you can attach the chains directly to their new stainless steel watches so that you don’t have people accidentally leaving the company before their 30 years are up. Not sure why people haven’t adopted that practice yet.

January 23, 2015 12:00 AM

January 22, 2015

Ian Bicking

A Product Journal: The Technology Demo

I’m going to try to journal the process of a new product that I’m developing in Mozilla Cloud Services. My previous and first post was Conception.

As I finished my last post I had a product idea built around a strategy (growth through social tools and sharing) and a technology (freezing or copying the markup). But that’s not a concise product definition centered around user value. It’s not even trying. The result is a technology demo, not a product.

In my defense I’m searching for some product, I don’t know what it is, and I don’t know if it exists. I have to push this past a technology demo, but if I have to start with a technology demo then so it goes.

I’ve found a couple specific experiences that help me adapt the product:

  • I demo the product and I sense an excitement for something I didn’t expect. For example, a view that I thought was just a logical necessity might be what most appeals to someone else. To do this I have to show the tool to people, and it has to include things that I think are somewhat superfluous. And I have to be actively reading the person viewing the demo to sense their excitement.

  • Remind myself continuously of the strategy. It also helps when I remind other people, even if they don’t need reminding – it centers the discussion and my thinking around the goal. In this case there’s a lot of personal productivity use cases for the technology, and it’s easy to drift in that direction. It’s easy because the technology facilitates those use cases. And while it’s cool to make something widely useful, that won’t make this tool work the way I want as a product, or work for Mozilla. (And because I plan to build this on Mozilla’s dime it better work for Mozilla! But that’s a discussion for another post.)

  • I’ll poorly paraphrase something I’m sure someone can source in the comments: a product that people love is one that makes those people feel great about themselves. In this case, makes them feel like a journalist and not just a crank, or makes them feel like they are successfully posing as a professional, or makes them feel like what they are doing is appreciated by other people, or makes them feel like an efficient organizer. In the product design you can exalt the product, try to impress people, try to attract compliments on your own prowess, but love comes when a person is impressed with themselves when they use your product. This advice helps keep me from valuing cleverness.

A common way to pull people out of technology-focused thinking is to ask “what problem does this solve?” While I appreciate this question more than I used to, it still makes me bristle. Why must everything be focused on problems? Why not opportunities! Why? An answer: problems are cases where a person has already articulated a tension and an openness to resolution. You have a customer in waiting. But must we confine ourselves to the partially formed conventional wisdom that makes something a “problem”? (One fair answer to this question is: yes. I remain open to other answers.) Maybe a more positive alternative to “what problem does this solve?” is “what does this let people do that they couldn’t do before?”

What I’m certain of is that you should constantly remember the people using your tool will care most about their interests, goals, and perspective; and will not care much about the interests, goals, or perspective of the tool maker.

So what should this tool do? If not technology, what defines it? A pithy byline might be share better. I don’t like pithy, but maybe a whole bag of pithy:

  • Improving on the URL
  • Own what you share
  • Share content, not pointers
  • Share what you see, anything you see
  • Every share is a message, make it your message
    Dammit, why do I feel compelled to noun “share”?
  • Share the context, the journey, not just the web destination
  • Own your perspective, don’t give it over to site owners
  • Know how and when people see what you share
  • Build better content, even if the publisher doesn’t
  • Trade in content, not promises for content
  • Copy/enhance/share

No… quantity doesn’t equal quality, I suppose. Another attempt:

When you share, you are a publisher. Your medium is the IM text input, or the Facebook status update, or the email composition window. It seems casual, it seems pithy, but that individual publishing is what the web is built on. I respect everyone as a publisher, every medium as worthy of improvement, and this project will respect your efforts. We will try to make a tool that can make every instance just a little bit better, simple when all you need is simple, polished if you want. We will defer your decisions because you should decide in context, not make decisions in the order that makes our work easier; we will be transparent to you, your audience, and your source; respect for the reader is part of our brand promise, and that adds to the quality of your shares; we believe content is a message, a relationship between you and your audience, and there is no universally appropriate representation; we believe there is order and structure in information, but only when that information is put to use; we believe our beliefs are always provisional and tomorrow it is our prerogative to rebelieve whatever we want most.

Who is we? Just me. A pretentiously royal we. It can’t stay that way for long though. More on that soon…

[The next post in this series is To MVP Or Not To MVP]

by Ian Bicking at January 22, 2015 06:00 AM

Blue Sky on Mars

Don't Break the Streak Maybe

The most important philosopher of our time, Jerry Seinfeld, has a major tip about productivity:

He told me to get a big wall calendar. For each day that I do my task of writing, I get to put a big red X over that day. Don’t break the chain. Skipping one day makes it easier to skip the next.

This is very simple advice that, when applied correctly, generates hundreds of millions of page views for your tech blog when you inevitably write about it in the context of technology and productivity and time management, which are themselves the Holy Trinity of Tech Journalism That Get People To Link To You. If Jerry Seinfeld had also said “don’t break the chain, and also I hate JavaScript” it would have hit The Tech Industry Traffic Sweet Spot and the internet would have collapsed.


I’m into the idea of streaks. They work for me. But there’s also been an even more interesting conversation around the idea of productizing streaks.

When you take an inherently personal choice — choosing to accomplish a task on a regular basis — and put it into your product, you’re making a pretty large value statement. You’re saying that the task at hand is worth doing on a regular basis. This is usually true on the surface, but it still raises some interesting conversations about what that means.

It’s hip to put this concept into products these days, and for good reason: it encourages users to use your product regularly.

Strava has a couple of “current week” widgets that make you feel unhealthy when you’re not exercising:

Strava's current week streak

Day One gives you a visual graph of the last fifty days of journal entries:

Day One streak

And, of course, Uber's free-every-third-ride promotion:

Uber streak

Just kidding.

Games, of course, have honed these mechanics for years. Destiny, which I haven’t played because I have a life, much to the chagrin of my friends who should be in a twelve-step program, apparently only releases some weapons on specific days that you need to be online for. WoW has similar mechanics of getting you playing often and regularly. Anything to get people using your product over and over again.

What’s healthy, what’s not

I think there are spectrums of this behavior. GitHub, for example, has had our Contributions Graph up for a few years now. You get a colored square for every day you write a commit, open a new issue, and so on.

Here’s my contributions graph for the last year:

@holman's streak

We’ve gotten some flak — and appropriately so, I think — for suggesting that a streak includes weekends. If your concept of “GitHub” falls along the lines of being work-related, then yeah, working on the weekends might be unhealthy, depending on your perspective. For others, working on open source on the weekends is a hobby, and a break from the grind, so to speak. Still others enjoy seeing grey squares on weekends as a badge of honor, as proof that they’re able to disconnect. And, of course, there’s plenty of people who couldn't possibly care whatsoever.

Like a lot of things in society, I think it usually comes down to deciding how you use a particular tool. And that’s tricky for a lot of reasons.

Take a look at my contributions graph above again. Having the concept of a daily “streak” was really, really helpful for me. I made a decision to build more than the previous few years (where I focused less on code and more on things like conferences and doing support), and that brought me a great deal of satisfaction and happiness. Promoting the concept of a streak helped me make my target, and I think it made me a better human last year.

Until it didn’t. At a certain point last fall, it stopped becoming a useful tool and became more of something I did because I did it because it was something I did. That, combined with a number of factors, led to some straight-up burnout. It’s a concept I’ve never really had to deal with. Shit, I like building things, why wouldn’t I get turned on from doing it often?

Know thyself

The difference, as I’m discovering while taking a few months off to pick up the pieces, is that you need to know what’s helpful to you and what’s not. And the real trick is that what’s helpful can be fluid. The problem really happens when you assume that what is helpful yesterday is the same thing that is helpful today. And they both can be different from what’s helpful tomorrow.

And humans are pretty shit at that, I think. It requires a lot of personal responsibility to ask yourself questions… and ask them frequently, at that. This goes for big things like work/life balance, but even for smaller things like gaming and exercise streaks. Are you working out too much, because that’s just what you do? Most athletes have gotten to the point of needing to step back and admit that whoa, I shouldn’t actually run this week because my knee is pretty fucked up right now, and I kind of wish I took it a little easier last time. Maybe then I wouldn’t have gotten to where I am now.

Man, that thought process is endemic to a lot of things in life.

Is productizing streaks a problem in our industry? Sure, at some level. Are some streaks healthy? Sure. Are some unhealthy? Sure. The real trick, though, is figuring out what really works for you, and not getting sucked into something if it ends up becoming less valuable to you.

Sometimes it’s great to break the streak.

January 22, 2015 12:00 AM

January 21, 2015

Blue Sky on Mars

Post-Publicity Personalities

Part of the horror of looking through your old tweets is discovering how much of an asshole you were when no one knew you existed.

Online behavior tends to change once you’ve actually been in the spotlight yourself. Here’s a flowchart I made to help clarify this:

As you can see, after achieving a PUBLICITY EVENT — think along the lines of a blog post, a product launch, public speaking, things like that — you scientifically have a 79% chance of becoming A Better Human, and unfortunately you have a 23% chance, scientifically, of becoming an even bigger dicknose than you were before. (This is actually a scientific, double-blind study. It’s double-blind because neither I nor you have access to the data.)

It’s cool if you haven’t gotten to a PUBLICITY EVENT yet, and it’s cool if you never do. After all, there’s a non-negative chance that you might end up being a jerk. But for those that haven’t, here’s how something like this goes:

You: Hey! I have something important I’d like to share with the world!
The World: Fuck you!

Of The World, the silent majority will likely appreciate what you’ve shared, the minority will be complimentary, and the fringe extreme will tell you in very specific detail why you shouldn’t have been born and also your mother is of suspicious descent as well.

Your major mistake, of course, is that you continue to stubbornly refuse to stop being a human being and thus genetically will focus exclusively on the feedback from the latter group of people. And your feeble human emotions will be all like “WAHHHHH I’M GOING TO TYPE ANGRY TWEETS ALL NIGHT! I CAN FIX THIS!” in-between watching episodes of Star Trek: The Next Generation, because Captain Jean-Luc Picard is the only one who really gets you right now.

Then the split happens

I think that’s the point where things change for some people. Everyone grows a bit of a hardened shell, but for some people it makes them jerks. For others, it makes them more accommodating. I look back at my old tweets and marvel at how abrasive they sometimes were towards apps I used every day, company decisions I didn’t understand at the time, and so on. In my case, maybe it’s just a byproduct of growing up: the more experiences you go through, the more understanding you are of flaws.

This isn’t a matter of right or wrong, in my mind. People can legitimately fuck up, or be wrong, or do something wrong. That’s always going to happen throughout the timeline of human civilization. But it’s how you carry yourself when you disagree that’s interesting to me. A lot of people, once they’ve done it themselves, understand that putting themselves out there is a scary thing. They’re able to disagree, but in a way that isn’t carpet bombing “fuck you” over tweets.

All of this is why I think it’s great for more people to write blog posts, to contribute to open source, to learn about public speaking, and so on. (If only we could filter by that metric!) The amount of respect I have for someone who just did their first five-minute lightning talk, even if it wasn’t perfect, is unbounded. It’s a hard first step to make. And even though I was being facetious about the percentage breakdown earlier, I do think that giving more people the spotlight generates a more understanding atmosphere for everyone.

So, create. Help others create. And be less of a meanie online, when you can.

If you didn’t like this internet article, please address your concerns to @holman on Twitter, and please include a link to your favorite TNG episode — thanks in advance!

January 21, 2015 12:00 AM

January 20, 2015

Blue Sky on Mars

You’ll Always Miss Being in the Basement

We were tucked away in the corner room of a damp, musky basement.

A walking cliché, really: a handful of basically kids, working on The Next Big Thing, getting paid practically nothing, in a startup on University Ave in Palo Alto. We’d see the Facebook kids across the street dress up for their toga parties and hundred million user celebrations. Listen in on VCs courting people like us at lunch. Go to the never-ending list of meetups every week.

Our shabby basement

The founder and the CEO had done it all before in the previous boom, growing the company to billions. They knew what growth actually meant for a company; they had lived it.

One day, in-between talking about our obviously bright and inevitable future and where we wanted to take the fledgling company, the founder stopped and said something that sounded like a lightning bolt to me, even back then:

One day this company will get huge. You’ll help hire your replacements, and they’ll hire still more people under them. We’ll be making tons of money, we’ll IPO, we’ll be famous, we’ll move into bigger and better offices.

As great as that’s going to be, I guarantee you one thing: you’re always going to miss being down in this basement, with these people.

Like many things he said, this was true at face value, but it just wasn’t in the cards for our particular company. The company was perpetually just about to close the one massive sale that would make us rich, at least according to our sales team.

But he was right. Even now, parts of me want to be back in that basement. I want to be working with that team, on a product we thought would be the next big thing, laughing with each other on the walk to lunch. Feeling like a part of something.

Most decent companies have a basement. Even if the company isn’t successful, or rich, or famous, or doesn’t even have a physical basement, they have a basement. Even if the work itself was grueling and underappreciated… a lot of pride can come from within a team’s frustration, at times.

I’m stretching the metaphor, but money can’t buy basements. My favorite memories from my five years at GitHub are simple and cheap. Having our weekly small-sided lunchtime soccer game against Atlassian. Passionately arguing the style of music required for us to finish Pull Requests. Organizing drinks with Square back when we both could fit into one tiny bar. Debating scaling of our respective sites with Heroku friends (“it’s just a fucking Git repository, that’s easy!”, we would inevitably learn). Literally every one of our office dogs.

Look: there’s plenty to worry about in a growing company. Product, trajectory, hiring. At the end of the day, a company needs to make money. That’s how you pay the employees that contribute so much to your success.

But at some point, you’re going to want to end up back in that basement.

Probably the folly of being human, maybe. You remember the good times and forget the bad. You take to glamorous reinterpretations of history. But it’s a nice comfort nonetheless. It’s why I do things, anyway.

January 20, 2015 12:00 AM

January 19, 2015

John Udell

TypeScript: Industrial-strength JavaScript

Historians who reflect on JavaScript's emergence as a dominant programming language in the 21st century may find themselves quoting Donald Rumsfeld: "You go to war with the army you have, not the army you might wish to have."

For growing numbers of programmers, JavaScript is the army we have. As we send it into the field to tackle ever more ambitious engagements both client and server side, we find ourselves battling the language itself.


JavaScript was never intended for large programs built by teams that use sophisticated tools to manage complex communication among internal modules and external libraries. Such teams have long preferred strongly typed languages like Java and C#. But those languages' virtual machines never found a home in the browser. It was inevitable that JavaScript alternatives and enhancements would target the ubiquitous JavaScript virtual machine.

To read this article in full or to leave a comment, please click here

by Jon Udell at January 19, 2015 11:00 AM

Blue Sky on Mars

How GitHub Writes Blog Posts

You may have heard that we use GitHub to build GitHub. But our company isn't built on just code: there's strategy and HR policies and internal communications and a multitude of other documents that need to get written, too. We can use GitHub to write all of those as well.

Now, we're a bit of an oddball company. You'll have a tough time finding another organization with so many lawyers who know Git. But much of our code workflow can be easily applied elsewhere in the company, too: like writing our blog posts.

The Blog Post

The very first rule of GitHub, which has been the rule of law since day one: if you've built a new feature, you get to write the announcement post for it.

Letting a team write their own announcement is a nice nod to all the work they've put into the project. This habit is similar to the author shout-outs Apple would tuck into the About windows of early Macintosh apps. Celebrating each other's successes is a really important facet of a successful culture: shipping is contagious, after all.

Like code, though, you want the work you're putting out into the world to be approachable and to make sense. A great way to do that is to solicit outside opinions. So, also like code, our blog posts go through the pull request pipeline. Cut a new branch on our blog posts repository, write a new draft, and open a pull request to start the discussion (all of which you can do without leaving your browser).

Continuous Integration

Normally, the first thing that happens when we open a pull request is that continuous integration runs the tests. Blog posts don't need to be any different, surprisingly enough. Think of this process as a syntax linter for your words: breaking the build isn't necessarily bad, per se, but it gives you suggestions you might want to incorporate. You get immediate feedback without requiring a lot of additional overhead from our blog editors.

For example, here's the current small suite of tests that run on push:

  • Image alt tags. All images in your Markdown should have "alt" tags for accessibility purposes.
  • Image size. All images should be less than a megabyte in size. This helps catch accidental screenshots that are massive and would slow down the mobile experience.
  • Image resolution. All images should look good on high-density displays. Currently this just means an image should be twice the width of the on-page content area.
  • CDN-hosted images. All images should be hosted on our CDN, for performance, security, and longevity reasons.
  • No emoji. We use emoji all over the place internally, but they're usually not quite the right tone in external communications.
  • No "Today, ". We realized that gajillions of our past posts started out like "Today, we're pleased to announce x". This is just a silly style thing, but we decided to avoid this common intro in future posts for the sake of variety and sounding like a human.

Failing any of these tests will break the build for your specific pull request. You can still merge it and publish the post... CI in this repository is really just used as a suggestion rather than a hard-and-fast rule. The important part is that running these spot checks reduces the amount of work and time needed for someone else to manually inspect the size of your images, for example.

Merging a red pull request won't break the overall repository since these particular tests are only run on files changed in each branch:

diffable_files = `git diff --name-only --diff-filter=ACMRTUXB origin/master... | grep .md`.split("\n")

The tests themselves are just a few lines of straightforward Ruby. Simple tests shouldn't require complex code. Here’s a Gist of what we use currently, if you’re interested.
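To give a flavor of how small such checks can be, here is a hypothetical sketch in the same spirit (my own illustration, not the actual Gist): it scans a Markdown file for images that are missing alt text or are over a megabyte.

```ruby
# Hypothetical sketch of a prose-CI check like the ones described above:
# flag any image in a Markdown post that is missing alt text or is larger
# than a megabyte. Not GitHub's actual code; names are invented.
ONE_MEGABYTE = 1_048_576

def check_markdown(markdown, image_sizes)
  failures = []
  # Markdown images look like ![alt text](path/to/image.png)
  markdown.scan(/!\[([^\]]*)\]\(([^)]+)\)/) do |alt, path|
    failures << "#{path}: missing alt text" if alt.strip.empty?
    size = image_sizes[path]
    failures << "#{path}: over 1 MB" if size && size > ONE_MEGABYTE
  end
  failures
end

post = "![](huge.png)\n![A small chart](chart.png)\n"
sizes = { "huge.png" => 2_000_000, "chart.png" => 40_000 }
check_markdown(post, sizes)
# => ["huge.png: missing alt text", "huge.png: over 1 MB"]
```

A real version would read the changed files from the `git diff` command above and stat the images on disk, but the checks themselves really can stay this simple.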


With your draft up and running with CI, it's time to actually fix your egregious sentence structure problems.

The first step is to get the right eyeballs on your draft. Most of the people who are interested in helping edit your drafts probably have already watched the repository and have gotten a notification about your post, so you’ll usually only have to wait a few hours to start getting meaningful copy edits suggested to you (or applied directly to your pull request, if they're non-controversial and simple changes).

It’s also helpful to mention a team in the pull request, of course. We have a bunch of writers on @github/copy that love to help out, and it’s also helpful to ping the relevant team who worked on the feature with you for their input as well:

Mentioning teams


At that point, we can rely upon some of the cool stuff we’ve already built into GitHub: prose diffs, for example, takes our Markdown and renders it to something closer to what the final result will look like on the blog. We use inline diff commenting for more specific word changes, and normal comments in a pull request for high-level “how about taking the end of this post in this direction?” thoughts.

Since it’s all versioned, we have a good history of changes from the initial draft to the final draft. Can’t tell you the number of times I’ve been happy I could go back and fish out some phrasing from earlier revisions of a draft.


The way we write blog posts is definitely a stretch of dogfooding, probably more so than other approaches we’ve taken at GitHub. Whether this jibes with how your company operates is up to you. If not, be sure to check out some other collaborative writing tools, like Google Docs. (My favorite is @natekontny’s Draft, by the way.)

There are two reasons why I wanted to share this workflow, though. The first is CI for prose, which I think is hilarious and has actually saved us a bit of time here and there.

The second, though, is the same reason why I dig sharing code on GitHub: it makes information accessible. I want more of the company to have access to more of itself. Marketing tends to stereotypically be sealed away in an ivory tower in most companies, and I think that’s what helps make it feel icky and inauthentic in a lot of cases.

By writing communications in an accessible manner internally, you benefit from a voice that’s hopefully more diverse, more impactful, and more genuine. And that’s the type of marketing that people appreciate.

January 19, 2015 12:00 AM

January 15, 2015

Ian Bicking

A Product Journal: Conception

I’m going to try to journal the process of a new product that I’m developing in Mozilla Cloud Services.

When Labs closed and I entered management, I decided not to do any programming for a while. I had a lot to learn about management, and that’s what I needed to focus on. Whether I learned what I needed to, I don’t know, but I have been getting a bit tired.

We went through a fairly extensive planning process towards the end of 2014. I thought it was a good process. We didn’t end up where we started, which is a good sign – often planning processes are just documenting the conventional wisdom and status quo of a group or project, but in a critically engaged process you are open to considering and reconsidering your goals and commitments.

Mozilla is undergoing some stress right now. We have a new search deal, which is good, but we’ve been seeing declining market share, which is bad. And when you consider that desktop browsers are themselves a decreasing share of the market, it looks worse.

The first planning around this has been to decrease attrition among our existing users. Longer term much of the focus has been in increasing the quality of our product. A noble goal of course, but does it lead to growth? I suspect it can only address attrition, the people who don’t use Firefox but could won’t have an opportunity to see what we are making. If you have other growth techniques then focusing on attrition can be sufficient. Chrome for instance does significant advertising and has deals to side-load Chrome onto people’s computers. Mozilla doesn’t have the same resources for that kind of growth.

When I finished up the planning process I realized: damn, all our plans were about product quality. And I liked our plan! But something was missing.

This perplexed me for a while; I didn’t really know what to make of it. Then a friend I was talking with asked: then what do you want to make? – a seemingly obvious question that no one had asked me, and somehow hearing the question coming at me was important.

Talking through ideas, I reluctantly kept coming back to sharing. It’s the most incredibly obvious growth-oriented product area, since every use of a product is a way to implore non-users to switch. But sharing is so competitive. When I first started with Mozilla we would obsess over the problem of Facebook and Twitter and silos, and then think about it until we threw our hands up in despair.

But I’ve had this trick up my sleeve that I pull out for one project after another because I think it’s a really good trick: make a static copy of the live DOM. Mostly you just iterate over the elements, get rid of scripts and stuff, do a few other clever things, use <base href> and you are done! It’s like a screenshot, but it’s also still a webpage. I’ve been trying to do something with this for a long time. This time let’s use it for sharing…?
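As a toy illustration of that trick (my own sketch, not Mozilla code; it is regex-based and server-side, whereas the real trick walks the live DOM in the browser and serializes its current state):

```ruby
# Toy sketch of "freeze the page": strip scripts and anchor relative URLs.
# A real implementation would iterate over the live DOM and capture computed
# state; this regex version only illustrates the idea. `freeze_page` is an
# invented name for illustration.
def freeze_page(html, original_url)
  frozen = html.gsub(%r{<script\b[^>]*>.*?</script>}mi, "")  # drop all scripts
  # Insert <base href> so relative links and images still resolve against
  # the page's original location.
  frozen.sub(/<head([^>]*)>/i) { "<head#{$1}><base href=\"#{original_url}\">" }
end

page = '<html><head><title>Hi</title></head>' \
       '<body><script>alert(1)</script><img src="pic.png"></body></html>'
frozen = freeze_page(page, "http://example.com/post/")
# frozen now has no <script> tags and a <base href> pointing at the source
```

The result is still a webpage, but an inert one: like a screenshot you can scroll, select text in, and host anywhere.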

So, the first attempt at a concept: freeze the page as though it’s a fancy screenshot, upload it somewhere with a URL, maybe add some fun features because now it’s disassociated from its original location. The resulting page won’t 404, you can save personalized or dynamic content, we could add highlighting or other features.

The big difference with past ideas I’ve encountered is that here we’re not trying to compete with how anyone shares things, this is a tool to improve what you share. That’s compatible with Facebook and Twitter and SMS and anything.

If you think pulling a technology out of your back pocket and building a product around it is putting the cart before the horse, well, maybe… but you have to start somewhere.

[The next post in the series is The Tech Demo]

by Ian Bicking at January 15, 2015 06:00 AM

January 14, 2015

Greg Linden

More on what to advertise when there is no commercial intent

Some of the advertising out there is getting spooky. If you look at a product at many online stores, that product will then follow you around the web.

Go to BBC News, for example, and there will be those dishes you were looking at yesterday on Overstock. Not just any dishes, the exact same dishes. Just in case you forgot about them, there they are again next time you go. And again. And again.

A few years ago, I wrote an article, "What to advertise when there is no commercial intent?". That article suggested that, on sites like news sites, we might not have immediate commercial intent, and might have to reach back into the past to find strong commercial intent. It advocated for personalized advertising that helped people discover interesting products and deals related to strong commercial intent they had earlier.

However, this did not mean that you should just show the last product I looked at. That is refinding, not personalized recommendations. Refinding is all a lot of these ads are doing. You look at a chair, ads follow you around the web showing you ads for that same chair that you already know about over and over again. That's not discovery. That's spooky and not helpful.

Personalized ads should help people discover things they don't know about related to past purchase intent. If I look at a chair, show me highly reviewed similar furniture and good coupons and big deals related in some non-obvious way to that chair and that store. Don't just show me the same chair again. I know about that chair. Show me something I don't know. Help me discover something I haven't found yet.

I understand that the reason these companies do refinding is that it's hard to do anything better. Doing useful recommendations of related products and deals is hard. Helping people discover something new and interesting is hard. Personalized recommendations require a lot of data, clever algorithms, and a huge amount of work. Refinding is trivially easy.
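To make the contrast concrete, here is a toy sketch (my own illustration, with invented names and hand-labeled data, not any ad network's code): refinding just replays the item you already saw, while even a crude item-to-item recommender surfaces a related product you haven't.

```ruby
# Toy item-to-item recommender over hand-labeled product feature tags.
# Similarity here is Jaccard overlap of tag sets; real systems use far
# richer signals, but the contrast with refinding is the same.
PRODUCTS = {
  "blue dinner plates" => ["kitchen", "dishes", "blue", "ceramic"],
  "blue serving bowl"  => ["kitchen", "dishes", "blue", "ceramic", "bowl"],
  "oak dining chair"   => ["furniture", "dining", "wood"],
  "phone charger"      => ["electronics", "usb"],
}

# Jaccard similarity: size of intersection over size of union.
def jaccard(a, b)
  (a & b).size.to_f / (a | b).size
end

def recommend(viewed, catalog)
  catalog.reject { |name, _| name == viewed }          # never just refind
         .max_by { |_, tags| jaccard(catalog[viewed], tags) }
         .first
end

recommend("blue dinner plates", PRODUCTS)
# => "blue serving bowl"
```

Refinding would show the dinner plates again; the recommender at least points at something new that shares their attributes, which is the discovery the post is asking for.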

But publishers aren't doing themselves any favors by allowing these startups to get away with this kind of useless advertising. As a recent study says, "the practice of running annoying ads can cost more money than it earns." That short-term revenue bump from these spooky refinding ads is like a sugar rush, feels good while it lasts, but hurts in the long-term.

They can and should do better. Personalization, including personalized advertising, should be about helping people discover things they could not easily find on their own. Personalization should not be refinding, just showing what I found before, just exposing my history. Personalization should be helpful. Personalization should be discovery.

by Greg Linden at January 14, 2015 04:11 PM

January 13, 2015

Ian Bicking

Being A Manager Is Lonely

Management is new for me. I have spent a lot of time focusing on the craft of programming, now I focus on the people who focus on the craft of programming.

During the fifteen years I’ve been participating in something I’ll call a developer community, I’ve seen a lot of progress. Sometimes we wax nostalgic and assert that no progress has been made… but progress has been made. We, as professionals, as hobbyists, as passionate practitioners, understand much more about how to test, design, package, distribute, and collaborate around code. And just about how to talk about it all.

I am a firm believer that much of that progress is due to the internet. There were technological advancements, sure. And there have been books teaching practice. But that’s not enough. There were incredible ideas about programming in the 70s! But there wasn’t the infrastructure to help developers assimilate those ideas.

I put more weight on people learning than on people being taught. If the internet were just a good medium for information dispersal — a better kind of book — then that would be nice, but not transformational. The internet is more than that: it’s a place to discuss, and disagree, and watch others discussing. You can be provocative, and then step back and take on a more conservative opinion – a transformation most people would be too shy to commit to print. (As if a substantial portion of people have ever had the option to consider what they want to commit to print!)

I think a debate is an opportunity; seldom an opportunity to convince anyone else of what you think, but a chance to understand why you think what you do, to come to a more mature understanding, and maybe create a framework for future changes of opinion. This is why I bristle at the phrase “just choose the right tool for the job” – this phrase is an attempt to shut down the discussion about what the right tool for the job is!

This is a long digression, but I am nostalgic for how I grew into my profession. Nostalgic because now I cannot have this. I cannot discuss my job. I cannot debate the details. I cannot tell anecdotes to elucidate a point. I cannot discuss the policies I am asked to implement – the institutional instructions applied to me and through me. I can only attempt to process my experiences in isolation.

And there are good reasons for this! While this makes me sad, and though I still question if there is not another way, there are very good reasons why I cannot talk about my work. I am in a leadership position, even if only a modest and subordinate leader. There is a great deal of potential for collateral damage in what I say, especially if I talk about the things I am thinking most about. I think most about the tensions in my company, interpreting the motivations of the leadership in the company, I think about the fears I sense in my reports, the unspoken tensions about what is done, expected, aspired to. I can discuss this with the individuals involved, but they are the furthest thing from a disinterested party, and often not in a place to develop collaborative wisdom.

This is perhaps unfair. I work with very thoughtful people. Our work is grounded in a shared mission, which is a powerful thing. But it’s not enough.

Are we, as a community of managers (is there such a thing?) becoming better? Yes, some. There are management consultants and books and other material about management, and there is value in that. But it is not a discussion, it is not easy to assimilate. I don’t get to interact with a community of peers.

On the topic of learning to manage, I have listened to many episodes of Manager Tools now. I’ve learned a lot, and it’s helped me, even if they are more authoritarian than I am comfortable with. I’m writing this now after listening to a two-part series: Welcome To They: Professional Subordination and Part 2.

The message in these podcasts is: it is your responsibility as a manager to support the company’s decisions. Not just to execute on them, but to support them, to communicate that support, and if you disagree then you must hide that disagreement in the service of the company. You can disagree up — though even that is fraught with danger — but you can’t disagree down. You must hold yourself apart from your team, putting a wall between you and your team. To your team you are the company, not a peer.

There is a logical consistency to the argument. There is wisdom in it. The impact of complaints filtering up is much different than the impact of complaints filtering down. In some sense as a manager you must manufacture your own consensus for decisions that you cannot affect. You are probably doing your reports a favor by positively communicating decisions, as they will be doing themselves a favor by positively engaging with those decisions. But their advice is clear: if you are asked your opinion, you must agree with the decision, maybe stoically, but you must agree, not just concede. You must speak for the company, not for yourself.

Fuck. Why would I want to sign up for this? The dictate they are giving me is literally making me sad. If it didn’t make any sense then I might feel annoyed. If I thought it represented values I did not share then I might feel angry. But I get it, and so it makes me sad.

Still, I believe in progress. I believe we can do better than we have in the past. I believe in unexplored paths, in options we aren’t ready to compare to present convention, in new ways of thinking about problems that break out of current categories. All this in management too – which is to say, new ways to form and coordinate organizations. I think those ideas are out there. But damn, I don’t know what they are, and I don’t know how to find out, because I don’t know how to talk about what we do and that’s the only place where I know how to start.

[I wrote a followup in Encouraging Positive Engagement]

by Ian Bicking at January 13, 2015 06:00 AM

January 12, 2015

Decyphering Glyph

The Glyph

If you have seen me around the Internet, you may know that I am typically represented by an obscure symbol.

The “Glyph” Glyph

I have been asked literally hundreds of times about the meaning of that symbol, and I’ve always been cryptic in response because I felt that a full explanation was too much work. The symbol is something I invented, the invention is fairly intricate, and it takes some time to explain; describing it in person requires a degree of sustained narcissism that I’m not really comfortable with.

You all keep asking, though, and I really do appreciate the interest, so thanks to those of you who have asked over and over again: here it is. This is what the glyph means.

Ulterior Motive

I do have one other reason that I’m choosing to publish this particular tidbit now. Over the course of my life I have spent a lot of time imagining things, doing world-building for games that I have yet to make or books that I have yet to write. While I have published fairly voluminously at this point on technical topics (more than once on actual paper), as well as spoken about them at conferences, I haven’t made many of my fictional ideas public.

There are a variety of reasons for this (not the least of which is that I have been gainfully employed to write about technology, and nobody has ever wanted to do that for fiction), but I think the root cause is that I’m afraid these ideas will be poorly received. I’m afraid that I’ll be judged according to the standards for the things that I’m now an expert professional at – software development – for something that I am a rank amateur at – writing fiction. And this problem is only going to get worse as I get better at the former and, by not publishing, keep not getting practice at the latter.

In other words, I’m trying to break free of my little hater.

So this represents the first – that I recall, at least – public sharing of any of the Divunal source material, since the Twisted Reality Demo Server was online 16 years ago. It’s definitely incomplete. Some of it will probably be bad; I know. I ask for your forbearance, and with it, hopefully I will publish more of it and thereby get better at it.


I have been working on the same video game, off and on, for more or less my entire life. I am an extremely distractable person, so it hasn’t seen that much progress - at least not directly - in the last decade or so. I’m also relentlessly, almost pathologically committed to long-term execution of every dumb idea I’ve ever had, so any minute now I’m going to finish up with this event-driven networking thing and get back to the game. I’ll try to avoid spoilers, in case I’m lucky enough for any of you to ever actually play this thing.

The symbol comes from early iterations of that game, right about the time that it was making the transition from Zork fan-fiction to something more original.

Literally translated from the in-game language, the symbol is simply an ideogram that means “person”, but its structure is considerably more nuanced than that simple description implies.

The world where Divunal takes place, Divuthan, was populated by a civilization that has had digital computers for tens of thousands of years, so their population had effectively co-evolved with automatic computing. They no longer had a concept of static, written language on anything like paper or books. Ubiquitous availability of programmable smart matter meant that the language itself was three dimensional and interactive. Almost any nuance of meaning which we would use body language or tone of voice to convey could be expressed in varying how letters were proportioned relative to each other, what angle they were presented at, and so on.

Literally every Divuthan person’s name is some variation of this ideogram.

So a static ideogram like the one I use would ambiguously reference a person, but additional information would be conveyed by diacritical marks consisting of other words, by the relative proportions of sizes, colors, and adornments of various parts of the symbol, indicating which person it was referencing.

However, the game itself is of the post-apocalyptic variety, albeit one of the more hopeful entries in that genre, since restoring life to the world is one of the player’s goals. One of the things that leads to the player’s entrance into the world is a catastrophe that has mysteriously caused most of the inhabitants to disappear and disabled or destroyed almost all of their technology.

Within the context of the culture that created the “glyph” symbol in the game world, it wasn’t really intended to be displayed in the form that you see it. The player would first see such a symbol after entering a ruined, uninhabited residential structure. A symbol like this, referring to a person, would typically have adornments and modifications indicating a specific person, and it would generally be animated in some way.

The display technology used by the Divuthan civilization was all retained-mode, because I imagined that a highly advanced display technology would minimize power cost when not in use (much as e-paper avoids bleeding power into constant screen refreshes). When functioning normally, this was an irrelevant technical detail, of course; the displays displayed what you wanted them to display. But after a catastrophe that disrupted network connectivity and ruined a lot of computers, this detail is important, because many of the displays were still showing static snapshots of a language intended to use motion and interactivity as ways to convey information.

As the player wandered through the environment, they would find some systems that were still active, and my intent was (or “is”, I suppose, since I do still hold out hope that I’ll eventually actually make some version of this...) that the player would come to see the static, dysfunctional environment around them as melancholy, and set about restoring function to as many of these devices as possible in order to bring the environment back to life. Some of this would be represented quite concretely, as time-travel puzzles later in the game would actually allow the player to mitigate aspects of the catastrophe that broke everything in the first place, thereby “resurrecting” NPCs by preventing their disappearance or death in the first place.



Coen refers to the self, the physical body, the notion of “personhood” abstractly. The minified / independent version is an ideogram for just the head, but the full version as it is presented in the “glyph” ideogram is a human body: the crook at the top is the head (facing right); the line through the middle represents the arms, and the line going down represents the legs and feet.

This is the least ambiguous and nuanced of all the symbols. The one nuance is that if used in its full form with no accompanying ideograms, it means “corpse”, since a body which can’t do anything isn’t really a person any more.



This is the trickiest ideogram to pronounce. The “ks” is meant to be voiced as a “click-hiss” noise, the “e” has a flat tone like a square wave from a synthesizer, and the “t” is very clipped. It is intended to reference the power-on sound that some of the earliest (remember: 10s of thousands of years before the main story, so it’s not like individuals have a memory of the way these things sounded) digital computers in Divuthan society made.

Honestly though if you try to do this properly it ends up sounding a lot like the English word “cassette”, which I assure you is fitting but completely unintentional.

Kset refers to algorithms and computer programs, but more generally, thought and the life of the mind.

This is a reference to the “Ee” spell power rune in the 80s Amiga game Dungeon Master; sadly, I can’t find any online record of how the manual described it. It is an object poised on a sharp edge, ready to roll either left or right - in other words, a symbolic representation of a physical representation of the algorithmic concept of a decision point, or the computational concept of a branch or jump instruction.



Edec refers to connectedness. It is an ideogram reflecting a social graph, with the individual below and their many connections above them. It’s the general term for “social relationship” but it’s also the general term for “network protocol”. When Divuthan kids form relationships, they often begin by configuring a specific protocol for their communication.

This is how boundary-setting within friendships and work environments (and, incidentally, flirting) works; they use meta-protocol messages to request expanded or specialized interactions for use within the context of their dedicated social-communication channels.

Unlike most of these other ideograms, its pronunciation is not etymologically derived from an onomatopoeia, but rather from an acronym identifying one of the first social-communication protocols (long since obsoleted).



“Zenk” is the ideogram for creation. It implies physical, concrete creations but denotes all types of creation, including intellectual products.

The ideogram represents the Divuthan version of an anvil, which, due to certain quirks of Divuthan materials science that are beyond the scope of this post, doubles as the generic idea of a “work surface”. So you could also think of it as a desk with two curved legs. This is the only ideogram which represents something still physically present in modern, pre-catastrophe Divuthan society. In fact, workshop surfaces are often stylized to look like a Zenk radical, as are work-oriented computer terminals (which are basically an iPad-like device the size of a dinner table).

The pronunciation, “Zenk”, is an onomatopoeia, most closely resembled in English by “clank”; the sound of a hammer striking an anvil.



“Lesh” is the ideogram for communication. It refers to all kinds of communication - written words, telephony, video - but it implies persistence.

The bottom line represents a sheet of paper (or a mark on that sheet of paper), and the diagonal line represents an ink brush making a mark on that paper.

This predates the current co-evolutionary technological environment, because, appropriately for a society featured in a text-based adventure game, the dominant cultural groups within this civilization developed a shared obsession with written communication and symbolic manipulation before they had access to devices which could digitally represent all of it.

All Together Now

There is an overarching philosophical concept of “person-ness” that this glyph embodies in Divuthan culture: although individuals vary, the things that make up a person are being (the body, coen), thinking (the mind, kset), belonging (the network, edec), making (tools, zenk) and communicating (paper and pen, lesh).

In summary, if a Divuthan were to see my little unadorned avatar icon next to something I have posted on twitter, or my blog, the overall impression that it would elicit would be something along the lines of:

“I’m just this guy, you know?”

And To Answer Your Second Question

No, I don’t know how it’s pronounced. It’s been 18 years or so and I’m still working that bit out.

by Glyph at January 12, 2015 09:00 AM

January 09, 2015

Reinventing Business

The Winter Tech Forum, Feb 23 - 27 in Crested Butte

Here's the announcement, or you can go directly to the site.

by Bruce Eckel at January 09, 2015 07:05 PM

January 08, 2015

Giles Bowkett

Versioning Is A Nuanced Social Fiction; SemVer Is A Blunt Instrument

David Heinemeier Hansson said something relatively lucid and wise on Twitter recently:

To his credit, he also realized that somebody else had already said it better.

Here's the nub and the gist of Jeremy Ashkenas's Gist:

SemVer tries to compress a huge amount of information — the nature of the change, the percentage of users that will be affected by the change, the severity of the change (Is it easy to fix my code? Or do I have to rewrite everything?) — into a single number. And unsurprisingly, it's impossible for that single number to contain enough meaningful information...

Ultimately, SemVer is a false promise that appeals to many developers — the promise of pain-free, don't-have-to-think-about-it, updates to dependencies. But it simply isn't true.

It's extremely worthwhile to read the whole thing.

Here's how I see version numbers: they predate Git, and Git makes version numbers pretty stupid if you take those numbers literally, because we now use hashes like 64f2a2451381c80dff1 to identify specific versions of our code bases. Strictly speaking, version numbers are fictional. If you really want to know what version you're looking at, the answer to that question is not a number at all, but a Git hash.

But we still use version numbers. We do this for the same reason that, even if we one day replace every car on the road with an error-proof robot which is only capable of perfect driving, we will still have speed limits, brake lights, and traffic signs. It's the same reason there's an urban legend that the width of the Space Shuttle ultimately derives from the width of roads in Imperial Rome: systems often outlive their original purposes.

Version numbers were originally used to identify specific versions of a code base, but that hasn't been strictly accurate since the invention of version control systems, whose history goes back at least 43 years, to 1972. As version control systems became more and more fine-grained, version numbers diverged further and further from the actual identifiers we use to index our versioning systems, and thus "version numbers" became more and more a social fiction.

Note that this is not necessarily a bad thing. Money is a social fiction, and an incredibly useful one. But SemVer is an attempt to treat the complexities of a social fiction as if they were very deterministic and controlled.

They are not.

Which means SemVer is an attempt to brutally oversimplify an inherently complex problem.

There's a lot of good commentary on these complexities. Justin Searls gave a very good presentation which goes into why these problems are inherently complex, and inherently social.

I'm not saying that I don't think SemVer's goals are important. But I do think SemVer's a clumsy replacement for nuanced versioning, and an incomplete answer for "how do we demarcate incompatibility risks in systems made up of extremely numerous libraries written by extremely numerous people?"

Because version numbers are a social fiction, entirely distinct from the "numbers" we use to actually version our software in modern version control systems, choosing new version numbers is primarily a matter of communicating with your users. Like all communication, it is inherently complex and nuanced. If it is possible at all to reliably automate the communication of nuance, the medium of communication will probably not be a trio of numbers, because the problem space simply has far more dimensions than three.

But for the same reason, I kind of think version numbers verge on ridiculous whether they're trying to color within the SemVer lines or not. There's only so much weight you can expect a social fiction to carry before it cracks at the seams and falls apart. Even the idea of a canonical repo is a little silly in my opinion.

You can see why the canonical repo is a mistake if you look at a common antipattern on GitHub: a project is abandoned, but development continues within multiple forks of the project. Which repo is now canonical? You have to examine each fork, and discover how well it keeps up with the overall, now-decentralized progress of the project. You'll often find that Fork A does a better job with updates related to one aspect of the project, while Fork B does a better job with updates related to another aspect. And it's a manual process; no GitHub view exists which will make it particularly easy for you to determine which of the still-in-progress forks are continuing ahead of the "canonical" repo.

At the very least, in a situation like this, you have to differentiate between the original repo and the canonical one. I think that much is indisputable. But I'd argue also that the basic idea of a canonical repo operates in defiance of the entire history of human language. In fact, rumor has it that GitHub itself runs on a private fork of Rails 2, which illustrates my point perfectly, by constituting a local dialect.

(Update: GitHub ran on a private fork of Rails 2 for many years, but moved to Rails 3 in September 2014. Thanks to Florian Gilcher for the details.)

I'd like to see some anthropologists and linguists research our industry, because the modern dev world, with its countless and intricately interwoven dependencies, presents some really complex and subtle problems.

by Giles Bowkett at January 08, 2015 09:21 AM

January 07, 2015

Giles Bowkett

One Major Difference Between Clojure And Common Lisp

In the summer of 2013, I attended an awesome workshop called WACM (Workshop on Algorithmic Computer Music) at the University of California at Santa Cruz. Quoting from the WACM site:

Students will learn the Lisp computer programming language and create their own composition and analysis software. The instruction team will be led by professor emeritus David Cope, noted composer, author, and programmer...

The program features intensive classes on the basic techniques of algorithmic composition and algorithmic music analysis, learning and using the computer programming language Lisp. Students will learn about Markov-based rules programs, genetic algorithms, and software modeled on the Experiments in Musical Intelligence program. Music analysis software and techniques will also be covered in depth. Many compositional approaches will be discussed in detail, including rules-based techniques, data-driven models, genetic algorithms, neural networks, fuzzy logic, mathematical modeling, and sonification. Software programs such as Max, Open Music, and others will also be presented.

It was as awesome as it sounds, with some caveats; for instance, it was a lot to learn inside of two weeks. I was one of a very small number of people there with actual programming experience; most of the attendees either had music degrees or were in the process of getting them. We worked in Common Lisp, but I came with a bunch of Clojure books (in digital form) and the goal of building stuff using Overtone.

I figured I could just convert Common Lisp code almost directly into Clojure, but it didn't work. Here's a gist I posted during the workshop:

This attempt failed for a couple different reasons, as you can see if you read the comments. First, this code assumes that (if (null list1)) in Common Lisp will be equivalent to (if (nil? list1)) in Clojure, but Clojure doesn't consider an empty list to have a nil value. Secondly, this code tries to handle lists in the classic Lisp way, with recursion, and that's not what you typically do in Clojure.

Clojure's reliance on the JVM makes recursion inconvenient. And Clojure uses list comprehensions, along with very sophisticated, terse destructuring assignments, to churn through lists much more gracefully than my Common Lisp code above. Those 7 lines of Common Lisp compress to 2 lines of Clojure:

(defn build [seq1 seq2]
  (for [elem1 seq1 elem2 seq2] [elem1 elem2]))

A friend of mine once said at a meetup that Clojure isn't really a Lisp; it's "a fucked-up functional language" with all kinds of weird quirks which uses Lisp syntax out of nostalgia more than anything else. To me, those quirks aren't enough to earn Clojure that judgement, which was kinda harsh. I think I like Clojure more than he does. But, at the same time, if you're looking to translate stuff from other Lisps into Clojure, it's not going to be just copying and pasting. Beyond inconsequential, dialect-level differences like defn vs. defun, there are deeper differences which steepen the learning curve a little.

by Giles Bowkett at January 07, 2015 10:34 AM

January 02, 2015

Greg Linden

Quick links

Some of the best of what I've been thinking about lately:
  • Tiny cheap satellites will provide near real-time imagery of the entire Earth to anyone who wants it, starting in about a year ([1] [2] [3])

  • Amplifying motion and color changes in video, which allows augmented perception ([1] [2])

  • Birds can hear the very low frequency sound produced by severe weather and are able to flee well in advance of incoming storms ([1])

  • Nice example of blending computer science with another field, in this case genealogy, to yield big new gains ([1])

  • "An energy gradient 1000 times greater than traditional particle accelerators" ([1])

  • People "don't want to watch commercials, are fleeing networks, hate reruns, are increasingly bored by reality programming, shun print products and, oh, by the way, don’t want to pay much for content either. Yikes." ([1] [2])

  • Everything we know Google is working on ([1])

  • Funny and informative: "Riding in a Google Self-Driving Car" ([1])

  • Google is rejecting security based on firewalls ([1] [2] [3])

  • "Whether you call it a Star Trek Universal Translator or Babel fish, Microsoft is building it, and it's incredible." ([1])

  • "Every dollar a worker earns in a research field spills over to make the economy $5 better off. Every dollar a similar worker earns in finance comes with a drain, making the economy 60 cents worse off." ([1])

  • "I’m a big believer in making effectively infinite computing resources available internally ... [Give] teams the resources they need to experiment ... All employees should be limited only by their ability rather than an absence of resources or an inability to argue convincingly for more." ([1])

  • "We think of it as a one-on-one tutor. It will test you and generate a personal lesson plan just for you." ([1])

  • "Apparently, a sufficient number of puppies can explain any computer science concept. Here we have multithreading:" ([1])

  • Fantastic to see a US president promoting computer programming to kids: "Becoming a computer scientist isn't as scary as it sounds. With hard work and a little math and science, anyone can do it." ([1])

by Greg Linden at January 02, 2015 02:10 PM

December 31, 2014

Giles Bowkett

James Golick, Rest In Peace

A pic I took of James with Matz in 2010:

I met James at a Ruby conference, probably in 2008. Later, I stopped going to Ruby conferences, but we stayed in touch via email and text and very occasionally Skype. In 2011, I probably sent more drunk emails and/or texts to James than to any other person. Not 100% sure, I don't have precise statistics on this, for obvious reasons, but I hope it paints a picture.

I have very specific dietary restrictions that make travel a real pain in the ass for me, but I figured out some workarounds, and last October I went to New York for a Node.js conference. While there, I met up with James for drinks with a few other people from the Ruby world. The next day I dropped by his office because he wanted to show off his showroom. It was pretty awesome. He was stoked about his new job as CTO of Normal Ears, as well as his new apartment, and his relocation to New York in general. With a sometimes cynical sense of humor and a badass attitude, he was kind of like a born New Yorker. Like somebody who had finally found their ideal habitat.

The last thing James ever said to me was that it had been 4 years since we had last hung out in person, and I shouldn't make it four more. I made a mental note to figure out some excuse to come back to New York in 2015.

I really wish I was at his funeral right now.

Although I've met a ton of really smart people throughout my life, there have been very few that I ever really bothered to listen to, probably owing to my own numerous and severe personality problems. But I listened to James, I think more so than he guessed. After talking to James about jazz, I spent weeks and weeks on the harmonies and melodies in the music I made. After spying on his Twitter conversations about valgrind, I went and learned C. James was the only skeptic on Node.js I ever bothered taking seriously.

He was the best kind of friend: I would always hold myself to higher standards after talking to him.

Honestly, I cannot fucking comprehend his absence. It feels like some insane hoax. And although he was a good friend, he was a light presence in my life. For others, it must be so much worse. Utmost sympathies to his family and his other friends. This was an absolutely terrible loss.

by Giles Bowkett at December 31, 2014 10:57 AM

December 30, 2014

Ian Bicking


This year I’m starting to understand what it is to be middle aged. I think I became middle aged in 2011, but this year maybe I know what that is.

When I was young I viewed middle age through the lens of a young person. I would think: to be middle aged is all the things I’m not right now. To never be young again. To have many fewer Firsts ahead of me. And yes, I envy the idle freedom of my youth. To wander aimlessly.

But now, here, I am learning what middle age is, not just what it is not.

Death. I am losing friends, family. I am losing people who to me were permanent. Not rationally permanent, but still permanent. But this death is only the tip of the iceberg. To grow old… here, I can now catch glimpses of what it means. Either this death is just the tip of the iceberg, or I will be the tip of someone else’s iceberg. Both are possible.

Death and responsibility. I’m now the father of two. Many young people are the father of as many or more children than I. They are all middle aged, but many are too young to know it. I am more than old enough to know it; these are responsibilities that can never be shed. Having children has only revealed to me my real responsibilities… to family, to friends, to my community, even my responsibility to the missing communities, the missing friendships, the missed relations.

Death and responsibility and humility. I will never meet my responsibilities; I and everyone I know will die; after that nothing can be fixed. This is the foundation of my humility. It’s not my fault. To be humble is not to be ashamed or guilty. It’s to know I am only so tall, so strong, so brave: no matter how much I may accomplish all I do is finite and any quality I have is so much smaller than the world.

But I’m alive. If I’m halfway through, I’m still but half of what I’ll be. I am all of what I know. There is still a great mystery awaiting me.

And children… the responsibility is only as heavy as their import. In them I am part of a legacy that goes back before humanity, a legacy that defines meaning itself. Of course it is heavy. It isn’t easy, this responsibility is not intended to make me happy, in it I learn that happiness is itself small.

And so I am humble. I bow before a world that owes me nothing. And of all that I ask of the world, little will be delivered. That little will be my everything. Here I stand before half of my everything and it is more than I’ll ever know and ever could know. I was never so young that I could know it, even my ignorance is too vast for me to know. I don’t even know where I stand, but maybe I know I am standing. This is my middle age.

In loving memory of my grandmother, Jeanetta Bicking, 1925-2014

by Ian Bicking at December 30, 2014 06:00 AM

December 26, 2014


Better Cubic Bezier Approximations for Robert Penner Easing Equations

To create motion on the screen, there are various approaches: you could hardcode the values, or use a physics engine, for example. One popular and proven approach is animation with Robert Penner’s easing equations. More recently, animation using easing functions specified as cubic-bezier coordinates has been gaining popularity, especially for web development with CSS. In this post I’d like to share how I’ve “eased” Robert Penner’s easing equations into cubic-bezier easing functions.

TL;DR? Check out the interactive example here.

Easing (also referred to as tweening) is an interesting topic, and there are plenty of good articles about it on the net. You are likely familiar with it if you have done some animation (be it javascript, actionscript, css, glsl, flash, after effects); otherwise, check out this good read.

Easing Equations
So you understand easing, but you may not know who Robert Penner is or what his easing equations are. For that, I would suggest reading the chapter from his book about Motion, Tweening, and Easing. It may be old, but it is probably what popularized, or at least influenced, the way programmers go about coding and thinking about animation. Penner’s equations have been implemented in various languages and for various platforms (and where they haven’t been, it’d be easy to). Pick any popular animation library and it’s probably already using his easing equations.

For myself, I love the tween.js implementation of the easing equations, not just because mrdoob uses it, but because it concisely simplifies the equations to a single factor k which you can easily use anywhere (and there are others who are rediscovering that this can be done).
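To make that concrete, here is a sketch of what “a single factor k” looks like for the quadratic family (my own function names, not tween.js’s actual API, though the shapes match what tween.js-style libraries compute):

```javascript
// Penner-style easing reduced to a single factor k, where k = elapsed / duration
// runs from 0 to 1. These reproduce the quadratic ease shapes.
const Quad = {
  In:    k => k * k,
  Out:   k => k * (2 - k),
  InOut: k => (k < 0.5 ? 2 * k * k : 1 - 2 * (1 - k) * (1 - k)),
};

// Interpolating any property then reduces to one line:
function tween(start, end, ease, k) {
  return start + (end - start) * ease(k);
}
```

Because the easing function only ever sees k, the same equation works for positions, colors, opacity, or anything else you can interpolate.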

Cubic Bezier Curves
Cubic bezier curves are used in many applications, especially in graphics, motion graphics, and animation software (to list some: Illustrator, Sketch, After Effects). For a very simple introduction to cubic bezier curves, you can watch this video. For some reason, I find cubic beziers both simple and complex – a huge range of possibilities can be created with just 2 control points.

Cubic Bezier Tools
That would explain the number of popular cubic bezier tools for creating CSS animations. It was probably Matthew Lein’s Ceaser tool that introduced the concept of approximating Penner’s equations with cubic beziers (tweaked by hand, if I recall correctly from a Twitter conversation with Lea Verou).

I eventually found that the Ceaser easing functions had made it into places like LESS CSS mixins and Sass/SCSS, so I started extracting these values into a JSON format to make them easy to use in javascript.

The next thing I did was try plotting the cubic beziers on canvas and adding animations for visualization. What I observed was that Ceaser’s parameters typically followed the shape of Penner’s equations, but the resulting values were still an approximation. I stacked the graphs of Robert Penner’s equations from tween.js against Ceaser’s, and there were some deviations, which are amplified in side-by-side animations or when charted on a larger canvas. One way to improve these easing functions would be to import them into a tool and tweak the control points until they match up more closely.

Fortunately, I found an Objective-C library called CustomMediaTimingFunction that also tries to approximate Robert Penner’s easing equations. My guess is that native cubic bezier support via CAMediaTimingFunction on iOS/Mac was another reason someone wanted to convert Penner’s equations to cubic bezier functions.

As with Ceaser’s equations, I extracted them to javascript and drew them on the canvas alongside Penner’s equations. There were still some noticeable differences, but I was pleasantly surprised to find them more faithful to Penner’s equations. I also found that KinkumaDesign’s equations were pretty far off for the ease-out equations (possibly due to a different definition of what ease-out is). Another difference was that, in addition to ease-in-out, there were ease-out-in easing functions. So overall KinkumaDesign’s was pretty good, and perhaps all I needed to do was hand-tweak their values to make them better.

Curve Fitting
But I wasn’t satisfied with the thought of hand-tweaking these values, even with a tool, and started thinking about how I could fit a bezier curve to match Penner’s equations. I remembered a curve fitting algorithm by Philip J. Schneider (from the 1990 Graphics Gems article “An Algorithm for Automatically Fitting Digitized Curves”). I had actually ported his C code to JS before, but this time I just grabbed this gist, generated points from Penner’s equations, and tested whether the curve fitting algorithm worked. The result: the curve fitting generated bezier curves similar in shape to the originals, but the differences were too great for it to be accurate. I might have been able to tweak the curve fitting parameters, but I didn’t want to go down that route and decided to try the brute force approach.

Brute Force
I decided that while humans may not be precise when it comes to adjusting parameters, the computer might be fast enough to try all the different combinations. The idea I had for brute forcing is this: there are 2 control coordinates, for a total of 4 values. If we subdivide each value into, say, 10 steps, we have 10×10 = 100 possibilities for each control point, and an estimated 100 * 100 = 10,000 total combinations to check, which isn’t too bad. For each generated cubic bezier combination, I generate its range of values and compare it with the range of values generated by Penner’s equations. To compare the ranges, the differences are squared individually and then summed. Across all combinations, only the coordinates giving the least sum of squared differences are kept. I ran this with node.js; subdivisions of 10-20 ran pretty quickly. Subdivisions of 25 (giving precision of up to 0.04) started to give pretty accurate approximations, although the process got slightly slower. A subdivision of 50 units (0.02 precision) took a couple of minutes to finish. In case you’re interested, the related node.js script (just a couple hundred lines) is here.
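The search loop itself is tiny. Here is a minimal sketch of the idea (hypothetical helper names, not the actual script; this simplified variant compares the curves parametrically, sampling (x(t), y(t)) pairs, rather than comparing y as a function of x):

```javascript
// One component (x or y) of a cubic bezier whose endpoints are fixed at 0 and 1;
// p1 and p2 are the corresponding control-point values.
function bez(t, p1, p2) {
  const u = 1 - t;
  return 3 * u * u * t * p1 + 3 * u * t * t * p2 + t * t * t;
}

// Brute-force fit: try every control-point combination on a grid of
// `steps` subdivisions and keep the one with the least sum of squared
// differences against a reference Penner easing function.
function bruteForceFit(penner, steps) {
  let best = null, bestErr = Infinity;
  for (let a = 0; a <= steps; a++)
    for (let b = 0; b <= steps; b++)
      for (let c = 0; c <= steps; c++)
        for (let d = 0; d <= steps; d++) {
          const x1 = a / steps, y1 = b / steps, x2 = c / steps, y2 = d / steps;
          let err = 0;
          // Sample the candidate curve and compare y(t) against penner(x(t)).
          for (let t = 0; t <= 1.0001; t += 0.05) {
            const diff = bez(t, y1, y2) - penner(bez(t, x1, x2));
            err += diff * diff;
          }
          if (err < bestErr) { bestErr = err; best = [x1, y1, x2, y2]; }
        }
  return best; // [x1, y1, x2, y2]
}
```

For example, bruteForceFit(k => k * k, 25) searches the 0.04-precision grid for a QuadIn approximation.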

So here are the brute force results, which I think are pretty satisfactory:

QuadIn: [ 0.26, 0, 0.6, 0.2 ]
QuadOut: [ 0.4, 0.8, 0.74, 1 ]
QuadInOut: [ 0.48, 0.04, 0.52, 0.96 ]
CubicIn: [ 0.32, 0, 0.66, -0.02 ]
CubicOut: [ 0.34, 1.02, 0.68, 1 ]
CubicInOut: [ 0.62, -0.04, 0.38, 1.04 ]
QuartIn: [ 0.46, 0, 0.74, -0.04 ]
QuartOut: [ 0.26, 1.04, 0.54, 1 ]
QuartInOut: [ 0.7, -0.1, 0.3, 1.1 ]
QuintIn: [ 0.52, 0, 0.78, -0.1 ]
QuintOut: [ 0.22, 1.1, 0.48, 1 ]
QuintInOut: [ 0.76, -0.14, 0.24, 1.14 ]
SineIn: [ 0.32, 0, 0.6, 0.36 ]
SineOut: [ 0.4, 0.64, 0.68, 1 ]
SineInOut: [ 0.36, 0, 0.64, 1 ]
ExpoIn: [ 0.62, 0.02, 0.84, -0.08 ]
ExpoOut: [ 0.16, 1.08, 0.38, 0.98 ]
ExpoInOut: [ 0.84, -0.12, 0.16, 1.12 ]
CircIn: [ 0.54, 0, 1, 0.44 ]
CircOut: [ 0, 0.56, 0.46, 1 ]
CircInOut: [ 0.88, 0.14, 0.12, 0.86 ]

If you prefer to keep values clipped to 0..1, here are the parameters:

QuadIn: [ 0.26, 0, 0.6, 0.2 ]
QuadOut: [ 0.4, 0.8, 0.74, 1 ]
QuadInOut: [ 0.48, 0.04, 0.52, 0.96 ]
CubicIn: [ 0.4, 0, 0.68, 0.06 ]
CubicOut: [ 0.32, 0.94, 0.6, 1 ]
CubicInOut: [ 0.66, 0, 0.34, 1 ]
QuartIn: [ 0.52, 0, 0.74, 0 ]
QuartOut: [ 0.26, 1, 0.48, 1 ]
QuartInOut: [ 0.76, 0, 0.24, 1 ]
QuintIn: [ 0.64, 0, 0.78, 0 ]
QuintOut: [ 0.22, 1, 0.36, 1 ]
QuintInOut: [ 0.84, 0, 0.16, 1 ]
SineIn: [ 0.32, 0, 0.6, 0.36 ]
SineOut: [ 0.4, 0.64, 0.68, 1 ]
SineInOut: [ 0.36, 0, 0.64, 1 ]
ExpoIn: [ 0.66, 0, 0.86, 0 ]
ExpoOut: [ 0.14, 1, 0.34, 1 ]
ExpoInOut: [ 0.9, 0, 0.1, 1 ]
CircIn: [ 0.54, 0, 1, 0.44 ]
CircOut: [ 0, 0.56, 0.46, 1 ]
CircInOut: [ 0.88, 0.14, 0.12, 0.86 ]



  • ease-in and ease-out typically fit pretty well
  • It is more difficult to fit the higher-powered ease-in-out equations, e.g. QuintInOut. Sometimes the best fit creates a little bounce at the edges, in which case it is better to use the clipped parameters or adjust the control points a little. There are also certain easing equations, e.g. bounce, which are not directly portable; for those cases, it might be better to chain multiple cubic-bezier easing functions.
  • getYforX() for a cubic bezier easing function is not simply P0·(1−u)³ + 3·P1·u·(1−u)² + 3·P2·u²·(1−u) + P3·u³. X needs to be re-parameterized to t before getting Y. A couple of npm modules are available if you’re lazy to implement this.
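That reparameterization can be done numerically. Since x(u) is monotonic whenever the control x-values lie in [0, 1], a simple bisection recovers u from x (a minimal sketch; the helper name is mine):

```javascript
// Build an easing function y = f(x) from cubic bezier control points
// (x1, y1) and (x2, y2), with endpoints fixed at (0, 0) and (1, 1).
function cubicBezierEasing(x1, y1, x2, y2) {
  const bez = (u, p1, p2) => {
    const v = 1 - u;
    return 3 * v * v * u * p1 + 3 * v * u * u * p2 + u * u * u;
  };
  return function (x) {
    // Solve x(u) = x by bisection (valid because x(u) is monotonic here),
    // then evaluate y at that u.
    let lo = 0, hi = 1;
    for (let i = 0; i < 50; i++) {
      const mid = (lo + hi) / 2;
      if (bez(mid, x1, x2) < x) lo = mid; else hi = mid;
    }
    return bez((lo + hi) / 2, y1, y2);
  };
}

// e.g. the standard CSS ease-in-out curve:
const easeInOut = cubicBezierEasing(0.42, 0, 0.58, 1);
```

Bisection is the lazy-but-robust choice; production implementations typically use Newton iteration with a bisection fallback for speed.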

Possible Improvements

  1. smarter brute force – if we were to create more subdivisions for more accurate results, the entire brute force process would take exponentially longer. To reduce this time, we could a) run on multiple processors, b) run it on the GPU, or c) be a little smarter about which coordinates to run brute force on
  2. smarter fitting – perhaps there is a better curve fitting algorithm that would make this brute force approach less relevant, e.g. this

While this may be an academic exercise, I don’t think it’s really necessary right now. It’s probably more important to think about what we can do with it, and where we can go from here.

Conclusion & Going Forward
So in this post I’ve written a new cubic bezier tool, suggested a new set of cubic bezier approximations for Robert Penner’s easing equations, and shown a little of the process.

What I’ve done here might simply be an improvement over Ceaser’s and KinkumaDesign’s parameters, or you could think of it as a “purer” cubic bezier form of Penner’s implementation. I started thinking about all this because of the possibility of using Penner’s equations while keeping them easily editable as curves in my animation tool Timeliner. There might be other curves and splines to consider, but allowing Penner’s equations to map to cubic bezier curves is a start. There is also a thread on better integrating editable and non-editable easing in Blender which is worth considering.

Hopefully someone finds this useful. The code is on github. Sorry if it looks like a mess; it was hacked together overnight.

Merry Christmas and have a happy new year!

by Zz85 at December 26, 2014 03:35 PM

December 22, 2014

Bret Victor


Enzymatic interaction

It is traditional for me, around this time of year, to provide holiday (brain) candy for folks in my technical community. This year's offering comes from a conversation with Marius Buliga about chemlambda in which he referred to β-reduction in the λ-calculus as an enzyme. This got me to thinking about explicitly modeling the COMM rule in the π-calculus as an enzyme. It turns out you can do this quite easily and it has a lot of applications!

For simplicity, i'll use the reflective, higher-order π-calculus, but extend the basic calculus with a primitive process, COMM:

P, Q ::= 0 | COMM | x?( y )P | x!( P ) | *x | P|Q
x, y ::= @P

(Note that i've adopted a slightly friendlier syntax since i first developed the reflective π-calculus. Instead of writing « » and » «, i now write @P and *x. This echoes the C and C++ programming language notation for acquiring a reference and dereferencing a pointer, respectively -- which is the key insight of the reflective calculus. We can get a kind of pointer arithmetic for channel names if we treat them as quoted process terms.)

Structural equivalence includes alpha-equivalence and makes ( P, | , 0 ) a commutative monoid. Of interest to us is
COMM | ( P | Q ) = ( COMM | P ) | Q = P | ( COMM | Q )

So, COMM mixes into a parallel composition.

Notice that COMM cannot penetrate an abstraction barrier, i.e. 

x?( y )P | COMM != x?( y )( P | COMM )
Enzymatic reduction in reflective π-calculus

There is an interesting design choice as to whether

x!( Q ) | COMM = x!( Q | COMM ) 

Treating COMM as enzymatic, the COMM-rule

x?( y )P | COMM | x!( Q ) → P{ @Q/y } | COMM 

results in the standard dynamics, if there is just one COMM process in the expression. If there are more COMM processes, then we get a spectrum of so-called true concurrency semantics, with the usual true concurrency semantics identified with the limiting equation

COMM = COMM | COMM

which allows us to saturate a parallel composition with COMM in every place where an interaction could take place.

Writing COMM^i for COMM | ... | COMM ( i times ), another set of dynamics arises from rules of the form

x?( y )P | COMM^n | x!( Q ) → P{ @Q/y } | COMM^m

with m < n

This bears resemblance to the Ethereum platform's idea of "gas" for a smart contract's calculation. Mike Stay points out that associating a cost with communication makes it look like an action in physics. Notice that you can get some very interesting dynamics with COMM-consuming interaction rules in the presence of COMM-releasing abstractions.
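To make the gas-like reading concrete, here is a toy sketch (my own illustration, not part of the calculus's formal presentation): a "soup" holds pending inputs, outputs, and a count of COMM tokens, and each interaction on a shared channel consumes one token, so reduction stalls when the gas runs out.

```javascript
// Toy sketch of COMM-as-gas (illustrative only): channels are strings,
// an input on a channel holds a continuation, an output holds a payload.
function step(soup) {
  if (soup.comm < 1) return false;              // no enzyme/gas left
  for (var i = 0; i < soup.inputs.length; i++) {
    for (var j = 0; j < soup.outputs.length; j++) {
      if (soup.inputs[i].chan === soup.outputs[j].chan) {
        var inp = soup.inputs.splice(i, 1)[0];
        var out = soup.outputs.splice(j, 1)[0];
        soup.comm -= 1;                         // COMM^n ... → COMM^(n-1)
        inp.body(out.payload);                  // P{ @Q/y }
        return true;
      }
    }
  }
  return false;                                 // no redex available
}

// Example: two pending interactions but only one unit of COMM,
// so only one of them can fire before the soup is stuck.
var soup = {
  comm: 1,
  inputs: [
    { chan: 'x', body: function (q) { soup.log.push('got ' + q); } },
    { chan: 'y', body: function (q) { soup.log.push('got ' + q); } }
  ],
  outputs: [
    { chan: 'x', payload: 'P1' },
    { chan: 'y', payload: 'P2' }
  ],
  log: []
};
while (step(soup)) {}
```

After the loop, one input/output pair remains frozen for lack of COMM, which is exactly the "spectrum" behaviour described above.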

One example interpretation that i think works well and provides significant motivation is to treat COMM as a compute node. A finite number of COMMs represent a finite number of nodes on which processes can run. Likewise, it is possible to model acquisition and release of nodes with cost-based COMM-rules and certain code factorings. For example, call a process inert if P | COMM cannot reduce and purely inert if P inert and P != Q | COMM for any Q. Now, suppose

COMM: x?( y )P | COMM^n | x!( Q ) → P{ @( Q | COMM )/y } | COMM^(n-1)

and only allow purely inert processes in output position, i.e. x!( Q ) with Q inert. Then interaction acquires a node, and dereference, *x, releases one. For example,

x?( y )( *y ) | COMM | x!( Q ) →
*@( COMM | Q ) 

In terms of possible higher category theory models, the reification of the 2-cell-like COMM-rule as a 1-cell-like COMM process ought to provide a fairly simple way to make one of Mike Stay's recent higher category models work without suffering too many reductions, which has been the principal stumbling block of late. Notice that in this setting we can treat x?( y )( COMM | P ) as an explicit annotation by the programmer that interaction under an abstraction is admitted. If they want to freeze abstraction contexts, they just don't put COMM under an abstraction where an interaction could take place. In more detail

x?( y )( u?( v )P | COMM | u!( Q ) )

constitutes a programmer-supplied annotation that it's okay to reduce under the ( y )-abstraction, while

x?( y )( u?( v )P | u!( Q ) )

is an indication that the term under this abstraction is frozen, until it is released by communication on x.

Compositional interpretations of physical processes

This very simple insight of Marius's, that interaction is enzymatically enabled, creates a wide variety of dynamics with a broad range of applications! Merry Christmas to all and a Happy New Year!

by leithaus at December 22, 2014 11:53 AM


December 17, 2014

Giles Bowkett

RobotsConf 2014: ZOMG, The Swag

At Empire Node, the swag bag included a portable USB charger. It kind of boggled my mind, because it was almost the only truly useful thing I'd ever received in a swag bag. That was a few months ago.

Since then, my concept of conference swag has kind of exploded. The swag bag at RobotsConf was insane. In fact, the RobotsConf freebies were already crazy delicious before the conf even began.

About a month before the conference, RobotsConf sent a sort of "care package" with a Spark Core, a Spark-branded Moleskine notebook (or possibly just a Spark-branded Moleskine-style notebook?), and a RobotsConf sticker.

At the conference, the swag bag contained a Pebble watch, an Electric Imp, an ARDX kit (Arduino, breadboard, speaker, dials, buttons, wires, resistors, and LEDs), a SumoBot kit (wheels, servos, body, etc.), a little black toolbox, a Ziplock bag with several AA batteries, and a RobotsConf beanie. There were a ton of stickers, of course, and you could also pick up a bright yellow Johnny Five shirt.

Many people embedded LEDs in the eyes of their RobotsConf hats, but I wasn't willing to risk it. I live in an area with actual snow these days, so I plan to get a lot of practical usefulness out of this hat.

Spark handed out additional free Spark Cores at the conference, so I actually came home with two Spark Cores. This means, in a sense, that I got five new computers out of this: the Pebble, the Arduino in the ARDX kit, the Electric Imp, and both Spark Cores. Really just microcontrollers, but still exciting. And of these five devices, the Pebble can connect to Bluetooth, while the Imp and Spark boxes can connect to WiFi.

Technical Machines didn't include Tessel microcontrollers in the swag bag, but they did set up a table and loan out a bunch of microcontrollers and modules to play with. I saw one developer code up a point-and-shoot camera in CoffeeScript, in about a minute. (The Tessel runs JavaScript, including most Node code.)

Likewise, although you couldn't take them home on the plane with you, there were a bunch of 3D printers you could experiment with. All in all, an amazing geeky playground. The only downside is that it presented a tough act to follow for Santa Claus (and/or Hanukkah Harry).

by Giles Bowkett at December 17, 2014 05:03 PM

December 12, 2014

Giles Bowkett

Nodevember: Make Art, Not Apps

I nearly went to this conference, but I'm aiming for a max of one conf every two months (and even this schedule will likely let up after a while). I even kicked around the idea of sponsoring it, because it looked cool. Turns out, it was cool.

by Giles Bowkett at December 12, 2014 04:24 PM

December 10, 2014

Terry Jones

I go shopping for a compass, then my Sonos decides it needs one too

Last night I spent some time online looking to buy a compass. I looked at many of the Suunto models. Also yesterday I installed Little Snitch after noticing that an unknown gamed process wanted to establish a TCP/IP connection.

Anyway… a few minutes ago, 10 or 11 hours after I eventually bought a compass, a message pops up from Little Snitch telling me that the Mac OS X desktop Sonos app is trying to open a connection. See the image on the right (click for the full-sized version).


Can someone explain how that works? Either Little Snitch is mightily confused or… or what? Has the Sonos app been digging around in my Chrome browser history or cache or cookie jar? Is Chrome somehow complicit locally? Or is it something with cookies (but that would require the Sonos app to be accessing and sending cookies stored by Chrome)? Or… what?

And why that address? There’s an HTTP server there, but its / resource is not very informative:

$ curl -S -v
* Rebuilt URL to:
* Hostname was NOT found in DNS cache
*   Trying
* Connected to ( port 80 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.37.1
> Host:
> Accept: */*
< HTTP/1.1 404 Not Found
* Server Apache is not blacklisted
< Server: Apache
< Content-Type: text/html; charset=iso-8859-1
< Date: Wed, 10 Dec 2014 16:02:47 GMT
< Content-Length: 16
< Connection: keep-alive

* Connection #0 to host left intact
File not found."⏎

Unfortunately, Little Snitch doesn't tell me the full URL that the Sonos app was trying to access.

Anyone care to speculate what's going on here?

by terry at December 10, 2014 04:12 PM

December 09, 2014

Giles Bowkett

RobotsConf 2014: Simple Hack with Leap Motion and Parrot Drone

RobotsConf was great fun, and oddly enough, it reminded me of Burning Man in two ways.

First, it was overwhelming, in a good way. Second, there was so much to do that the smart way to approach it is probably the same as the smart way to approach Burning Man: go without a plan the first year, make all kinds of crazy plans and projects every subsequent year.

The conf was split between a massive hackerspace, a roughly-as-big lecture space, and a small drone space. You could hop between any or all of these rooms, or sit at various tables outside them. The table space was half hallway, half catering zone. Outside, there were more tables, and people also set up a rocket, a small servo-driven flamethrower (consisting of a Zippo and a can of hairspray), and a kiddie pool for robot boats.

I arrived with no specific plans, and spent most of the first day in lectures, learning the basics of electronics, robotics, and wearable tech. But I also took the time, that first day, to link up a Parrot AR drone with a Leap Motion controller.

Sorry Rubyists - the code on this one was all Node.js. Artoo has support for both the Leap Motion and the Parrot AR, but Node has embraced hardware hacking, where Ruby (except for Artoo) kinda remains stuck in 2010.

I started with this code, from the node-ar-drone README:

With this, I had the drone taking off into the air, wandering around for a bit, doing a backflip (or I think, more accurately, a sideflip), and then returning to land. Then I plugged in the Leap Motion, and, using Leap's Node library, it was very easy to establish basic control over the drone. The control was literally manual, in the classic sense of the term - I was controlling a flying robot with my hand.

Here's the code:
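A minimal sketch of the control logic (illustrative only; the function name and thresholds are mine, not the original gist's), kept as a pure mapping from a Leap Motion frame to a drone command so it can be read and tested without hardware:

```javascript
// Illustrative sketch: map a Leap Motion frame to a drone command.
// In the running script this would be called from leapjs's
// Leap.loop() callback and drive the node-ar-drone client's
// takeoff()/up()/down()/land() methods.
function commandFor(frame, flying) {
  if (frame.hands.length === 0) {
    return flying ? 'land' : 'idle';   // hand removed: land
  }
  if (!flying) {
    return 'takeoff';                  // hand appeared: take off
  }
  // palmPosition is [x, y, z] in millimeters; y is height above the Leap
  var height = frame.hands[0].palmPosition[1];
  if (height > 250) return 'up';       // hand held high: ascend
  if (height < 150) return 'down';     // hand lowered: descend
  return 'hover';
}
```

The X and Z coordinates of the palm come from the same palmPosition array, which is why adding turning or forward movement is a small change.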

As you can see, it's very straightforward.

With this code, when you first put your hand over the Leap Motion, the drone takes off. If you hold your hand high above the Leap, the drone ascends; if you lower it, the drone descends. If you take your hand away completely, the drone lands.

The frame.gestures.forEach bit was frankly just a failure. I got no useful results from the gesture recognition whatsoever. I want to be fair to Leap, though, so I'll just point out that I hacked this whole thing together inside about twenty minutes. (Another caveat, though: that twenty minutes came after about an hour or so of utter puzzlement, which ended when I enabled "track web apps" in the Leap's settings, and I got a bunch of help on this from Andrew Stewart of the Hybrid Group.)

Anyway, I had nothing but easy sailing when it came to the stuff which tracks pointables and obtains their X, Y, and Z co-ordinates. I ran off to a lecture after I got this far, but it would be very easy to add turning based on X position, or forward movement based on Z position. If you read the code, you can probably even imagine how that would look. Also, if I'd had a bit more time, I think I probably could have synced flip animations to gestures.

In fact, I've lost the link, but I believe I saw that a pre-teen girl named Super Awesome Maker Sylvia did some of these things at RobotsConf last year, and a GitHub project already exists for controlling Parrot drones with Leap Motion controllers (it's a Node library, of course). There was a small but clearly thrilled contingent of young kids at RobotsConf, by the way, and it was pretty amazing to see all the stuff they were doing.

by Giles Bowkett at December 09, 2014 11:29 AM

Decyphering Glyph

Docker Dev to Prod in Just A Few Easy Steps

It seems that docker is all the rage these days. Docker has popularized a powerful paradigm for repeatable, isolated deployments of pretty much any application you can run on Linux. There are numerous highly sophisticated orchestration systems which can leverage Docker to deploy applications at massive scale. At the other end of the spectrum, there are quick ways to get started with automated deployment or orchestrated multi-container development environments.

When you're just getting started, this dazzling array of tools can be as bewildering as it is impressive.

A big part of the promise of docker is that you can build your app in a standard format on any computer, anywhere, and then run it. As the Docker website puts it:

“... run the same app, unchanged, on laptops, data center VMs, and any cloud ...”

So when I started approaching docker, my first thought was: before I mess around with any of this deployment automation stuff, how do I just get an arbitrary docker container that I've built and tested on my laptop shipped into the cloud?

There are a few documented options that I came across, but they all had drawbacks, and didn't really make the ideal tradeoff for just starting out:

  1. I could push my image up to the public registry and then pull it down. While this works for me on open source projects, it doesn't really generalize.
  2. I could run my own registry on a server, and push it there. I can either run it plain-text and risk the unfortunate security implications that implies, deal with the administrative hassle of running my own certificate authority and propagating trust out to my deployment node, or spend money on a real TLS certificate. Since I'm just starting out, I don't want to deal with any of these hassles right away.
  3. I could re-run the build on every host where I intend to run the application. This is easy and repeatable, but unfortunately it means that I'm missing part of that great promise of docker - I'm running potentially subtly different images in development, test, and production.

I think I have figured out a fourth option that is super fast to get started with, as well as being reasonably secure.

What I have done is:

  1. run a local registry
  2. build an image locally - testing it until it works as desired
  3. push the image to that registry
  4. use SSH port forwarding to "pull" that image onto a cloud server, from my laptop

Before running the registry, you should set aside a persistent location for the registry's storage. Since I'm using boot2docker, I stuck this in my home directory, like so:

me@laptop$ mkdir -p ~/Documents/Docker/Registry

To run the registry, you need to do this:

me@laptop$ docker pull registry
Status: Image is up to date for registry:latest
me@laptop$ docker run --name registry --rm=true -p 5000:5000 \
    -e GUNICORN_OPTS=[--preload] \
    -e STORAGE_PATH=/registry \
    -v "$HOME/Documents/Docker/Registry:/registry" \
    registry

To briefly explain each of these arguments - --name is just there so I can quickly identify this as my registry container in docker ps and the like; --rm=true is there so that I don't create detritus from subsequent runs of this container, -p 5000:5000 exposes the registry to the docker host, -e GUNICORN_OPTS=[--preload] is a workaround for a small bug, STORAGE_PATH=/registry tells the registry to look in /registry for its images, and the -v option points /registry at the directory we previously created above.

It's important to understand that this registry container only needs to be running for the duration of the commands below. Spin it up, push and pull your images, and then you can just shut it down.

Next, you want to build your image, tagging it with localhost.localdomain.

me@laptop$ cd MyDockerApp
me@laptop$ docker build -t localhost.localdomain:5000/mydockerapp .

Assuming the image builds without incident, the next step is to send the image to your registry.

me@laptop$ docker push localhost.localdomain:5000/mydockerapp

Once that has completed, it's time to "pull" the image on your cloud machine, which - again, if you're using boot2docker, like me, can be done like so:

me@laptop$ ssh -t -R 127.0.0.1:5000:"$(boot2docker ip 2>/dev/null)":5000 \ \
    'docker pull localhost.localdomain:5000/mydockerapp'

If you're on Linux and simply running Docker on a local host, then you don't need the "boot2docker" command:

me@laptop$ ssh -t -R 127.0.0.1:5000:127.0.0.1:5000 \ \
    'docker pull localhost.localdomain:5000/mydockerapp'

Finally, you can now run this image on your cloud server. You will of course need to decide on appropriate configuration options for your applications such as -p, -v, and -e:

me@laptop$ ssh \
    'docker run -d --restart=always --name=mydockerapp \
        -p ... -v ... -e ... \
        localhost.localdomain:5000/mydockerapp'

To avoid network round trips, you can even run the previous two steps as a single command:

me@laptop$ ssh -t -R 127.0.0.1:5000:"$(boot2docker ip 2>/dev/null)":5000 \ \
    'docker pull localhost.localdomain:5000/mydockerapp && \
     docker run -d --restart=always --name=mydockerapp \
        -p ... -v ... -e ... \
        localhost.localdomain:5000/mydockerapp'

I would not recommend setting up any intense production workloads this way; those orchestration tools I mentioned at the beginning of this article exist for a reason, and if you need to manage a cluster of servers you should probably take the time to learn how to set up and manage one of them.

However, as far as I know, there's also nothing wrong with putting your application into production this way. If you have a simple single-container application, then this is a reasonably robust way to run it: the docker daemon will take care of restarting it if your machine crashes, and running this command again (with a docker rm -f mydockerapp before docker run) will re-deploy it in a clean, reproducible way.

So if you're getting started exploring docker and you're not sure how to get a couple of apps up and running just to give it a spin, hopefully this can set you on your way quickly!

(Many thanks to my employer, Rackspace, for sponsoring the time for me to write this post. Thanks also to Jean-Paul Calderone, Alex Gaynor, and Julian Berman for their thoughtful review. Any errors are surely my own.)

by Glyph at December 09, 2014 02:14 AM

December 04, 2014

Giles Bowkett

Questions Worth Asking

What is programming?

How do you choose an open source project?

What are the consequences of open source?

by Giles Bowkett at December 04, 2014 07:20 PM

December 03, 2014

Greg Linden

More quick links

More of what caught my attention lately:
  • "Make infinite computing resources available internally ... Give teams the resources they need to experiment ... All employees should be limited only by their ability rather than an absence of resources or an inability to argue convincingly for more." ([1] [2])

  • "Accept that failures will always happen and guard ... [against] cascading failures by purposefully causing failures" ([1] [2])

  • "The importance of Netflix’s recommendation engine is actually underestimated" ([1] [2])

  • Courts are getting more skeptical about software patents ([1])

  • Nice way of putting it: "The prevailing business culture in the banking industry weakens and undermines the honesty norm" ([1] [2])

  • "[On] the overcrowded, overstuffed, slow-loading web, you are bound to see a carnival of pop-ups and interstitials — interim ad pages served up before or after your desired content — and scammy come-ons daring you to click. Is it any wonder, really, that this place is dying?" ([1])

  • A very effective social engineering attack "compromised the accounts of C-level executives, legal counsel, regulatory and compliance personnel, scientists, and advisors of more than 100 [major] companies" ([1])

  • An 11-hour Microsoft Azure cloud service outage that impacted just about everyone using it worldwide, including internal users like Xbox Live ([1])

  • Stack traces at arbitrary break points in Google's cloud services running live with near zero overhead ([1] [2])

  • Free SSL certificates (for HTTPS) from a non-profit out of EFF, Mozilla, Cisco, and Akamai ([1])

  • The journal Nature makes its papers free for everyone to read ([1] [2])

  • Combining neural networks like components yields new breakthroughs ([1] [2])

  • Robotics guru Rodney Brooks says, "Relax. Chill ... [The press has a] misunderstanding of how far we really are from having volitional or intentional artificially intelligent beings." ([1])

  • Undersea drones are enabling new feats: "The first time ... the black sea devil anglerfish ... has been filmed alive and in its natural habitat" ([1])

  • Bats jam the sonar of other bats when they're both trying to catch the same insect. It's like a dogfight up there. ([1])

  • Great tutorial on CSS and HTML just launched by Khan Academy and jQuery's John Resig ([1])

  • Fun visualization of the periodic table by how common the elements are in the earth's crust, ocean, human body, and sun ([1])

  • Hilarious parody of the Amazon Echo promotional video ([1])

  • South Park has a surprisingly good (and funny) criticism of freemium games that gets all the issues correct around preying on people with a tendency toward compulsive gambling ([1] [2])

  • Great Dilbert comic on how engineers think of marketing ([1])

  • Good Xkcd comic on over-optimization ([1])

  • Loved this SMBC comic: "He said I wasn't very good at math" ([1]) 

by Greg Linden at December 03, 2014 05:50 PM

December 02, 2014

Alarming Development

New Subtext screencast

We’ve published the final videos from the Future Programming Workshop. We will also be publishing a final report about our experiences and lessons from the workshop.

Included in the videos is my latest screencast about Subtext: Two-way Dataflow. The abstract:

Subtext is an experiment to radically simplify application programming. The goal is to combine the power of frameworks like Rails and iOS with the simplicity of a spreadsheet. The standard MVC architecture of such frameworks makes execution order hard to understand, a problem colloquially called callback hell. I propose a new approach called two-way dataflow, which breaks the program into cyclic output and input phases. Output is handled with traditional one-way dataflow, which is realized here as a form of pure lazy functional programming. Input is governed by a new semantics called one-way action which is a highly restricted form of event-driven imperative programming. These restrictions statically order event execution to avoid callback hell. Two-way dataflow has been designed not only to simplify the semantics of application programming but also to support a presentation that, like a spreadsheet, provides a fully WYSIWYG programming experience.

Comments welcome.

by Jonathan Edwards at December 02, 2014 03:51 PM

Giles Bowkett

Two Recent Animations

I've been studying animation recently. As we're approaching the end of the semester, I've had to turn in my final projects. Here they are.

For this one, I created the video in Cinema 4D and Adobe After Effects, and I made the music with Ableton Live. Caveat: it looks better on my machine. YouTube's compression has not been as kind as I would have hoped.

The assignment for this one was to create an intro credits sequence for an existing film, and I chose Scott Pilgrim vs. The World. The film's based on a series of graphic novels, which I own, so I scanned images from the comics, tweaked them in Photoshop, and added color by animating brush strokes in After Effects.

The soundtrack's a song called "Scott Pilgrim," by the Canadian band Plumtree. The author of the comics named the character after the song.

I figured I had aced the basic skill of coloring in a black-and-white image back when I was five, with crayons, but it was actually an arduous process. If you count brush stroke effects as layers, two comps in this animation had over 300 layers.

Ironically, I picked this approach because I had limited time. I had to turn in both projects a little early. On the last day of class, I'll be on an airplane back from RobotsConf. I have to give a big shout-out to my employer, Panda Strike, not just for sending me to this awesome conference, which I'm very excited about, but also for being the kind of company which believes in flexible scheduling and remote work. Without flexible scheduling and remote work, I would have a much harder time studying animation.

by Giles Bowkett at December 02, 2014 10:56 AM

November 30, 2014

Giles Bowkett

Rest In Peace Ezra Zygmuntowicz

Ezra Zygmuntowicz, creator of Merb and co-founder of Engine Yard, has passed away.

Some notes from Hacker News:

"He was funny, patient, and most of all kind."

"We [left Engine Yard for] our own colo but Ezra helped us at every step of the way, long after it was clear we weren't coming back."

From @antirez: "Ezra was the first to start making Redis popular..."

"Ezra was a innovator in the glass pipe world. A world class artist that reinvented lampworking."

"He used to fly little radio controlled helicopters all over our office at Engine Yard."

I had a conference call with Ezra in 2006 as part of a project and was a total fanboy about it. His work on the Yakima Herald, before he founded Engine Yard, was one of the main things that made me start taking Rails seriously back in those days, back before the hype train really even began. One time we shared a car to the airport after a conference, and of course I saw him at a ton of conferences beyond that as well. He was a very cool guy, and he'll be missed.

by Giles Bowkett at November 30, 2014 11:39 AM

November 29, 2014


On Gaming And Media Narratives

On December 13, 2013, I sent the following email to several of my friends who play videogames:

Hey, if you’re receiving this it’s because you’re on my Steam friends list. I don’t send spam out often but right now I am frustrated with the collective hatred of the internet and this is the only way I can think of fighting back.

Earlier this year, I played a web-based game called Depression Quest. It’s not particularly “fun”, because it’s about depression, but it is very good at building awareness about, and empathy for, a serious mental condition.

The creator happens to be a woman and has been harassed by the internet. The game, while free, is trying to get on Steam and a bunch of internet assholes are down-voting the game because misogyny.

So, if you either like the premise of the game or despise misogyny (or both!), I encourage you to vote for the game on Steam Greenlight using the link below:

That is all. Thanks for reading this, and apologies if this is spam to you.

The above email was the only “mass email” I’ve sent in at least the past two years. I was pretty frustrated at the time.

What’s odd, though, is that I didn’t do a whole lot of research before writing that email: I followed Depression Quest’s author, Zoe Quinn, on Twitter, and saw her complaining about being harassed over the phone, and I saw a post or two from a few game journalism sites about it, which convinced me that a mob of angry misogynists were harassing her.

The truth, as far as I can tell, is extremely murky. There’s a YouTube video from Some Asian Guy (literally his username) who paints an unflattering picture of an extremely manipulative Zoe Quinn taking 2 angry posts from a website for depressed, suicidal male virgins, framing them as harassment, entirely fabricating claims of phone calls and “raids” on her, and then getting her game journalist friends to write about it as a mechanism to garner sympathy and publicity for herself and her game on Steam Greenlight. The whole story is also curiously documented in a series of very large images with colorful text on imgur.

What’s interesting, though, is that this isn’t an isolated incident. Others have taken place over the past year; many in the gaming community seem to constantly be accusing Quinn of using bullying tactics to sabotage competitors’ projects, claiming harassment when none has occurred, and taking advantage of the media’s sympathy towards women in gaming for personal gain.

Rhetorically, it’s difficult to question and investigate Quinn’s claims of harassment because doing so is often interpreted as victim blaming. But if this were mere sexist victim blaming, it would make sense for all women who claim harassment in the video game industry to be constantly doubted, not just Quinn. And yet this doesn’t appear to be the case: for example, 2012′s #1reasonwhy hashtag, which was used to document the rampant sexism women in the game industry face, didn’t receive significant push-back from the gaming community. Indeed, if what is said about Quinn is true, in some ways #1reasonwhy set the stage for Quinn to pull it off, since her claims of harassment fit perfectly with the media narrative the hashtag established.

And that’s what concerns me about all this: the dominant media narrative here is that when gamers meet women, misogyny happens, and that’s it. Every major gaming outlet refused to acknowledge claims against Quinn, and indeed always jumped to her defense, framing the story in the dominant narrative.

This narrative is so pervasive that even my favorite podcast that analyzes journalistic coverage, On The Media, is unable to see through it. Further evidence implicating Quinn in affairs with game journalists was released in August and ignited a movement called GamerGate, but coverage was yet again framed in the dominant narrative: righteous outrage shaming gamers for being misogynistic.

To be clear, GamerGate is about much more than Zoe Quinn; it’s about the gaping chasm that has grown between the gaming community and game journalism over the past few years, which Erik Kain at Forbes does an incredible job at covering. Sadly, though, he’s one of a small handful; On The Media only interviewed Chris Grant, the editor-in-chief of Polygon, who is not part of said handful. In fact, he is a member of an exclusive mailing list that many in GamerGate feel is partly responsible for the widening of the chasm.

Because the causes of the chasm are vast and varied, so too are the people and perspectives that comprise GamerGate. And yes, some of them are people who fit the media narrative perfectly: disgruntled men who dislike women in gaming, don’t like games like Depression Quest and will bully people about it.

However, GamerGate is also comprised of thoughtful, compassionate people. People who are alarmed by the one-sided, uncritical nature of the dominant media narrative and want to see it changed. Parents of autistic children who feel the gaming press unfairly vilifies social awkwardness. Individuals who don’t believe it’s ethical for game journalists to be wined-and-dined by the people who made the games they’re reviewing. People who have been harassed and doxxed by anti-GamerGaters without provocation. People who hate harassment and un-dox victims on both sides.

I have no idea what the truth is. But I’m certain it isn’t as simple as a giant mob of angry misogynists harassing women in gaming, and I wish a few more media outlets than Forbes would start acknowledging that.

by Atul at November 29, 2014 12:44 PM

November 28, 2014

Decyphering Glyph

Public or Private?

If I am creating a new feature in library code, I have two choices with the implementation details: I can make them public - that is, exposed to application code - or I can make them private - that is, for use only within the library.

If I make them public, then the structure of my library is very clear to its clients. Testing is easy, because the public structures may be manipulated and replaced as the tests dictate. Inspection is easy, because all the data is exposed for clients to manipulate. Developers are happy when they can manipulate and test things easily. If I select "public" as the general rule, then developers using my library will be happy, because they'll be able to inspect and test everything quite easily whether I specifically designed in support for that or not.

However, now that they're public, I have to support them in their current form into the foreseeable future. Since I tend to maintain the libraries I work on, and maintenance necessarily means change, a large public API surface means a lot of ongoing changes to exposed functionality, which means a constant stream of deprecation warnings and deprecated feature removals. Without private implementation details, there's no axis on which I can change my software without deprecating older versions. Developers hate keeping up with deprecation warnings or having their applications break when a new version of a library comes out, so if I adopt "public" as the general rule, developers will be unhappy.

If I make them private, then the structure of my library is a lot easier to understand by developers, because the API surface is much smaller, and exposes only the minimum necessary to accomplish their tasks. Because the implementation details are private, when I maintain the library, I can add functionality "for free" and make internal changes without requiring any additional work from developers. Developers like it when you don't waste their time with trivia and make it easier to get to what they want right away, and they love getting stuff "for free", so if I adopt "private" as the general rule, developers will be happy.

However, now that they're private, there's potentially no way to access that functionality for unforeseen use-cases, and testing and inspection may be difficult unless the functionality in question was designed with an above-average level of care. Since most functionality is, by definition, designed with an average level of care, that means that there will inevitably be gaps in these meta-level tools until the functionality has already been in use for a while, which means that developers will need to report bugs and wait for new releases. Developers don't like waiting for the next release cycle to get access to functionality that they need to get work done right now, so if I adopt "private" as the general rule, developers will be unhappy.
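The trade-off can be made concrete with a small sketch. This example is mine, not Glyph's, and the rate limiter and all its names are invented for illustration: the closure keeps the token-bucket state private (free to change later), while the single public method is the supported surface.

```javascript
// Hypothetical library feature: a tiny rate limiter.
// "Private" details live inside the closure; only `allow` is public.
function makeRateLimiter(maxPerSecond) {
  // Private implementation details: clients cannot depend on these,
  // so I can rewrite the refill strategy without deprecating anything.
  let tokens = maxPerSecond;
  let lastRefill = Date.now();

  function refill() {
    const now = Date.now();
    tokens = Math.min(
      maxPerSecond,
      tokens + ((now - lastRefill) / 1000) * maxPerSecond
    );
    lastRefill = now;
  }

  // Public surface: one function, supported into the foreseeable future.
  return {
    allow() {
      refill();
      if (tokens >= 1) {
        tokens -= 1;
        return true;
      }
      return false;
    },
  };
}
```

The cost shows up immediately in testing: a client who wants to assert on the remaining token count, or to fake the clock, has no way in — exactly the "private" downside described above.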


by Glyph at November 28, 2014 11:25 PM

November 26, 2014

Lambda the Ultimate

John C Reynolds Doctoral Dissertation Award nominations for 2014

Presented annually to the author of the outstanding doctoral dissertation in the area of Programming Languages. The award includes a prize of $1,000. The winner can choose to receive the award at ICFP, OOPSLA, POPL, or PLDI.

I guess it is fairly obvious why professors should nominate their students (the deadline is January 4th, 2015). Newly minted PhDs should, for similar reasons, make sure their professors are reminded of these reasons. I can tell you that the competition is going to be tough this year; but hey, you didn't go into programming language theory thinking it was going to be easy, did you?

November 26, 2014 10:05 PM

November 22, 2014

Lambda the Ultimate

Zélus : A Synchronous Language with ODEs

Zélus : A Synchronous Language with ODEs
Timothy Bourke, Marc Pouzet

Zélus is a new programming language for modeling systems that mix discrete logical time and continuous time behaviors. From a user's perspective, its main originality is to extend an existing Lustre-like synchronous language with Ordinary Differential Equations (ODEs). The extension is conservative: any synchronous program expressed as data-flow equations and hierarchical automata can be composed arbitrarily with ODEs in the same source code.

A dedicated type system and causality analysis ensure that all discrete changes are aligned with zero-crossing events so that no side effects or discontinuities occur during integration. Programs are statically scheduled and translated into sequential code that, by construction, runs in bounded time and space. Compilation is effected by source-to-source translation into a small synchronous subset which is processed by a standard synchronous compiler architecture. The resultant code is paired with an off-the-shelf numeric solver.

We show that it is possible to build a modeler for explicit hybrid systems à la Simulink/Stateflow on top of an existing synchronous language, using it both as a semantic basis and as a target for code generation.

Synchronous programming languages (à la Lucid Synchrone) are language designs for reactive systems with discrete time. Zélus extends them gracefully to hybrid discrete/continuous systems, to interact with the physical world, or simulate it -- while preserving their strong semantic qualities.

The paper is short (6 pages) and centered around examples rather than the theory -- I enjoyed it. Not being familiar with the domain, I was unsure what the "zero-crossings" mentioned in the introduction are, but there is a good explanation further down in the paper:

The standard way to detect events in a numeric solver is via zero-crossings where a solver monitors expressions for changes in sign and then, if they are detected, searches for a more precise instant of crossing.
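The mechanism the quote describes can be sketched in a few lines (this is my illustration, not code from the paper): watch an expression for a sign change between solver steps, then bisect to locate the crossing instant more precisely.

```javascript
// Minimal zero-crossing search: f changed sign somewhere in [t0, t1],
// so bisect until the crossing time is pinned down.
function findZeroCrossing(f, t0, t1, iterations = 50) {
  let a = t0, b = t1;
  if (f(a) * f(b) > 0) return null; // no sign change detected in this step
  for (let i = 0; i < iterations; i++) {
    const mid = (a + b) / 2;
    // Keep the half-interval that still contains the sign change.
    if (f(a) * f(mid) <= 0) b = mid; else a = mid;
  }
  return (a + b) / 2;
}
```

A real solver does this against the interpolated solution of the ODE rather than a plain function, but the sign-change-then-refine structure is the same.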

The Zélus website has a 'publications' page with more advanced material, and an 'examples' page with case studies.

November 22, 2014 06:31 PM

November 19, 2014

Giles Bowkett

Text Animators

Another quick animation in After Effects, with original music by me.

by Giles Bowkett at November 19, 2014 12:13 PM

November 15, 2014


Resizing, Moving, Snapping Windows with JS & CSS

Imagine you have a widget styled with CSS: how would you add code to give it the ability to resize itself? The behaviour is so ingrained in our present window managers and GUIs that it's quite easily taken for granted. There may well be plugins or frameworks that do this, but the challenge I gave myself was to do it in vanilla JavaScript, and to handle the resizing without adding more divs to the DOM. (Adding extra divs to act as draggable bars seems to be the common approach.)

Past work
Which reminds me: I wanted similar behaviour for ThreeInspector, and while hacking on the idea I went with the CSS3 resize property for the widget. The unfortunate thing was that min-width and min-height were broken with it for a really long time in WebKit (the bug was filed back in 2011, and I'm not entirely sure what its status is now). Having been bitten by that bug, I've become hesitant every time I consider the CSS3 resize approach.


JS Resizing
So, for my own challenge, I started with a single purple div and added a bit of JS.


Done within 100 lines of code, this turns out not to be difficult. The trick is adding a mousemove handler to document (not document.body, which fails in Firefox) and calculating when the mouse is within a margin of the edge of the div. Another reason to always add handlers to document instead of a target div is that you need mouse events even when the cursor moves out of the defined boundary. This is useful for dragging and resizing behaviours; in resizing especially, you don't want to waste time hunting bugs because the events and the div's resizing are out of sync.

Also, for the first time, I made extensive use of the event's clientX and clientY together with div.getBoundingClientRect(). That gets me almost everything I need for handling positions, sizes and events, although getBoundingClientRect may not be as performant as reading offsets.
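The margin check described above can be sketched roughly like this (the function and its names are mine, not the demo's): given the div's bounding rect and the cursor position from a mousemove on document, decide whether the cursor is near an edge (resize) or merely inside the box (move).

```javascript
// Which edges of `rect` is the cursor within `margin` pixels of?
// Returns null when the cursor is nowhere near the box.
function hitTest(rect, clientX, clientY, margin = 8) {
  const near =
    clientX >= rect.left - margin && clientX <= rect.right + margin &&
    clientY >= rect.top - margin && clientY <= rect.bottom + margin;
  if (!near) return null;
  const hit = {
    left:   Math.abs(clientX - rect.left)   <= margin,
    right:  Math.abs(clientX - rect.right)  <= margin,
    top:    Math.abs(clientY - rect.top)    <= margin,
    bottom: Math.abs(clientY - rect.bottom) <= margin,
  };
  // Any edge hit means resize; otherwise the cursor is in the move zone.
  hit.resize = hit.left || hit.right || hit.top || hit.bottom;
  return hit;
}
```

In the browser you would call it as `hitTest(div.getBoundingClientRect(), e.clientX, e.clientY)` from a mousemove handler on document, and pick a resize cursor (e.g. `nwse-resize`) from the combination of edges hit.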

What’s nice about using JS vs a pure CSS3-resize is that you get to decide which sides of the div you wish to allow resizing. I went for the 4 sides and 4 corners, and the fun just started, so next I started implementing moving.

Handling basic moving / dragging just needs a few more lines of code. Pseudocode: on mousedown, check that the cursor isn't on an edge (those are reserved for resizing), and store where the cursor and the bounds of the box are. On mousemove, update the box's position.
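That pseudocode translates to roughly the following (a simplified sketch, not the actual demo code):

```javascript
// Drag state captured on mousedown; null when nothing is being dragged.
let dragging = null;

function onMouseDown(box, e) {
  // Assumes the edge check has already ruled out a resize.
  // Remember the cursor's offset inside the box so it doesn't jump.
  dragging = {
    box,
    dx: e.clientX - box.offsetLeft,
    dy: e.clientY - box.offsetTop,
  };
}

function onMouseMove(e) {
  if (!dragging) return;
  // Keep the grabbed point under the cursor as it moves.
  dragging.box.style.left = (e.clientX - dragging.dx) + 'px';
  dragging.box.style.top  = (e.clientY - dragging.dy) + 'px';
}

function onMouseUp() {
  dragging = null;
}
```

Both mousemove and mouseup go on document, per the earlier point, so the drag keeps working even when the cursor outruns the box.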

Still simple, so let’s try the next challenge of snapping the box to the edges.

Despite the bad things Mac might say to PC, one thing Windows has done pretty well since Windows 7 is its snap feature. On my Mac I use Spectacle, which is a replacement for Windows' window-docking management. I took inspiration from this Windows feature and implemented it with JS and CSS.


One sweet detail in Snap is the way a translucent window shows where the window would dock or snap into place before you release the mouse. So in my implementation I used an additional, slightly transparent div one z-index lower than the div I'm dragging. The CSS transition property was used for a more organic experience.

There are slight deviations from the actual Aero experience that Windows users may notice. In Windows, dragging a window to the top snaps it full screen, while dragging it to the bottom of the screen has no effect. In my implementation, the window can be docked to the upper half or the lower half, or made fullscreen if it gets dragged further beyond the edge of the screen. In Windows, a vertical half is only possible with a keyboard shortcut.

Another difference is that Windows snaps when the cursor touches the edge of the screen; my implementation snaps when the div's edge touches the browser window's edge. I thought this might be better, because users typically use smaller movements for non-operating-system gestures. One last difference is that Windows' implementation sends tiny ripples out from the point where the cursor touches the screen. Ripples are nice (I noticed they are used frequently in Material Design), but I'll leave them as an exercise for another time.
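The snap rule described in the last few paragraphs can be sketched as a pure decision function (my illustration; the names and thresholds are invented, and the real demo works on live DOM rects):

```javascript
// Decide where a dragged box should snap, given its rect and the
// viewport size. Edges within `threshold` px of a window edge snap to
// that half; dragging well past an edge goes fullscreen.
function snapTarget(rect, viewportW, viewportH, threshold = 10) {
  const past =
    rect.left < -threshold || rect.right > viewportW + threshold ||
    rect.top < -threshold || rect.bottom > viewportH + threshold;
  if (past) return 'fullscreen';
  if (rect.left <= threshold)               return 'left-half';
  if (viewportW - rect.right <= threshold)  return 'right-half';
  if (rect.top <= threshold)                return 'top-half';
  if (viewportH - rect.bottom <= threshold) return 'bottom-half';
  return null; // no snap: keep free dragging
}
```

The translucent preview div then animates to the target's bounds while the drag is still in progress, and the dragged div adopts those bounds on mouseup.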

As afterthoughts, I added touch support and limited mousemove updates to requestAnimationFrame. Here's the demo; feel free to try it and check out the code on CodePen.
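Limiting mousemove work to requestAnimationFrame amounts to coalescing events and doing the layout work at most once per frame. A minimal sketch of that wrapper (mine, not the demo's):

```javascript
// Wrap an event handler so it runs at most once per animation frame,
// always with the most recent event.
function rafThrottle(fn) {
  let pending = null;
  return function (event) {
    const firstThisFrame = pending === null;
    pending = event; // keep only the latest event
    if (firstThisFrame) {
      requestAnimationFrame(() => {
        const e = pending;
        pending = null;
        fn(e);
      });
    }
  };
}

// usage: document.addEventListener('mousemove', rafThrottle(onMouseMove));
```

This matters for resize and drag, where doing style writes on every raw mousemove can easily outpace the browser's paint rate.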

See the Pen Resize, Drag, Snap by zz85 (@zz85) on CodePen.

by Zz85 at November 15, 2014 01:47 AM

November 05, 2014

Giles Bowkett

GlideRoom: Hangouts Without The Hangups

Many years ago, a friend of mine took a picture of me with a dildo stuck to my face.

The worst thing about this picture is that I didn't have it on my hard drive. I found it via Google Images. But the best part is it took me at least 3 minutes to find it, so, by modern standards, it's pretty obscure. Or at least it was, until I put it here on my blog again.

(Actually, the worst thing about this image is that I just found out it's now apparently being used to advertise porn sites, without my knowledge, consent, or participation.)

Anyway, back in the day, this picture went on Myspace, because of course it did. And eventually the friend who took this picture became a kindergarten teacher, while I became a ridiculously overrated blogger. That's not just an opinion, it's a matter of fact, because Google over-emphasizes the importance of programmer content, relative to literally any other kind of content, when it computes its search rankings. And so, through the "magic" of Google, the first search result for my former photographer's name - and she was by this point a kindergarten teacher - was this picture on my Myspace page.

She emailed me like, "Hi! It's been a while. Can you take that picture down?"

And of course, the answer was no, because I hadn't used Myspace in years, and I didn't have any idea what my password was, and I didn't have the same email address any more, and I didn't even have the computer I had back when Myspace existed. Except it turned out that Myspace was still existing for some reason, and was maybe causing some headaches for my friend as well. I have to tell you, if you're worried that you might have accidentally fucked up your friend's career in a serious way, all because you thought it would be funny to strap a dildo to your face, it doesn't feel awesome.

(And by the way, I'm pretty sure she's a great teacher. You shouldn't have to worry that some silly thing you did as a young adult, or in your late teens, would still haunt you five to fifteen years later, but that's the Internet we built by accident.)

So I went hunting on Myspace for how to take a picture down for an account you forgot you had, and Myspace was like, "Dude, no problem! Just tell us where you lived when you had that account, and what your email address was, and what made-up bullshit answers you gave us for our security questions, since nobody in their right minds would ever provide accurate answers to those questions if they understood anything at all about the Internet!"

So that didn't go so well, either. I didn't know the answers to any of those questions. I didn't have the email address any more, and I had no idea what my old physical address was. I would have a hard time figuring out what my current address is. Probably, if I needed to know that, I might be able to find it in Gmail. That's certainly where I would turn first, because Google has eaten my ability to remember things and left me a semi-brainless husk, as most of you know, because it's done the same thing to you, and your friends, and your family.

Speak of the devil - around this time, Google started pressuring everybody in the fucking universe to sign up for Google Plus, Larry Page's desperate bid to turn Google into Facebook, because who on earth would ever be content to be one of the richest people in the history of creation, if Valleywag stopped paying attention to you for five whole minutes?

My reaction when Google's constantly like, "Hey Giles, you should join Google Plus!"

Since then, my photographer/teacher friend fortunately figured out a different way to get the image off Myspace, and I made it a rule to avoid Google Plus. Having had such a negative experience with Myspace, I took the position that any social network you join creates presence debt, like the technical debt incurred by legacy code - the nasty, counterproductive residue of a previous identity. So I was like, fuck Google Plus. I lasted for years without joining that horrible thing, but I finally capitulated this summer. I joined a company called Panda Strike, and a lot of us work remote (myself included), so we periodically gather via Google Hangouts to chat and convene as a group.

But just because I had consented to use Hangouts, that didn't mean I was going down without a fight.

When I "joined" Google Plus, I first opened up an Incognito window in Chrome. Then I made up a fake person with fake biographical attributes and joined as that person. Thereafter, whenever I saw a Google Hangouts link in IRC or email, I would first open up an Incognito window, then log into Google Plus "in disguise," and then copy/paste the Hangouts URL into the Incognito window's location textfield, and then - and only then - enter the actual Hangout.

This is, of course, too much fucking work. But at least it's work I've created for myself. Plenty of people who are willing to go along with Google's bullying approach to selling Google Plus still get nothing but trouble when they try to use Hangouts.

Protip: don't even tolerate this bullshit.

Imagine how amazing it would be if all you needed to join a live, ongoing video chat was a URL. No username, no password, no second-rate social network you've been strong-armed into joining (or pretending to join). Just a link. You click it, you're in the chat room, you're done.

Panda Strike has built this site. It's called GlideRoom, and it's Google Hangouts without the hangups, or the hassle, or indeed the shiny, happy dystopia.

Clicking "Get A Room" takes you to a chat room, whose URL is a unique hash. All you do to invite people to your chat room is send them the URL. You don't need to authorize them, authenticate them, invite them to a social network which has no other appealing features (and plenty of unappealing ones), or jump through any other ridiculous hoops.

We built this, of course, to scratch our own itch. We built this because URLs are an incredibly valuable form of user interface. And yes, we built it because Google Plus is so utterly bloody awful that we truly expect its absence to be a big plus for our product.

So check out Glideroom, and tweet at me or the team to let us know how you like it.

by Giles Bowkett at November 05, 2014 03:39 PM

November 03, 2014

Greg Linden

Quick links

What has caught my attention recently:
  • Netflix says the value of its recommendations algorithms is $500M/year ([1])

  • Details on the internals of LinkedIn's recommender system ([1])

  • Fantastic list of some hard and interesting big data problems at Facebook ([1] [2])

  • Google Glass may target "'superhero vision', like seeing in the dark, or magnifying subtle motion or changes" ([1] [2])

  • A claim that Amazon's cloud revenue is $4.7B this year, supposedly 30x bigger than Microsoft's ($156M) and 70x Google's ($66M) ([1])

  • "We have a 10 petabyte data warehouse on S3" ([1])

  • Google's Eric Schmidt says, "Our biggest search competitor is Amazon" ([1])

  • Apple was and still is almost entirely an iPhone company ([1])

  • Tablet sales are projected to be flat now, and the growth boom for tablets appears to be done ([1])

  • But, it's interesting that specialized, expensive, and often poorly done custom hardware is getting replaced with a cheap touchscreen tablet ([1])

  • So far, it doesn't look like Windows 10 is going to fix what was wrong with Windows 8 ([1])

  • What? "Microsoft loves Linux" ([1] [2])

  • Delivery startups are back: "Silicon Valley wants to save you from ever having to leave your couch. Will it work this time around?" ([1])

  • Despite the difficulty older adults have with tiny mobile keyboards, older adults and seniors don't use voice search much ([1])

  • Speculation that hardware to enable gesture control on mobile phones will be widespread on new phones next year ([1])

  • A claim that "solar will soon reach price parity with conventional electricity in well over half the nation: 36 states" ([1])

  • "HP’s Multi Jet Fusion printer can crank out objects 10 times faster than any machine that’s on the market today ... 3D print heads that can operate 10,000 nozzles at once, while tracking designs to a five-micron precision." ([1] [2])

  • Is biology about to be transformed by the use of many drones to gather lots of data? ([1] [2])

  • More evidence that some of the best innovations come from combining ideas from two very separate fields ([1])

  • "Every success in AI redefines it. But we haven't just been redefining what we mean by AI; we've been redefining what it means to be human [and intelligent]." ([1])

  • "China is merely regaining a title that it has held for much of recorded history" ([1])

  • Funny Dilbert comic on multitasking and checking e-mail too often ([1])

  • The Onion: "This already vanishing glimmer of pleasure is exactly what we've come to expect from Apple" ([1])

  • Great SMBC comic: "The humans aren't doing what the math says. The humans must be broken." ([1])

by Greg Linden at November 03, 2014 07:52 PM

October 25, 2014

Greg Linden

At what point is an over-the-air TV antenna too long to be legal?

You can get over-the-air HDTV signals using an antenna. This antenna gets a better, stronger signal with less interference if it is direct line-of-sight and as near as possible to the broadcast towers. So, you might want an antenna that is up high or even some distance away to get the best signal.

But if you try to do this, you immediately run into a question: At what point does that antenna become too long to be legal or the signal from the antenna is transmitted in a way where it is no longer legal?

Let's say I put an antenna behind my TV hooked up with a wire. That's obviously legal and what many people currently do.

Let's say I put an antenna outside on top of a tree or my garage and run a wire inside. Still seems obviously legal.

Let's say I put an antenna on top of my roof. Still clearly fine.

Let's say I put it on my neighbor's roof and run a wire to my TV. Still ok?

Let's say I put the antenna on my neighbor's roof, but have the antenna connect to my WiFi network and transmit the signal using my local area network instead of using a direct wired cable connection. Still ok?

Let's say I put the antenna on my neighbor's roof, but have the antenna connect to my neighbor's WiFi network and transmit the signal over their WiFi, over the internet, then to my WiFi, instead of using a direct wired cable connection. Still ok?

Let's say I put my antenna on my neighbor's roof, but my neighbor won't do this for free. I have to pay a small amount of rent to my neighbor for the space on his roof used by my antenna. I also have the antenna connect to my neighbor's WiFi network and transmit its signal over their WiFi, over the internet, then to my WiFi, instead of using a direct wired cable connection. Still ok?

Let's say, like before, I put my antenna on my neighbor's roof, pay the neighbor rent for the space on his roof, use the internet to transmit the antenna's signal. But, this time, I buy the antenna from my neighbor at the beginning (and, like before, I own it now). Is that okay?

Let's say I put my antenna on my neighbor's roof, pay the neighbor rent for the space on his roof, use the internet to transmit the antenna's signal, but now I rent or lease the antenna from my neighbor. Still ok? If this is not ok, which part is not ok? Is it suddenly ok if I replace the internet connection with a direct microwave relay or hardwired connection?

Let's say I do all of the last one, but use a neighbor's roof three houses away. Still ok?

Let's say I do all of the last one, but use a roof on a building five blocks away. Still ok?

Let's say I rent an antenna on top of a skyscraper in downtown Seattle and have the signal sent to me over the internet. Not ok?

The Supreme Court recently ruled Aereo is illegal. Aereo put small antennas in a building and rented them to people. The only thing they did beyond the last thing above is time-shifting: they would not necessarily send the signal from the antenna immediately, but could instead store it and transmit it only when demanded.

You might think it's the time shifting that's the problem, but that didn't seem to be what the Supreme Court said. Rather, they said the intent of the 1976 amendments to US copyright law prohibits community antennas (a single antenna that sends its signal to multiple homes), labelling those a "public performance". They said Aereo's system was similar in function to a community antenna, despite actually having multiple antennas, and violated the intent of the 1976 law.

So, the question is, where is the line? Where does my antenna become too distant, transmit using the wrong methods, or involve too many payments to third parties in the operation of the antenna that it becomes illegal? Can it not be longer than X meters? Not transmit its signal in particular ways? Not require rent for the equipment or space on which the antenna sits? Not store the signal at the antenna and transmit it only on demand? What is the line?

I think this question is interesting for two reasons. First, as an individual, I would love to have a personal-use over-the-air HDTV antenna that gets a much better reception than the obstructed and inefficient placement behind my TV, but I don't know at what point it becomes illegal for me to place an antenna far away from the TV. Second, I suspect many others would like a better signal from their HDTV antenna too, and I'd love to see a startup (or any group) that helped people set up these antennas, but it is very unclear what it might be legal for a startup to do.


by Greg Linden at October 25, 2014 08:40 AM

October 24, 2014

Greg Linden

Why can't I buy a solar panel somewhere else in the US and get a credit for the electricity from it?

Seattle City Light has a clever project where, instead of installing solar panels on your house where they might be obscured by trees or buildings, you can buy into a solar panel installation on top of a building in a more efficient location and get a credit for the electricity generated on your electric bill.

Why stop there? Why can't I buy a solar panel in a very different location and get the electricity from it?

Phoenix, Arizona has about twice the solar energy efficiency of Seattle. Why can't I buy a solar panel and enjoy the electricity credit from that solar panel when it is installed in a nice sunny spot in the Southwest?

This doesn't require shipping the actual electricity to your home. Instead, you fund an installation of solar panels on top of a building in an area of the US with high solar energy efficiency, then get a credit for that electricity on your monthly electricity bill.

I suppose, at some boring financing level, this starts to resemble a corporate bond, with an initial payment yielding a stream of payments over time, but people wouldn't see it that way. The attraction would be installing solar panels and getting a credit on your energy bill without installing solar panels on your own home. Perhaps the firm arranging the installations and working out the deals with local utilities could be treating the entire thing as the equivalent of marketing bonds to people who like solar energy, but the attraction to people is that visceral appeal of a near $0 electricity bill they see every month from the solar panels they feel like they own and installed.

Even with the overhead pulled out by the company selling this and arranging deals with local utilities so it all appears on your local electricity bill, the credit on your electricity bill should still be much higher than you could possibly get installing panels on your own home, with all its obstructions and cloudy weather. Solar generation in an ideal location in the US can easily produce twice as much power as is available locally on your rooftop.

So, why hasn't someone done this? Why can't I buy solar panels and have them installed not on my own home, but in some much better spot?

by Greg Linden at October 24, 2014 07:48 AM

October 16, 2014

Blue Sky on Mars


Traffic jams.

They're hilarious, really. Humans tucked neatly single-file in their giant aluminum boxes, waiting patiently for three lights to alternate colors long enough to transcend into the heaven that is the next block closer to their destination.

Ever hop in a cab or a bus and get stuck in a traffic jam? How many times have you decided fuck it, I'm just going to get out and walk — even if the walk is going to take you longer than the wait in traffic? And yet you do it anyway... because you're going places, dammit. You’re important. Your hair looks fantastic today. You’re not the type to tepidly stand still in one place.

Inertia is a powerful concept. When you're moving, you're moving. When you're stuck, well, you're stuck. It requires more energy to get the ball rolling.

Shipping is contagious

Startups have inertia. If all your coworkers are building great things, you feel a compulsion to push forward so you can be proud about shipping your piece of the puzzle, too. This is why when things are good at a company, they tend to be great. Sure, you’re facing some problems, but it feels like you’re all in this river together and you’re flying downstream at a pace that can’t ever stop.

When things are bad at a company, it feels horrific. You’re swimming against the current instead of with it. Even trivial stumbling blocks can loom as insurmountable obstacles when you don’t have that flow going.

It’s because of inertia: it’s hard to get that ball rolling again if you’re stopped.

Inertia is gravitational

A huge part of this is pace. There’s a lot to be said about moving fast and breaking nothing, specifically around testing hypotheses and evaluating the market, but the most important part of the philosophy is that you’re continually making visible progress.

As usual, if it's worth saying, @rands has probably already said it:

I think of boredom as a clock. Every second that someone on my team is bored, a second passes on this clock. After some aggregated amount of seconds that varies for every person, they look at the time, throw up their arms, and quit.

— Michael Lopp, Bored People Quit

Good people gravitate towards this progress. It keeps the gig interesting, since everything’s constantly evolving and you can see the effects of that. There’s certainly a company in which your people find that inertia... whether it ends up being your company or not is the real question.

Swinging for the fences

As much as constant incremental improvement is important, so is taking the big risks. To borrow a baseball analogy: walks will best improve your team’s chances of winning, but homers will keep the fans coming back for more. You need a balance of both.

People just don’t get excited about the small stuff. I mean, they’re good, but they don’t rally your base. It doesn’t feel like you’re adding inertia… it just feels like you’re maintaining.

Good people don’t stay for maintenance.

Taking on new risk keeps things fresh. It lets your people experiment with new approaches, and with different and complex problems. You can even fail, and that's great: you can learn from that failure and try again. Taking on big risk is just a way to prime the pump to get the current flowing again.

Bored people quit, unremarkable companies maintain, and good companies keep their inertia.

October 16, 2014 12:00 AM

October 14, 2014

Giles Bowkett

How Much Of "Software Engineering" Is Engineering?

When you build some new piece of technology, you're (arguably) doing engineering. But once you release it into the big wide world, its path of adoption is organic, and sometimes full of surprises.

Quoting Kevin Kelly's simultaneously awesome and awful book What Technology Wants, which I reviewed a couple days ago:

Thomas Edison believed his phonograph would be used primarily to record the last-minute bequests of the dying. The radio was funded by early backers who believed it would be the ideal device for delivering sermons to rural farmers. Viagra was clinically tested as a drug for heart disease. The internet was invented as a disaster-proof communications backup...technologies don't know what they want to be when they grow up.

When a new technology migrates from its intended use case, and thrives instead on an unintended use case, you have something like the runaway successes of invasive species.

In programming, whether you say "best tool for the job" or advocate your favorite One True Language™, you have an astounding number of different languages and frameworks available to build any given application, and their distribution is not uniform. Some solutions spread like wildfire, while others occupy smaller niches within smaller ecosystems.

In this way, evaluating the merits of different tools is a bit like being an exobiologist on a strange planet made of code. Why did the Ruby strain of Smalltalk proliferate, while the IBM strain died out? Oh, because the Ruby strain could thrive in the Unix ecosystem, while the IBM strain was isolated and confined within a much smaller habitat.

However, sometimes understanding technology is much more a combination of archaeology and linguistics.

Go into your shell and type man 7 re_format.

Regular expressions (``REs''), as defined in IEEE Std 1003.2 (``POSIX.2''), come in two forms: modern REs (roughly those of egrep(1); 1003.2 calls these ``extended'' REs) and obsolete REs (roughly those of ed(1); 1003.2 ``basic'' REs). Obsolete REs mostly exist for backward compatibility in some old programs; they will be discussed at the end.

This man page, found on every OS X machine, every modern Linux server, and probably every iOS or Android device, describes the "modern" regular expressions format, standardized in 1988 and first introduced in 1979. "Modern" regular expressions are not modern at all. Similarly, "obsolete" regular expressions are not obsolete, either; staggering numbers of people use them every day in the context of the find and grep commands, for instance.

To truly use regular expressions well, you should understand this; understand how these regular expressions formats evolved into sed and awk; understand how Perl was developed to replace sed and awk but instead became a very popular web programming language in the late 1990s; and further understand that because nearly every programming language creator acquired Perl experience during that time, nearly every genuinely modern regular expressions format today is based on the format from Perl 5.
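That Perl lineage is visible in almost any current language. Ruby's regex syntax, for instance, is essentially Perl 5's, named captures and all; a small illustration (the sample string is just an example):

```ruby
# Named captures, \d shorthands, and bounded repetition like {4} all come to
# Ruby (and Python, JavaScript, Java...) by way of Perl 5.
m = "IEEE Std 1003.2 (1988)".match(/(?<std>\d+\.\d+) \((?<year>\d{4})\)/)
standard = m[:std]   # the captured standard number
year     = m[:year]  # the captured four-digit year
```

None of this syntax exists in the "obsolete" basic REs of ed and grep, which is exactly the archaeology at work.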

Human languages change over time, adapting to new usages and stylings with comparative grace. Computer languages can only change through formal processes, making their specifics oddly immortal (and that includes their specific mistakes). But the evolution of regular expressions formats looks a great deal like the evolution which starts with Latin and ends with languages like Italian, Romanian, and Spanish - if you have the patience to dig up the evidence.

So far, then, software engineering includes the following surprising skills:
  • Exobiology
  • Archaeology
  • Linguistics
There's more. There's so much more. For example, you need to extract so much information from the social graph - who uses what technologies, and what tribes a language's "community" breaks down into - that it would be easy to add anthropology to the list. You can find great insights on this in presentations from Francis Hwang and Sarah Mei.

by Giles Bowkett at October 14, 2014 03:00 PM

October 12, 2014

Giles Bowkett

Kevin Kelly "What Technology Wants"

Kevin Kelly's What Technology Wants advocates the idea that technology is an adjunct to evolution, and an extension of it, so much so that you can consider it a kingdom of life, in the sense that biologists use the term. Mr. Kelly draws fascinating parallels between convergent evolution and multiple discovery, and brings a ton of very interesting background material to support his argument. However, I don't believe he understands all the background material, and I almost feel as if he's persuading me despite his argument, rather than persuading me by making his argument.

So I recommend this book, but with a hefty stack of caveats. Mr. Kelly veers back and forth between revolutionary truths and "not even wrong" status so rapidly and constantly that you might as well consider him to be a kind of oscillator, producing some sort of waveform defined by his trajectory between these two extremes. The tone of this oscillator is messianic, prophetic, frequently delusional, but also frequently right. The insights are brilliant but the logic is often terrible. It's a combination which can make your head spin.

The author seems to either consider substantiating his arguments beneath him, or perhaps is simply not familiar with the idea of substantiating an argument in the first place. There are plenty of places where the entire argument hinges on things like "somebody says XYZ, and it might be true." No investigation of what it might mean instead if the person in question were mistaken. This is a book which will show you a graph with a line which wobbles so much it looks like a sine wave, and literally refer to that wobbling line as an "unwavering" trend.

He also refers to "the optimism of our age," in a book written in 2010, two years after the start of the worst economic crisis since the Great Depression. The big weakness in my oscillator metaphor, earlier, is that it is an enormous understatement to call the author tone-deaf.

Then again, perhaps he means the last fifty years, or the last hundred, or the last five hundred. He doesn't really clarify which age he's referring to, or in what sense it's optimistic. Or maybe when he says "our age," the implied "us" is not "humanity" or "Americans," but "Californians who work in technology." Mr. Kelly's very much part of the California tech world. He founded Wired, and I actually pitched him on writing a brief bit of commentary in 1995, which Wired published, and that was easily the coolest thing that happened to me in 1995.

Maybe because of that, I'm enjoying this book despite its flaws. It makes a terrific backdrop to Charles Stross's Accelerando. It's full of amazing stuff which is arguably true, very important if true, and certainly worth thinking about, either way. I loved Out Of Control, a book Mr. Kelly wrote twenty years ago about a similar topic, although of course I'm now wondering whether I was less discerning in those days, or if Mr. Kelly's writing went downhill. Take it with a grain of salt, but What Technology Wants is still worth reading.

Returning again to the oscillator metaphor, if a person's writing about big ideas, but they oscillate between revolutionary truths and "not even wrong" status whenever they get down to the nitty-gritty details, then the big ideas they describe probably overlap the truth about half the time. The question is which half of this book ultimately turns out to be correct, and it's a very interesting question.

by Giles Bowkett at October 12, 2014 08:04 PM

Shell Scripting: Also Essential For Animators

I'm taking classes in the motion graphics and animation software Adobe After Effects. It needs a cache, and I've put its cache on an external hard drive, to avoid wasting laptop drive space. But I sometimes forget to plug that hard drive in, with the very annoying result that After Effects "helpfully" informs me that it's using a new cache location. I then immediately quit After Effects, plug in the hard drive, re-launch the software, and re-supply the correct cache location in the application's preferences.

Obviously, the solution was to remove After Effects from the OS X Dock, which is a crime against user experience anyway, and replace the dock's launcher icon with a shell script. The shell script only launches After Effects if the relevant hard drive is present and accounted for.

("Vanniman Time Machine" is the name of the hard drive, because reasons.)
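The script itself isn't reproduced here, but the logic amounts to one mount-point check. A sketch of the same idea in Ruby (the original was a shell script; the application name and alert text are assumptions):

```ruby
# Hypothetical launcher: only start After Effects if the cache drive is mounted.
CACHE_DRIVE = "/Volumes/Vanniman Time Machine"

# Decide what to run based on whether the cache drive is present.
def launch_command(drive_mounted)
  if drive_mounted
    ["open", "-a", "Adobe After Effects"]
  else
    ["osascript", "-e",
     'display alert "Plug in the cache drive before launching After Effects."']
  end
end

cmd = launch_command(Dir.exist?(CACHE_DRIVE))
# system(*cmd)  # commented out here so the sketch has no side effects
```

Wrap something like this in a launcher the Dock will accept and you never get the wrong-cache-location dialog again.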

by Giles Bowkett at October 12, 2014 02:10 PM


October 07, 2014

Decyphering Glyph

Thank You Lennart

I (along with about 6 million other people, according to the little statistics widget alongside it) just saw this rather heartbreaking post from Lennart Poettering.

I have not had much occasion to interact with Lennart personally, and (like many people) I have experienced bugs in the software he has written. I have been frustrated by those bugs. I may not have always been charitable in my descriptions of his software. I have, however, always assumed good faith on his part and been happy that he continues making valuable contributions to the free software ecosystem. I haven’t felt the need to make that clear in the past because I thought it was understood.

Apparently, not only is it not understood, there is active hostility directed against his participation. There are constant, aggressive, bad-faith attempts to get him to stop working on the important problems he is working on.

So, Lennart,

Thank you for your work on GNOME, for working on the problem of getting free software into the hands of normal people.

Thank you for furthering the cause of free software by creating PulseAudio, so that we can at least attempt to allow users to play sound on their Linux computers from multiple applications simultaneously without writing tedious configuration files.

Thank you for your work on SystemD, attempting to bring modern system-startup and service-management practices to the widest possible free software audience.

Thank you, Lennart, for putting up with all these vile personal attacks while you have done all of these things. I know you could have walked away; I’m sure that at times, you wanted to. Thank you for staying anyway and continuing to do the good work that you’ve done.

While the abuse is what prompted me to write this, I should emphasize that my appreciation is real. As a long-time user of Linux both on the desktop and in the cloud, I know that my life has been made materially better by Lennart’s work.

This shouldn’t be read as an endorsement of any specific technical position that Mr. Poettering holds. The point is that it doesn’t have to be: this isn’t about whether he’s right or not, it’s about whether we can have the discussion about whether he’s right in a calm, civil, technical manner. In fact I don’t agree with all of his technical choices, but I’m not going to opine about that here, because he’s putting in the work and I’m not, and he’s fighting many battles for software freedom (most of them against our so-called “allies”) that I haven’t been involved in.

The fact that he felt the need to write an article on the hideous state of the free software community is as sad as it is predictable. As a guest on a podcast recently, I praised the Linux community’s technical achievements while critiquing its poisonous culture. Now I wonder if “critiquing” is strong enough; I wonder if I should have given any praise at all. We should all condemn this kind of bilious ad-hominem persecution.

Today I am saying “thank you” to Lennart because the toxicity in our communities is not a vague, impersonal force that we can discuss academically. It is directed at specific individuals, in an attempt to curtail their participation. We need to show those targeted people, regardless of how high-profile they are, or whether they’re paid for their work or not, that they are not alone, that they have our gratitude. It is bullying, pure and simple, and we should not allow it to stand.

Software is made out of feelings. If we intend to have any more free software, we need to respect and honor those feelings, and, frankly speaking, stop trying to make the people who give us that software feel like shit.

by Glyph at October 07, 2014 08:32 AM

October 05, 2014

Giles Bowkett

Backstory For An Anime Series

Many think there is only one Kanye. They are mistaken. There is a Kanye East. There are Kanyes North and South. On the day which the prophets have spoken of, the world will be ready, and the lost Kanyes of legend will return. A great evil will threaten the realm, and the four Kanyes will merge as one to form a Kanye Voltron, and fight fiercely and with great valor for the future of all humanity.

by Giles Bowkett at October 05, 2014 12:23 PM

October 03, 2014

Giles Bowkett

One Way To Understand Programming Languages

I'm learning to play the drums, and I got a good DVD from Amazon. It starts off with a rant about drum technique.

The instructor mentions the old rule of thumb that you're best to avoid conversations about religion and politics, and says that he thinks drum technique should be added to the list. He says that during the DVD, he'll tell you that certain moves are the wrong moves to make, but that any time he says that, it really means that the given move is the wrong move to make in the context of the technique he's teaching.

He then goes on to give credit to drummers who play using techniques that are different from his, and to say that it's your job as a drummer to take every technique with a grain of salt and disavow the whole idea of regarding any particular move as wrong. Yet it's also your job as a student of any particular technique to interpret that technique strictly and exactly, if you want to learn it well enough to use it. So when you're a drummer, the word "wrong" should be meaningless, yet when you're a student, it should be very important.

Programming has this tension also. If you're a good programmer, you have to be capable of understanding both One True Way fanaticism and "right tool for the job" indifference. And you have to be able to use any particular right tool for a job in that particular tool's One True Way (or choose wisely between the options that it offers you).

by Giles Bowkett at October 03, 2014 10:12 AM

Blue Sky on Mars

Move Fast and Break Nothing


move fast & break nothing

I gave the closing keynote at the Future of Web Apps in London this October, 2014. This is that talk.

The slides and the full video of the talk are directly below. If you're interested in a text accompaniment, read on for the high-level overview.

moving fast and breaking things

Let's start with the classic Facebook quote, Move fast and break things. Facebook's used that for years: it's a philosophy of trying out new ideas quickly so you can see if they survive in the marketplace. If they do, refine them; if they don't, throw it away without blowing a lot of time on development.

Breaking existing functionality is acceptable... it's a sign that you're pushing the boundaries. Product comes first.

Facebook was known for this motto, but in early 2014 they changed it to Move fast with stability, among other variations on the theme. They caught a lot of flak from the tech industry for this: something something "they're running away from their true hacker roots" something something. I think that's horseshit. Companies need to change and evolve. The challenges Facebook's facing today aren't the same challenges they were facing ten years ago. A company that's not changing is probably as innovative as tepid bathwater.

Around the time I started thinking about this talk, my friend sent me an IM:

Do you know why kittens and puppies are so cute?

It's so we don't fucking eat them.

Maybe it was the wine I was drinking or the mass quantity of Cheetos® Puffs™ I was consuming, but what she said both amused me and made me think about designing unintended effects inside of a company. A bit of an oxymoron, perhaps, but I think the best way to get things done in a company isn't to bash it over your employees' heads every few hours, but to instead build an environment that helps foster those effects. Kittens don't wear signs on them that explicitly exclaim "DON'T EAT ME PLS", but perhaps their cuteness helps lead us towards being kitten-carnivorous-averse. Likewise, telling your employees "DON'T BREAK SHIT" might not be the only approach to take.

I work at GitHub, so I'm not privy to what the culture is like at Facebook, but I can take a pretty obvious guess at the external manifestations of their new motto: it means they break fewer APIs on their platform. But the motto is certainly more inward-facing than external-facing. What type of culture does that make for? Can we still move quickly? Are there parts of the product we can still break? Are there things we absolutely can't break? Can we build product in a safer manner?

This talk explores those questions. Specifically I break my talk into three parts: code, internal process in your development team and company, and the talk, discussion, and communication surrounding your process.


I think move fast and break things is fine for many features. But the first step is identifying what you cannot break. These are things like billing code (as much as I'd like to, I probably shouldn't accidentally charge you a million dollars and then email you later with an "oops, sorry!"), upgrades (hardware or software upgrades can always be really dicey to perform), and data migrations (it's usually much harder to rollback data changes).

For the last two years we've been upgrading GitHub's permissions code to be faster, safer, cleaner, and generally better. It's a scary process, though. This is an absolute, 100% can't-ever-break use case. The private repository you pay us for can't suddenly be flipped open to the entire internet because of a bug in our deployed code. A 0.02% failure rate isn't an option; 0% needs to be mandatory.

But we like to move fast. We love to deploy new code incrementally hundreds of times a day. And there's good reason for that: it's safer overall. Incremental deploys are easier to understand and fix than one gigantic deploy once a year. But it lends itself to those small bugs, which, in this permissions case, are unacceptable.

So tests are good to have. This is unsurprising to say in this day and age; everyone generally understands now that testing and continuous integration are absolutely critical to software development. But that's not what's at stake here. You can have the best, most comprehensive test suite in the world, but tests are still different from production.

There are a lot of reasons for this. One is data: you may have flipped some bit (accidentally or intentionally) for some tables for two weeks back in December of 2010, and you've all but forgotten about that today. Or your cognitive model of the system may be idealized. We noticed that while doing our permissions overhaul. We'd have a nice, neat table of all the permissions of users, organizations, teams, public and private repositories, and forks, but we'd notice that the neat table would fall down on very arcane edge cases once we looked at production data.

And that's the rub: you need your tests to pass, of course, but you also need to verify that you don't change production behavior. Think of it as another test suite: for better or worse, the behavior deployed now is the state of the system from your users' perspective. You can then either fix the behavior or update your tests; just make sure you don't break the user experience.

Parallel code paths

One of the approaches we've taken is through the use of parallel code paths.

What happens is this: a request will come in as usual and run the existing (old) code. At the same time (or just right after it executes), we'll also run the new code that we think will be better/faster/harder/stronger (pick one). Once all that's done, return whatever the existing (old) code returns. So, from the user's perspective, nothing has changed. They don't see the effects of the new code at all.

There are some caveats, of course. In this case, we're typically performing read-only operations. If we're doing writes, it takes a bit more smarts: either write your code so it can run both branches safely, roll back the effects of the new code, or make the new code a no-op that goes to a different place entirely. Twitter, for example, has a very service-oriented architecture, so when they're spinning up a new service they redirect traffic and dual-write to the new service so they can measure performance and accuracy, catch bugs, and then throw away the redundant data until they're ready to switch over all traffic for real.

We wrote a Ruby library named Science to help us out with this. You can check it out and run it yourself in the github/dat-science repository. The general idea would be to run it like this:

  science "my-cool-new-change" do |e|
    e.control   { user.existing_slow_method }
    e.candidate { user.new_code_we_think_is_great }
  end

It's just like when you Did Science™ in the lab back in school growing up: you have a control, which is your existing code, and a candidate, which is the new code you want to introduce. The science block makes sure both are run appropriately. The real power happens with what you can do after the code runs, though.
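Conceptually, the harness itself is small. A toy version of what an experiment does on each request (illustration only; the real github/dat-science library also times both blocks and publishes run and mismatch counts to a stats backend):

```ruby
# Toy dat-science-style experiment: run both paths, count mismatches,
# always return the control's result so users never see the new code.
class Experiment
  attr_reader :runs, :mismatches

  def initialize(name)
    @name = name
    @runs = 0
    @mismatches = 0
  end

  def control(&block)
    @control = block
  end

  def candidate(&block)
    @candidate = block
  end

  def run
    @runs += 1
    old_result = @control.call
    new_result = begin
      @candidate.call
    rescue StandardError
      :candidate_raised  # new code must never break the request
    end
    @mismatches += 1 unless new_result == old_result
    old_result  # the user only ever sees the existing behavior
  end
end

e = Experiment.new("my-cool-new-change")
e.control   { [1, 2, 3].sort }
e.candidate { [1, 2, 3].sort.reverse }  # deliberately wrong, to show a mismatch
result = e.run
```

The run and mismatch counters are exactly what gets graphed in the next section.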

We use Graphite literally all over the company. If you haven't seen Coda Hale's Metrics, Metrics Everywhere talk, do yourself a favor and give it a watch. Graphing behavior of your application gives you a ton of insight into your entire system.

Science (and its sister library, github/dat-analysis) can generate a graph of the number of times the code was run (the top blue bar to the left) and compare it to the number of mismatches between the control and the candidate (in red, on the bottom). In this case you see a downward trend: the developer saw that their initial deploy might have missed a couple use cases, and over subsequent deploys and fixes the mismatches decreased to near-zero, meaning that the new code is matching production's behavior in almost all cases.

What's more, we can analyze performance, too. We can look at the average duration of the two code blocks and confirm if the new code we're running is faster, but we can also break down requests by percentile. In the slide to the right, we're looking at the 75th and 99th percentile, i.e. the slowest of requests. In this particular case, our candidate code is actually quite a bit slower than the control: perhaps this is acceptable given the base case, or maybe this should be huge red sirens that the code's not ready to deploy to everyone yet... it depends on the code.

All of this gives you evidence to prove the safety of your code before you deploy it to your entire userbase. Sometimes we'll run these experiments for weeks or months as we whittle down all the — sometimes tricky — edge cases. All the while, we can deploy quickly and iteratively with a pace we've grown accustomed to, even on dicey code. It's a really nice balance of speed and safety.

build into your existing process

Something else I've been thinking a lot about lately is how your approach to building product is structured.

Typically process is added to a company vertically. For example: say your team's been having some problems with code quality. Too many bugs have been slipping into production. What a bummer. One way to address that is to add more process to your process. Maybe you want your lead developers to review every line of code before it gets merged. Maybe you want to add a layer of human testing before deploying to production. Maybe you want a code style audit to give you some assurance of new code maintainability.

These are all fine approaches, in some sense. It's not problematic to want to achieve the goal of clean code; far from it, in fact. But I think this vertical layering of process is really what can get aggravating or just straight-up confusing if you have to deal with it day in, day out.

I think there's something to be said for scaling the breadth of your process. It's an important distinction. By limiting the number of layers of process, it becomes simpler to explain and conceptually understand (particularly for new employees). "Just check continuous integration" is easier to remember than "push your code, ping the lead developers, ping the human testing team, and kick off a code standards audit".

We've been doing more of this lateral process scaling at GitHub informally, but I think there's more to it than even we initially noticed. Since continuous integration is so critical for us, people have been adding more tests that aren't necessarily tests in the classic sense of the word. Instead of "will this code break the application", our tests are more and more measuring "will this code be maintainable and more resilient towards errors in the future".

For example, here are a few tests we've added that don't necessarily have user-facing impact but are considered breaking the build if they go red:

  • Removing a CSS declaration without removing the associated class attribute in the HTML
  • ...and vice versa: removing a class attribute without cleaning up the CSS
  • Adding an <img> tag that's not on our CDN, for performance, security, and scaling reasons
  • Invalid SCSS or CoffeeScript (we use SCSS-Lint and CoffeeLint)
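The class-attribute checks don't need deep tooling; they boil down to cross-referencing two sets of names. A hedged sketch (GitHub's actual implementation isn't public, and a real check would parse the HTML and CSS properly rather than regex them):

```ruby
# Cross-reference class names between HTML and CSS. Returns the classes
# used in markup but never styled, and the selectors never used in markup.
def class_mismatches(html, css)
  html_classes = html.scan(/class="([^"]+)"/).flatten.flat_map(&:split).uniq
  css_classes  = css.scan(/\.([A-Za-z][\w-]*)/).flatten.uniq
  { unstyled: html_classes - css_classes,   # in HTML, missing from CSS
    unused:   css_classes - html_classes }  # in CSS, never used in HTML
end

html   = '<div class="avatar large"></div>'
css    = ".avatar { border-radius: 50%; } .stale { color: red; }"
report = class_mismatches(html, css)
```

Fail the build when either list is non-empty and the gruntwork disappears from code review.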

None of these are world-ending problems: an unspecified HTML class doesn't really hurt you or your users. But from a code quality and maintainability perspective, yeah, it's a big deal in the long term. Instead of having everyone focus on spotting these during code review, why not just shove it in CI and let computers handle the hard stuff? It frees our coworkers up from gruntwork and lets them focus on what really matters.

Incidentally, some of these are super helpful during refactoring. Yesterday I shipped some new dashboards, so today I removed the thousands of lines of code behind the old ones. I could remove the code in bulk, see which tests fail, and then go in and pretty carelessly remove the now-unused CSS. That made it much, much quicker to do because I didn't have to worry about the gruntwork.

And that's what you want. You want your coworkers to think less about bullshit that doesn't matter and spend more consideration on things that do. Think about consolidating your process. Instead of layers, ask if you can merge them into one meeting. Or one code review. Or automate the need away entirely. The layers of process are what get you.


In bigger organizations, the number of people that need to be involved in a product launch grows dramatically. From the designers and developers who actually build it, to the marketing team that tells people about it, to the ops team who scales it, to the lawyers that legalize it™... there are a lot of chefs in the kitchen. If you're releasing anything that a lot of people will see, there's a lot you need to do.

Coordinating that can be tricky.

Apple's an interesting company to take a look at. Over time, a few interesting tidbits have spilled out of Cupertino. The Apple New Product Process (ANPP) is, at its core, a simple checklist. It goes into great detail about the process of releasing a product, from beginning to end, from who's responsible to who needs to be looped into the process before it goes live.

The ANPP tends to live at the very high level of the company (think Tim Cook-level of things), but this type of approach sinks deeper down into individual small teams. Even before a team starts working on something, they might make a checklist to prep for it: do they have appropriate access to development and staging servers, do they have the correct people on the team, and so on. And even though they manage these processes in custom-built software, at its core it's simple: it's a checklist. When you're done with something, you check it off the list. It's easy to collaborate on, and it's easy to understand.

Think back to every single sci-fi movie you've ever watched. When they're about to launch the rocket into space, there's a lot of "Flip MAIN SERIAL BUS A to on". And then the dutiful response: "Roger, MAIN SERIAL BUS A is on". You don't see many responses of, "uh, Houston, I think I'm more happier when MAIN SERIAL BUS A is at like, 43% because SECONDARY SERIAL BUS B is kind of a jerk sometimes and I don't trust goddamn serial busses what the hell is a serial bus anyway yo Houston hook a brother up with a serial limo instead".

And there's a reason for that: checklists remove ambiguity. All the debate happens before something gets added to the checklist... not at the end. That means when you're about to launch your product — or go into space — you should be worrying less about the implementation and rely upon the process more instead. Launches are stressful enough as-is.


Something else that becomes increasingly important as your organization grows is code ownership. If the goal is to have clean, relatively bug-free code, then your process should help foster an environment of responsibility and ownership of your piece of the codebase.

If you break it, you should fix it.

At GitHub, we try to make that connection pretty explicit. If you're deploying your code and your code generates a lot of errors, our open source chatroom robot — Hubot — will notice and will mention you in chat with a friendly message of "hey, you were the last person to deploy and something is breaking; can you take a look at it?". This reiterates the idea that you're responsible for the code that you put out. That's good because, as it turns out, the people who wrote the code are typically the people who can most easily fix it. Beyond that, forcing your coworkers to always clean up your mess is going to really suck over time (for them).

There are plenty of ways to keep people responsible. Google, for example, uses OWNERS files in Chrome. This is a way of making explicit the ownership of a file or entire directories of the project. The format of an actual OWNERS file can be really simple — shout out to simple systems like flat files and checklists — but they serve two really great purposes:

  • They enforce quality. If you're an owner of an area of code, any new contribution to your code requires your signoff. Since you are in a somewhat elevated position of responsibility, it's on you to fight to not allow potentially buggy code into your area.
  • They encourage mentorship. Particularly in open source projects like Chromium, it can be intimidating to make your first contribution. OWNERS files make it explicit who you might want to ask about your code, or even approach for the high-level discussion before you get started.
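To make that concrete, here's a sketch of what a minimal OWNERS file can look like. The names and the per-file rule are invented for illustration; Chromium's documentation has the full syntax.

```
# Any change under this directory needs a sign-off from one of these people.
alice@chromium.org
bob@chromium.org

# Narrower rule: changes to parser.cc only need alice's sign-off.
per-file parser.cc=alice@chromium.org
```

That's the whole trick: a flat file, checked into the tree next to the code it governs, that review tooling can read.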

You can tie your own systems closer together, too. In Haystack, our internal error tracking service at GitHub, we have pretty deep hooks into our code itself. In a controller, for example, we might have code that looks like this:

class BranchesController
  areas_of_responsibility :git

This marks this particular file as being the responsibility of the @github/git team, the team that handles Git-related data and infrastructure. So, when we see a graph in Haystack like the one to the right, we can see that there's something breaking in a particular page, and we can quickly see which teams are responsible for the code that's breaking, since Haystack knows to look into the file with the error and bubble up these areas of responsibility. From here, it's a one-click operation to open an issue on our bug tracker about it, mentioning the responsible team in it so they can fix it.
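If you're curious how a class-level annotation like that works, it's easy to build in Ruby. This is just a sketch of the general idea; GitHub's actual implementation isn't public, and the module and method names here are mine:

```ruby
# A sketch of a class-level "areas_of_responsibility" macro: the class
# method stashes team names on the class, and an error tracker can read
# them back when an exception bubbles up from that controller.
module AreasOfResponsibility
  def areas_of_responsibility(*teams)
    @responsible_teams = teams
  end

  def responsible_teams
    @responsible_teams || []
  end
end

class BranchesController
  extend AreasOfResponsibility
  areas_of_responsibility :git
end

BranchesController.responsible_teams  # => [:git]
```

The metadata lives right next to the code it describes, so the error tracker never has to consult a separate, stale mapping of files to teams.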

Look: bugs do happen. Even if you move fast and break nothing, well, you're still bound to break something at some point. Having a culture of responsibility around your code helps you address those bugs quickly, in an organized manner.

talking & communicating

I've given a lot of talks and written a lot of blog posts about software development and teams and organizations. Probably one way to sum them all up is: more communication. I think companies function better by being more transparent, and if you build your environment correctly, you can end up with better code, a good remote work culture, and happier employees.

But god, more communication means a ton of shit. Emails. Notifications. Text messages. IMs. Videos. Meetings.

If everyone is involved with everything... does anyone really have enough time to actually do anything?

Having more communication is good. Improving your communication is even better.

be mindful of your coworker's time

It's easy to feel like you deserve the time of your coworkers. In some sense, you do: you're trying to improve some aspect of the company, and if your coworker can help out with that, then the whole company is better off. But every interaction comes with a cost: your coworker's time. This is dramatically more important in creative, problem-solving fields like computer science, where being in the zone can mean the difference between a really productive day and a day where every line of code is a struggle to write. Getting pulled out of the zone is jarring, and getting back into that mental state can take a frustratingly long time.

This goes doubly so for companies with remote workers. It's easy to notify a coworker through chat or text message or IM that you need their help with something. Maybe a server went down, or you're having a tough problem with a bug in code you're unfamiliar with. If you're a global company, timezones can become a factor, too. I was talking to a coworker about this, and after enough days of being on-call, she came up with a hilarious idea that I love:

  1. You find you need help with something.
  2. You page someone on your team for help.
  3. They're sleeping. Or out with their kids. Or any level of "enjoying their life".
  4. They check their message and, in doing so, their phone takes a selfie of them and pastes it into the chat room.
  5. You suddenly feel worse.

We haven't implemented this yet (and who knows if we will), but it's a pretty rad thought experiment. If you could see the impact of your actions on your coworker's life, would it change your behavior? Can you build something into your process or your tools that might help with this? It's interesting to think about.

I think this is part of a greater discussion on empathy. And empathy comes in part from seeing real pain. This is why many suggest that developers handle some support threads. A dedicated support team is great, but until you're actually faced with problems up-close, it's easy to avoid these pain points.

institutional teaching

We have a responsibility to be teachers — that this should be a central part of [our] jobs... it's just logic that some day we won't be here.

— Ed Catmull, co-founder of Pixar, Creativity, Inc.

I really like this quote for a couple reasons. For one, this can be meant literally: we're all going to fucking die. Bummer, right? Them's the breaks, kid.

But it also means that people move around. Sometimes people will quit or get fired from the company, and sometimes it just means people moving around the company. The common denominator is that our presence is merely temporary, which means we're obligated, in part, to spread the knowledge we have across the company. This is great for your bottom line, of course, but it's also just a good thing to do. Teaching people around you how to progress in their careers and being a resource for their own growth is a very privileged position to be in, and one we shouldn't take lightly.

So how do we share knowledge without being lame? I'm not going to lie: part of the reason I'm working now is because I don't have to go to school anymore. Classes are so dulllllllllll. So the last thing I want to have to deal with is some formal, stuffy process that ultimately doesn't even serve as a good foundation to teach anyone anything.

Something that's grown out of how we work is a concept we call "ChatOps". @jnewland has a really great talk about the nitty-gritty of ChatOps at GitHub, but in short: it's a way of handling devops and systems-level work at your company in front of others so that problems can be solved and improved upon collaboratively.

If something breaks at a tech company, a traditional process might look something like this:

  1. Something breaks.
  2. Whoever's on-call gets paged.
  3. They SSH into... something.
  4. They fix it... somehow.

There's not a lot of transparency. Even if you discuss it after the fact, the process that gets relayed to you might not be comprehensive enough for you to really understand what's going on. Instead, GitHub and other companies have a flow more like this:

  1. Something breaks.
  2. Whoever's on-call gets paged.
  3. They gather information in a chat room.
  4. They fix it through shared tooling, in that chat room, in front of (or leveraging the help of) other employees.

This brings us a number of benefits. For one, you can learn by osmosis. I'm not on the Ops team, but occasionally I'll stick my head into their chat room and see how they tackle a problem, even if it's a problem I won't face in my normal day-to-day work. I gain context around how they approach problems.

What's more, if others are studying how they tackle a problem in real-time, the process lends itself to improvement. How many times have you sat down to pair program with someone and been blown away by the one or two keystrokes they use to do something that takes you three minutes? If you code in a vacuum, you don't have the opportunity to make quick improvements. If I'm watching you run the same three commands in order to run diagnostics on a server, it's easier as a bystander to think, hey, why don't we wrap those commands up in one command that does it all for us? Those insights can be incredibly valuable and, in time, lead to massive, massive productivity and quality improvements.

This requires some work on tooling, of course. We use Hubot to act as a sort of shared collection of shell scripts that allow us to quickly address problems in our infrastructure. Some of those scripts include hooks into PagerDuty to trigger pages, code that leverages APIs into AWS or our own datacenter, and, of course, scripts that let us file issues and work with our repositories and teams on GitHub. We have hundreds or thousands of commands now, all gradually built up and hardened over time. The result is an incredible amount of tooling around automating our response to potential downtime.
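The core mechanic is simpler than it sounds: a chat message comes in, gets matched against a table of named commands, and the output goes back to the room where everyone can see it. Here's a toy version in Ruby; the command name and the fake diagnostics are made up, and real tooling like Hubot would actually run shell commands over SSH rather than formatting strings:

```ruby
# Toy ChatOps dispatcher: map chat commands to the shell commands an
# operator would otherwise type by hand, one at a time, in private.
COMMANDS = {
  "diagnose" => lambda do |host|
    # Real tooling would run these over ssh and return their output.
    ["uptime", "df -h", "tail -n 20 /var/log/syslog"].map do |cmd|
      "#{host}: #{cmd}"
    end
  end
}

def handle(message)
  name, *args = message.split
  handler = COMMANDS[name]
  handler ? handler.call(*args) : ["unknown command: #{name}"]
end

handle("diagnose web-01").first  # => "web-01: uptime"
```

The point isn't the dispatch table; it's that every invocation and every result is visible to the whole room, which is what makes the learning-by-osmosis possible.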

This isn't limited to just working in chatrooms, though. Recently the wifi broke at our office on a particular floor. We took the same approach to fix it, except it was in a GitHub issue instead of real-time chat. Our engineers working on the problem pasted in screenshots of the status of our routers, the heatmaps of dead zones stemming from the downtime, and eventually traced cables through switches until we found a faulty one, taking photos each step of the way and adding them to the issue so we had a paper trail of which cabinets and components were affected. It's amazing how much you can learn from such a non-invasive process. If I'm not interested in learning the nitty-gritty details, I can skip the thread. But if I do want to learn about it... it's all right there, waiting for me.


The Blue Angels are a United States Navy demonstration flight squadron. They fly in air shows around the world, maneuvering in their tight six-fighter formations 18 inches apart from one another. The talent they exhibit is mind-boggling.

Earlier this year I stumbled on a documentary on their squadron from years back. There's a specific 45-second section in it that really made me think. It describes the process the Blue Angels go through in order to give each other feedback. (That small section of the video is embedded to the left. Watch the whole thing if you want, though, who am I to stop you.)

So first of all, they're obviously and patently completely nuts. The idea that you can give brutally honest feedback without worrying about interpersonal relationships is, well, not really relevant to the real world. They're superhuman. It's not every day you can tell your boss that she fucked up and skip out of the meeting humming your favorite tune without fear of repercussions. So they're nuts. But it does make sense: a mistake at their speeds and altitude is almost certainly fatal. A mistake for us, while writing software that helps identify which of your friends liked that status update about squirrels, is decidedly less fatal.

But it's still a really interesting ideal to look up to. They do feedback and retrospectives that take twice as long as the actual event itself. And they take their job of giving and receiving feedback seriously. How can we translate this idealized version of feedback in our admittedly-less-stressful gigs?

Part of this is just getting better at receiving feedback. I'm fucking horrible at this. You do have to have a bit of a thicker skin. And it sucks! No one wants to spend a few hours — or days, or months — working on something, only to inevitably get the drive-by commenter who finds a flaw in it (either real or imagined). It's sometimes difficult to not take that feedback personally. That you failed. That you're not good enough to get it perfect on the first or second tries. It's funny how quickly we forget how iterative software development is, and that computers are basically stacking the deck against us to never get anything correct on the first try.

Taking that into account, though, it becomes clear how important giving good feedback is. And sometimes this is just as hard to do. I mean, someone just pushed bad code! To your project! To your code! I mean, you're even in the damn OWNERS file! The only option is to rain fire and brimstone and hate and loathing on this poor sod, the depths of which will cause him to surely think twice about committing such horrible code and, if you're lucky, he'll quit programming altogether and become a dairy farmer instead. Fuck him!

Of course, this isn't a good approach to take. Almost without fail, if someone's changing code, they have a reason for it. It may not be a good reason, or the implementation might be suspect, but it's a reason nonetheless. Being cognizant of that can go a long way towards pointing them in the right direction. How you piece your words together is terribly important.

And this is something you should at least think about, if not explicitly codify across your whole development team. What do you consider good feedback? How can you promote understanding and positive approaches in your criticism of the code? How can you help the submitter learn and grow from this scenario? Unfortunately, these questions don't get asked enough, which creates a self-perpetuating cycle of cynicism and aggressive discussion.

That sucks. Do better.

move fast with a degree of caution

Building software is hard. Because yeah, moving quickly means you can do more for people. It means you can see what works and what doesn't work. It means your company sees real progress quicker.

But sometimes it's just as important to know what not to break. And when you work on those things, you change how you operate so that you can try to get the best of both worlds.

Also, try flying fighter jets sometimes. It can't be that hard.

October 03, 2014 12:00 AM

October 01, 2014

Greg Linden

Quick links

What caught my attention lately:
  • 12% of Harvard is enrolled in CS 50: "In pretty much every area of study, computational methods and computational thinking are going to be important to the future" ([1])

  • Excellent "What If?" nicely shows the value of back-of-the-envelope calculations and re-thinking what exactly it is you want to do ([1])

  • The US has almost no competition, only local monopolies, for high speed internet ([1] [2])

  • You can't take two large, dysfunctional, underperforming organizations, mash them together, and somehow make diamonds. When you take two big messes and put them together, you just get a bigger mess. ([1])

  • "Yahoo was started nearly 20 years ago as a directory of websites ... At the end of 2014, we will retire the Yahoo Directory." ([1] [2])

  • Investors think that Yahoo is essentially worthless ([1])

  • "At a moment when excitement about the future of robotics seems to have reached an all-time high (just ask Google and Amazon), Microsoft has given up on robots" ([1])

  • "Firing a bunch of tremendously smart and creative people seems misguided. But hey—at least they own Minecraft!" ([1])

  • "Macs still work basically the same way they did a decade ago, but iPhones and iPads have an interface that's specifically designed for multi-touch screens" ([1] [2])

  • On the difficulty of doing startups ([1] [2])

  • "Be glad some other sucker is fueling the venture capital fire" ([1])

  • "Just how antiquated the U.S. payments system has become" ([1])

  • Is everyone grabbing money from online donations to charities? Visa's card fee for charities is only 1.35%, but the cheapest online payment system for charities charges 2.2%, and most charge much more than that. ([1])

  • "For most people, the risk of data loss is greater than the risk of data theft" ([1])

  • Password recovery "security questions should go away altogether. They're so dangerous that many security experts recommend filling in random gibberish instead of real answers" ([1])

  • Brilliantly done, free, open source, web-based puzzle game with wonderfully dark humor about ubiquitous surveillance ([1])

  • How Udacity does those cool transparent hands in its videos ([1])

  • There's just a bit of interference when you move your hand above the phone, just enough interference to detect gestures without using any additional power or sensors ([1] [2])

  • Small, low power wireless devices powered by very small fluctuations in temperature ([1] [2])

  • Cute intuitive interface for transferring data between PC and mobile ([1] [2])

  • "Federal funding for biomedical research [down 20%] ... forcing some people out of science altogether" ([1])

  • Another fun example of virtual tourism ([1])

  • Ig Nobel Prizes: "Dogs prefer to align themselves to the Earth's north-south magnetic field while urinating and defecating" ([1])

  • Xkcd: "In CS, it can be hard to explain the difference between the easy and the virtually impossible" ([1] [2])

  • Dilbert: "That process sounds like a steaming pile of stupidity that will beat itself to death in a few years" ([1])

  • Dilbert on one way to do job interviews ([1])

  • The Onion: "Startup Very Casual About Dress Code, Benefits" ([1])

  • Hilarious South Park episode, "Go Fund Yourself", makes fun of startups ([1])

by Greg Linden at October 01, 2014 07:08 PM

September 22, 2014

Giles Bowkett

A Pair of Quick Animations

Five seconds or less, done in Adobe After Effects.

I made the music for this one. No special effects, just shapes, luma masks, and blending modes.

This one is mostly special effects.

by Giles Bowkett at September 22, 2014 10:40 AM

September 20, 2014

Giles Bowkett

Why Scrum Should Basically Just Die In A Fire

Conversations with Panda Strike CEO Dan Yoder inspired this blog post.

Scrum, the Agile methodology allegedly favored by Google and Spotify, is a mess.

Consider story points. If you're not familiar with Scrum, here's how they work: you play a game called "Planning Poker," where somebody calls out a task, and then counts down from three to one. On one, the engineers hold up a card with the number of "story points" which represents the relative cost they estimate for performing the task.

So, for example, a project manager might say "integrating our login system with OpenAuth and Bitcoin," and you might put up the number 20, because it's the maximum allowable value.

Wikipedia describes the goal of this game:

The reason to use Planning Poker is to avoid the influence of the other participants. If a number is spoken, it can sound like a suggestion and influence the other participants' sizing. Planning Poker should force people to think independently and propose their numbers simultaneously. This is accomplished by requiring that all participants show their card at the same time.

I have literally never seen Planning Poker performed in a way which fails to undermine this goal. Literally always, as soon as every engineer has put up a particular number, a different, informal game begins. If it had a name, this informal game would be called something like "the person with the highest status tells everybody else what the number is going to be." If you're lucky, you get a variant called "the person with the highest status on the dev team tells everybody else what the number is going to be," but that's as good as it gets.

Wikipedia gives the alleged structure of this process:
  • Everyone calls their cards simultaneously by turning them over.
  • People with high estimates and low estimates are given a soap box to offer their justification for their estimate and then discussion continues.
  • Repeat the estimation process until a consensus is reached. The developer who was likely to own the deliverable has a large portion of the "consensus vote", although the Moderator can negotiate the consensus.
  • To ensure that discussion is structured; the Moderator or the Project Manager may at any point turn over the egg timer and when it runs out all discussion must cease and another round of poker is played. The structure in the conversation is re-introduced by the soap boxes.
In practice, this "soap box" usually consists of nothing more than questions like "20? Really?". And I've never seen the whole "rinse and repeat" aspect of Planning Poker actually happen; usually, the person with lower status simply agrees to whatever the person with higher status wants the number to be.

In fairness to everybody who's tried this process and seen it fail, how could it not devolve? A nontechnical participant has, at any point, the option to pull out an egg timer and tell technical participants "thirty seconds or shut the fuck up." This is not a process designed to facilitate technical conversation; it's so clearly designed to limit such conversation that it almost seems to assume that any technical conversation is inherently dysfunctional.

It's ironic to see conversation-limiting devices built into Agile development methodologies, when one of the core principles of the Agile Manifesto is the idea that "the most efficient and effective method of conveying information to and within a development team is face-to-face conversation," but I'll get to that a little later on.

For now, I want to point out that Planning Poker isn't the only aspect of Scrum which, in my experience, seems to consistently devolve into something less useful. Another core piece of Scrum is the standup meeting.

You probably know this, but just in case, the idea is that the team for a particular project gathers daily, for a quick, 15-minute meeting. This includes devs, QA, project manager(s), designers, and anyone else who will be working to make the project succeed, or who needs to stay up-to-date with the project's progress. The standup's designed to counter an even older tradition of long, stultifying, mandatory meetings, where a few people talk, and everybody else loses their whole day to no benefit whatsoever. Certainly, if you've got that going on in your company, anything which gets rid of it is an improvement.

However, as with Planning Poker, the 15-minute standup decays very easily. I've twice seen the "15-minute standup" devolve into half-hour or hour-long meetings where everybody stands, except for management.

At one company, a ponderous, overcomplicated web app formed the centerpiece of the company's Scrum implementation. Somebody had to sit to operate this behemoth, and since that was an informal privilege, it usually went to whoever could command it. In other words: management.

At another company, the Scrum decay took a different route. As with the egg timer in Planning Poker, Scrum standups offer an escape clause. In standups, you can defer discussion of involved topics to a "parking lot," which is where an issue lands if it's too complex to fit within the meeting's normal 15-minute parameters (which also include some constraints on what you can discuss, to prevent talkative or unfocused people from over-lengthening the meeting).

At this second company, virtually everything landed in the parking lot, and it became normal for the 15-minute standup to be a 15-minute prelude to a much longer meeting. We'd just set the agenda during the standup, and the parking lot would be the actual meeting. These standups typically took place in a particular person's office. Since arriving at the parking lot meant the standup was over, that person, whose office we were in, would feel OK about sitting down in their own, personal chair. But the office wasn't big enough to bring any new chairs into, so everyone else had to stand. The person whose office we were always in? A manager.

Scrum's standups are designed to counteract an old tradition of overly long, onerous, dull meetings. However, at both these companies, they replaced that ancient tradition with a new tradition of overly long, onerous, dull meetings where management got to sit down, and everybody else had to stand. Scrum's attempt at creating a more egalitarian process backfired, twice, in each case creating something more authoritarian instead.

To be fair to Scrum, it's not intended to work that way, and there's an entire subgenre of "Agile coaching" consultants whose job is to repair broken Scrum implementations at various companies. This is pure opinion, but my guess is that's a very lucrative market, because as far as I can tell, Scrum implementations often break.

I recommend just skimming the first few seconds of this.

Scrum's ready devolution springs from major conceptual flaws.

Scrum's an Agile development methodology, and one of its major goals is sustainable development. However, it works by time-boxing efforts into iterations of a week or two in length, and refers to these iterations as "sprints." Time-boxed iterations are very useful, but there's a fundamental cognitive dissonance between "sprints" and "sustainable development," because there is no such thing as a sustainable sprint.

This man's pace is probably not optimized for sustainability.

Likewise, your overall list of goals, features, and work to accomplish is referred to as the "backlog." This is true even on a greenfield project. On day 1, you have a backlog.

Another core idea of the Agile Manifesto, the allegedly defining document for Agile development methodologies: "working software is the primary measure of progress." Scrum disregards this idea in favor of a measure of progress called "velocity." Basically, velocity is the number of "story points" successfully accomplished divided by the amount of time it took to accomplish them.
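The arithmetic is trivial, which is part of the problem. Here it is for two hypothetical teams doing identical work, where one team simply estimates on a more generous scale (all the numbers are invented):

```ruby
# Velocity as usually computed: story points completed per iteration.
# Same work, different estimation scales, wildly different "velocity".
def velocity(points_completed, iterations)
  points_completed.to_f / iterations
end

team_a = velocity(40, 2)    # conservative estimators => 20.0
team_b = velocity(120, 2)   # generous estimators, same work => 60.0
team_b / team_a             # => 3.0, and it means nothing
```

Team B looks three times as productive, having shipped exactly the same software. That ratio measures estimation habits, not output.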

As I mentioned at the top of the post, a lot of this thinking comes from conversations with my new boss, Panda Strike CEO Dan Yoder. Dan told me he's literally been in meetings where non-technical management said things like, "well, you got through [some number] story points last week, and you only got through [some smaller number] this week, and coincidentally, I noticed that [some developer's name] left early yesterday, so it looks pretty easy who to blame."

Of course, musing, considering, mulling things over, and coming to realizations all constitute a significant amount of the actual work in programming. It is impossible to track whether these realizations occur in the office or in the shower. Anecdotally, it's usually the shower. Story points, meanwhile, are completely made-up numbers designed to capture off-the-cuff estimates of relative difficulty. Developers are explicitly encouraged to think of story points as non-binding numbers, yet velocity turns those non-binding estimates into a number they can be held accountable for, and which managers often treat as a synonym for productivity. "Agile" software exists to track velocity, as if it were a meaningful metric, and to compare the relative velocity of different teams within the same organization.

This is an actual thing which sober adults do, on purpose, for a living.

"Velocity" is really too stupid to examine in much further detail, because it very obviously disregards this whole notion of "working software as a measure of progress" in favor of completely unreliable numbers based on almost nothing. (I'm not proud to admit that I've been on a team where we spent an entire month to build an only mostly-functional shopping cart, but I suppose it's some consolation that our velocity was acceptable at the time.)

But, just to be clear, one of velocity's many flaws is that different teams are likely to make different off-the-cuff estimates, as are different members of the same team. Because of this, you can only really garner anything approaching meaningful insight from these numbers if you compare the ratio of estimated story points to accomplished story points on a per-team, per-week basis. Or, indeed, a per-individual, per-week one. And even then, you're more likely to learn something about a team's or individual's ability to make ballpark estimates than their actual productivity.

Joel Spolsky has an old but interesting blog post about a per-individual, velocity-like metric based on actually using math like a person who understands it, not a person who regards it as some kind of incomprehensible yet infallible magic. However, if there's anything worth keeping in the Agile Manifesto, it's the idea that working software is the primary measure of progress. Indeed, that's the huge, hilarious irony at the center of this bizarre system of faux accountability: with the exception of a few Heisenbugs, engineering work is already inherently more accountable than almost any other kind of work. If you ask for a feature, your team will either deliver it, or fail to deliver it, and you will know fairly rapidly.

If you're tracking velocity, your best-case scenario will be that management realizes it means nothing, even though they're tracking it anyway, which means spending money and time on it. This useless expense is what Andy Hunt and Dave Thomas termed a broken window in their classic book The Pragmatic Programmer - a sign of careless indifference, which encourages more of the same. That's not what you want to have in your workplace.

Sacrificing "working software as a measure of progress" to meaningless numbers that your MBAs can track for no good reason is a pretty serious flaw in Scrum. It implies that Scrum's loyalty is not to the Agile Manifesto, nor to working software, nor high-quality software, nor even the success of the overall team or organization. Scrum's loyalty, at least as it pertains to this design decision, is to MBAs who want to point at numbers on a chart, whether those numbers mean anything or not.

I've met very nice MBAs, and I hope everyone out there with an MBA gets to have a great life and stay employed. However, building an entire software development methodology around that goal is, in my opinion, a silly mistake.

The only situation I can think of where a methodology like Scrum could have genuine usefulness is on a rescue engagement, where you're called in as a consultant to save a failing project. In a situation like this, you can track velocity on a team basis to show your CEO client that development's speeding up. Meanwhile, you work on the real question, which is who to fire, because that's what nearly every rescue project comes down to.

In other words, in its best-case scenario, Scrum's a dog-and-pony show. But that best-case scenario is rare. In the much more common case, Scrum covers up the inability to recruit (or even recognize) engineering talent, which is currently one of the most valuable things in the world, with a process for managing engineers as if they were cogs in a machine, all of equal value.

And one of the most interesting things about Scrum is that it tries to enhance the accountability of a field of work where both failure and success are obvious to the naked eye - yet I've never encountered any similarly elaborate system of rituals whose major purpose is to enhance the accountability of fields which have actual accountability problems.

Although marketing is becoming a very data-driven field, and although this sea change began long before the Web existed at all - Dan Kennedy's been writing about data-driven marketing since at least the 1980s - it's still a fact that many marketers do totally unaccountable work that depends entirely on public perception, mood, and a variety of other factors that are inherently impossible to measure. The oldest joke in marketing: "only half my advertising works, but I don't know which half."

And you never will.

YouTube ads have tried to sell me a service to erase the criminal record I don't have. They've reminded me to use condoms during the gay sex that I don't have either. They've also tried to get me to buy American trucks and country music, neither of which will ever happen. No disrespect to the gay ex-convicts out there who do like American trucks and country music, assuming for the sake of argument that this demographic even exists, it's just not my style. Similarly, Facebook's "targeted" ads usually come from politicians I dislike, and Google's state-of-the-art, futuristic, probabilistic, "best-of-breed" ads are worse. The only time they try to sell me anything I even remotely want is when I've researched something expensive but decided not to buy it yet. Then the ad follows me around every web site I visit for the next month.

Please buy it. Please. You looked at it once.

Even in 2014, marketing involves an element of randomness, and probably always will, until the end of time.

Anyway, Scrum gives you demeaning rituals to dumb down your work so that people who will never understand it can pretend to understand it. Meanwhile, work which is genuinely difficult to track doesn't have to deal with this shit.


I don't think highly of Scrum, but the problem here goes deeper. The Agile Manifesto is flawed too. Consider this core principle of Agile development: "business people and developers must work together."

Why are we supposed to think developers are not business people?

If you join (or start) a startup, you may have to do marketing before your company can hire a marketing person. The same is true for accounting, for sales, for human resources, and for just about anything that any reasonable person would call business. You're in a similar situation if you freelance or do consulting. You're definitely in a better position for any of these things if you hire someone who knows what they're doing, of course, but there's a large number of developers who are also business people.

Perhaps more importantly, if you join or start a startup, you can knock the engineering out of the park and still end up flat fucking broke if the marketing people don't do a good job. But you're probably not going to demand that your accountants or your marketing people jump through bizarre, condescending hoops every day. You're just going to trust them to do their jobs.

This is a reasonable way to treat engineers as well.

By the way, despite that little Dilbert strip a few paragraphs above, my job title at Panda Strike is Minister of Propaganda. I'm basically the Director of Marketing, except that to call yourself a Director of Marketing is itself very bad marketing when you want to communicate with developers, who traditionally mistrust marketing for various reasons (many of them quite legitimate). This is the same reason the term "growth hacker" exists, but as a job title, that phrase just reeks of dishonesty. So I went with Minister of Propaganda to acknowledge the vested interest I have in saying things which benefit my company.

However, despite having marketing responsibilities, my first act upon joining Panda Strike was to write code which evaluates code. I tweaked my git analysis scripts to produce detailed profiles of the history of many of the company's projects, both open source and internal products, so that I could get a very specific picture of how development works at Panda Strike, and how our projects have been built, and who built them, and when, and with which technologies, and so on.
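The kind of git-log profiling described above can be sketched in a few lines. The actual scripts aren't published, so this is a hypothetical, minimal reconstruction: it assumes log output produced with `git log --pretty='--%an' --name-only` and tallies commits per author per file extension, which is enough to see who wrote the JavaScript and who wrote the Ruby.

```python
# Hypothetical sketch of git-log profiling: parse the output of
#   git log --pretty='--%an' --name-only
# into commit counts per (author, file extension) pair.
from collections import Counter

def profile_log(log_text):
    """Return a Counter of (author, extension) pairs from annotated log text."""
    counts = Counter()
    author = None
    for line in log_text.splitlines():
        line = line.strip()
        if line.startswith('--'):
            # A '--Author Name' marker introduces a new commit.
            author = line[2:]
        elif line and author:
            # Every non-blank line after a marker is a touched file path.
            ext = line.rsplit('.', 1)[-1] if '.' in line else '(none)'
            counts[(author, ext)] += 1
    return counts

# Tiny fabricated sample in the --pretty='--%an' --name-only shape:
sample = """--Alice
app/models/user.rb
app/assets/app.js

--Bob
app/assets/app.js

--Alice
lib/tasks/report.rb
"""
print(profile_log(sample).most_common(3))
```

Run over a real repository's log, the top of `most_common()` is exactly the "who built what, in which language" picture described above.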

As an aside, I first developed this technique on a Rails rescue project. It was the first thing I did on the project, but the CTO, having an arrogant and aloof attitude, had no idea. So on my first day, after I did this work, he introduced me to the rest of the team, telling me their names, but nothing else about them. But I recognized the names from my analysis of the git log. I noticed that the number one JavaScript committer had a cynical and sarcastic expression, that most of the team had three commits or less, and that the number one Ruby committer wasn't anywhere in the building.

This CTO who had told me nothing then said to me, "OK, dazzle me." As you can imagine, I did not dazzle him. I fired him. (Or, more accurately, I and my colleagues persuaded his CEO to fire him.)

Anyway, the whole point of this is simple: there's absolutely no reason to assume that a developer is not a business person. It's a ridiculous assumption, and the world is full of incredibly successful counterexamples.

The Agile Manifesto might also be to blame for the Scrum standup. It states that "the most efficient and effective method of conveying information to and within a development team is face-to-face conversation." In fairness to the manifesto's authors, it was written in 2001, and at that time git log did not yet exist. However, in light of today's toolset for distributed collaboration, it's another completely implausible assertion, and even back in 2001 you had to kind of pretend you'd never heard of Linux if you really wanted it to make sense.

Well-written text very often trumps face-to-face communication. You can refer to well-written text later, instead of relying on your memory. You can't produce well-written text unless you think carefully. Also, technically speaking, you can literally never produce good code in the first place unless you produce well-written text. There are several great presentations from GitHub on the value of asynchronous communication, and they're basically required viewing for anybody who wants to work as a programmer, or with programmers.

In fact, GitHub itself was built without face-to-face communication. Basecamp was built without face-to-face communication as well. I'm not saying these people never met each other, but most of the work was done remotely. Industrial designer Marc Newson works remotely for Apple, so his work on the Apple Watch may also have happened without face-to-face communication. And face-to-face communication plays a minimal role in the majority of open source projects, which usually outperform commercial projects in terms of software quality.

In addition to defying logic and available evidence, both these Agile Manifesto principles encourage a kind of babysitting mentality. I've never seen Scrum-like frameworks for transmuting the work of designers, marketers, or accountants into cartoonish oversimplifications like story points. People are happy to treat these workers as adults and trust them to do their jobs.

I don't know why this same trust does not prevail in the culture of managing programmers. That's a question for another blog post. I suspect that the reasons are historical, and fundamentally irrelevant, because it really doesn't matter. If you're not doing well at hiring engineers, the answer is not a deeply flawed methodology which collapses under the weight of its own contradictions on a regular basis. The answer is to get better at hiring engineers, and ultimately to get great at it.

I may do a future blog post on this, because it's one of the most valuable skills in the world.

Credit where credit's due: the Agile Manifesto helped usher in a vital paradigm shift, in its day.

by Giles Bowkett at September 20, 2014 06:56 PM

Decyphering Glyph


I am not an engineer.

I am a computer programmer. I am a software developer. I am a software author. I am a coder.

I program computers. I develop software. I write software. I code.

I’d prefer that you not refer to me as an engineer, but this is not an essay about how I’m going to heap scorn upon you if you do so. Sometimes, I myself slip and use the word “engineering” to refer to this activity that I perform. Sometimes I use the word “engineer” to refer to myself or my peers. It is, sadly, fairly conventional to refer to us as “engineers”, and avoiding this term in a context where it’s what everyone else uses is a constant challenge.

Nevertheless, I do not “engineer” software. Neither do you, because nobody has ever known enough about the process of creating software to “engineer” it.

According to, “engineering” is:

“the art or science of making practical application of the knowledge of pure sciences, as physics or chemistry, as in the construction of engines, bridges, buildings, mines, ships, and chemical plants.”

When writing software, we typically do not apply “knowledge of pure sciences”. Very little science is germane to the practical creation of software, and the places where it is relevant (firmware for hard disks, for example, or analytics for physical sensors) are highly rarefied. The one thing that we might sometimes use that is called “science”, i.e. computer science, is a subdiscipline of mathematics, and not a science at all. Even computer science, though, is hardly ever brought to bear - if you’re a working programmer, what was the last project where you had to submit formal algorithmic analysis for any component of your system?

Wikipedia has a heaping helping of criticism of the terminology behind software engineering, but rather than focusing on that, let's see where Wikipedia tells us software engineering comes from in the first place:

The discipline of software engineering was created to address poor quality of software, get projects exceeding time and budget under control, and ensure that software is built systematically, rigorously, measurably, on time, on budget, and within specification. Engineering already addresses all these issues, hence the same principles used in engineering can be applied to software.

Most software projects fail; as of 2009, 44% are late, over budget, or out of specification, and an additional 24% are cancelled entirely. Only a third of projects succeed according to those criteria of being on time, under budget, within specification, and complete.

What would that look like if another engineering discipline had that sort of hit rate? Consider civil engineering. Would you want to live in a city where almost a quarter of all the buildings were simply abandoned half-constructed, or fell down during construction? Where almost half of the buildings were missing floors, had rents in the millions of dollars, or both?

My point is not that the software industry is awful. It certainly can be, at times, but it’s not nearly as grim as the metaphor of civil engineering might suggest. Consider this: despite the statistics above, is using a computer today really like wandering through a crumbling city where a collapsing building might kill you at any moment? No! The social and economic costs of these “failures” are far lower than most process consultants would have you believe. In fact, the cause of many such “failures” is a clumsy, ham-fisted attempt to apply engineering-style budgetary and schedule constraints to a process that looks nothing whatsoever like engineering. I have to use scare quotes around “failure” because many of these projects classified as failed have actually delivered significant value. For example, if the initial specification for a project is overambitious due to lack of information about the difficulty of the tasks involved – an extremely common problem at the beginning of a software project – that would still be a failure according to the metric of “within specification”, but it’s a problem with the specification and not the software.

Certain missteps notwithstanding, most of the progress in software development process improvement in the last couple of decades has been in acknowledging that it can’t really be planned very far in advance. Software vendors now have to constantly present works in progress to their customers, because the longer they go without doing so, the greater the risk that the software will not meet the somewhat arbitrary goals for being “finished”, and may never be presented to customers at all.

The idea that we should not call ourselves “engineers” is not a new one. It is a minority view, but I’m in good company in that minority. Edsger W. Dijkstra points out that software presents what he calls “radical novelty” - it is too different from all the other types of things that have come before to try to construct it by analogy to those things.

One of the ways in which writing software is different from engineering is the matter of raw materials. Skyscrapers and bridges are made of steel and concrete, but software is made out of feelings. Physical construction projects can be made predictable because the part where creative people are creating the designs - the part of that process most analogous to software - is a small fraction of the time required to create the artifact itself.

Therefore, in order to create software you have to have an “engineering” process that puts its focus primarily upon the psychological issue of making your raw materials - the brains inside the human beings you have acquired for the purpose of software manufacturing - happy, so that they may be efficiently utilized. This is not a common feature of other engineering disciplines.

The process of managing the author’s feelings is a lot more like what an editor does when “constructing” a novel than what a foreperson does when constructing a bridge. In my mind, that is what we should be studying, and modeling, when trying to construct large and complex software systems.

Consequently, not only am I not an engineer, I do not aspire to be an engineer, either. I do not think that it is worthwhile to aspire to the standards of another entirely disparate profession.

This doesn’t mean we shouldn’t measure things, or have quality standards, or try to agree on best practices. We should, by all means, have these things, but we authors of software should construct them in ways that make sense for the specific details of the software development process.

While we are on the subject of things that we are not, I’m also not a maker. I don’t make things. We don’t talk about “building” novels, or “constructing” music, nor should we talk about “building” and “assembling” software. I like software specifically because of all the ways in which it is not like “making” stuff. Making stuff is messy, and hard, and involves making lots of mistakes.

I love how software is ethereal, and mistakes are cheap and reversible, and I don’t have any desire to make it more physical and permanent. When I hear other developers use this language to talk about software, it makes me think that they envy something about physical stuff, and wish that they were doing some kind of construction or factory-design project instead of making an application.

The way we use language affects the way we think. When we use terms like “engineer” and “builder” to describe ourselves as creators, developers, maintainers, and writers of software, we are defining our role by analogy and in reference to other, dissimilar fields.

Right now, I think I prefer the term “developer”, since the verb develop captures both the incremental creation and ongoing maintenance of software, which is so much a part of any long-term work in the field. The only disadvantage of this term seems to be that people occasionally think I do something with apartment buildings, so I am careful to always put the word “software” first.

If you work on software, whichever particular phrasing you prefer, pick one that really calls to mind what software means to you, and don’t get stuck in a tedious metaphor about building bridges or cars or factories or whatever.

To paraphrase a wise man:

I am developer, and so can you.

by Glyph at September 20, 2014 05:00 AM

September 18, 2014

Simplest Thing (Bill Seitz)

Start a Movement! Lead a Tribe! – But Only as a Last Resort | OnTheSpiral


Gregory Rader on the danger of jumping into starting a movement/tribe without actually having a solution to spread…

September 18, 2014 02:59 PM

September 17, 2014

Decyphering Glyph

The Most Important Thing You Will Read On This Blog All Year


I have two PGP keys.

One, 16F13480, is signed by many people in the open source community.  It is a
4096-bit RSA key.

The other, 0FBC4A07, is superficially worse.  It doesn't have any signatures on
it.  It is only a 3072-bit RSA key.

However, I would prefer that you all use 0FBC4A07.

16F13480 lives encrypted on disk, and occasionally resident in memory on my
personal laptop.  I have had no compromises that I'm aware of, so I'm not
revoking the key - I don't want to lose all the wonderful trust I build up.  In
order to avoid compromising it in the future, I would really prefer to avoid
decrypting it any more often than necessary.

By contrast, aside from backups which I have not yet once had occasion to
access, 0FBC4A07 exists only on an OpenPGP smart card, it requires a PIN, it is
never memory resident on a general purpose computer, and is only plugged in
when I'm actively Doing Important Crypto Stuff.  Its likelyhood of future
compromise is *extremely* low.

If said smart card had supported 4096-bit keys I probably would have just put
the old key on the more secure hardware and called it a day.  Sadly, that is
not the world we live in.

Here's what I'd like you to do, if you wish to interact with me via GnuPG:

    $ gpg --recv-keys 0FBC4A07 16F13480
    gpg: requesting key 0FBC4A07 from hkp server
    gpg: requesting key 16F13480 from hkp server
    gpg: key 0FBC4A07: "Matthew "Glyph" Lefkowitz (OpenPGP Smart Card) <>" 1 new signature
    gpg: key 16F13480: "Matthew Lefkowitz (Glyph) <>" not changed
    gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
    gpg: depth: 0  valid:   2  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 2u
    gpg: next trustdb check due at 2015-08-18
    gpg: Total number processed: 2
    gpg:              unchanged: 1
    gpg:         new signatures: 1
    $ gpg --edit-key 16F13480
    gpg (GnuPG/MacGPG2) 2.0.22; Copyright (C) 2013 Free Software Foundation, Inc.
    This is free software: you are free to change and redistribute it.
    There is NO WARRANTY, to the extent permitted by law.

    gpg: checking the trustdb
    gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
    gpg: depth: 0  valid:   2  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 2u
    gpg: next trustdb check due at 2015-08-18
    pub  4096R/16F13480  created: 2012-11-16  expires: 2016-04-12  usage: SC
                         trust: unknown       validity: unknown
    sub  4096R/0F3F064E  created: 2012-11-16  expires: 2016-04-12  usage: E
    [ unknown] (1). Matthew Lefkowitz (Glyph) <>

    gpg> disable

    gpg> save
    Key not changed so no update needed.

If you're using keybase, "keybase encrypt glyph" should be pointed at the
correct key.

Thanks for reading,

- -glyph


by Glyph at September 17, 2014 05:14 AM

September 15, 2014

Decyphering Glyph

The Horizon

Sometimes, sea sickness is caused by a sort of confusion. Your inner ear can feel the motion of the world around it, but if your eyes can’t see the outside world, your brain can’t reconcile your visual input with your proprioceptive input, and the result is nausea.

This is why it helps to see the horizon. If you can see the horizon, your eyes will talk to your inner ears by way of your brain, they will realize that everything is where it should be, and that everything will be OK.

As a result, you’ll stop feeling sick.


I have a sort of motion sickness too, but it’s not seasickness. Luckily, I do not experience nausea due to moving through space. Instead, I have a sort of temporal motion sickness. I feel ill, and I can’t get anything done, when I can’t see the time horizon.

I think I’m going to need to explain that a bit, since I don’t mean the end of the current fiscal quarter. I realize this doesn’t make sense yet. I hope it will, after a bit of an explanation. Please bear with me.

Time management gurus often advise that it is “good” to break up large, daunting tasks into small, achievable chunks. Similarly, one has to schedule one’s day into dedicated portions of time where one can concentrate on specific tasks. This appears to be common sense. The most common enemy of productivity is procrastination. Procrastination happens when you are working on the wrong thing instead of the right thing. If you consciously and explicitly allocate time to the right thing, then chances are you will work on the right thing. Problem solved, right?

Except, that’s not quite how work, especially creative work, works.

ceci n’est pas un task

I try to be “good”. I try to classify all of my tasks. I put time on my calendar to get them done. I live inside little boxes of time that tell me what I need to do next. Sometimes, it even works. But more often than not, the little box on my calendar feels like a little cage. I am inexplicably, involuntarily, haunted by disruptive visions of the future that happen when that box ends.

Let me give you an example.

Let’s say it’s 9AM Monday morning and I have just arrived at work. I can see that at 2:30PM, I have a brief tax-related appointment. The hypothetical person doing my hypothetical taxes has an office that is a 25 minute hypothetical walk away, so I will need to leave work at 2PM sharp in order to get there on time. The appointment will last only 15 minutes, since I just need to sign some papers, and then I will return to work. With a 25 minute return trip, I should be back in the office well before 4, leaving me plenty of time to deal with any peripheral tasks before I need to leave at 5:30. Aside from an hour break at noon for lunch, I anticipate no other distractions during the day, so I have a solid 3 hour chunk to focus on my current project in the morning, an hour from 1 to 2, and an hour and a half from 4 to 5:30. Not an ideal day, certainly, but I have plenty of time to get work done.

The problem is, as I sit down in front of my nice, clean, empty text editor to sketch out my excellent programming ideas with that 3-hour chunk of time, I will immediately start thinking about how annoyed I am that I’m going to get interrupted in 5 and a half hours. It consumes my thoughts. It annoys me. I unconsciously attempt to soothe myself by checking email and getting to a nice, refreshing inbox zero. Now it’s 9:45. Well, at least my email is done. Time to really get to work. But now I only have 2 hours and 15 minutes, which is not as nice of an uninterrupted chunk of time for a deep coding task. Now I’m even more annoyed. I glare at the empty window on my screen. It glares back. I spend 20 useless minutes doing this, then take a 10-minute coffee break to try to re-set and focus on the problem, and not this silly tax meeting. Why couldn’t they just mail me the documents? Now it’s 10:15, and I still haven’t gotten anything done.

By 10:45, I manage to crank out a couple of lines of code, but the fact that I’m going to be wasting a whole hour with all that walking there and walking back just gnaws at me, and I’m slogging through individual test-cases, mechanically filling docstrings for the new API and for the tests, and not really able to synthesize a coherent, holistic solution to the overall problem I’m working on. Oh well. It feels like progress, albeit slow, and some days you just have to play through the pain. I struggle until 11:30 at which point I notice that since I haven’t been able to really think about the big picture, most of the test cases I’ve written are going to be invalidated by an API change I need to make, so almost all of the morning’s work is useless. Damn it, it’s 2014, I should be able to just fill out the forms online or something, having to physically carry an envelope with paper in it ten blocks is just ridiculous. Maybe I could get my falcon to deliver it for me.

It’s 11:45 now, so I’m not going to get anything useful done before lunch. I listlessly click on stuff on my screen and try to relax by thinking about my totally rad falcon until it’s time to go. As I get up, I glance at my phone and see the reminder for the tax appointment.

Wait a second.

The appointment has today’s date, but the subject says “2013”. This was just some mistaken data-entry in my calendar from last year! I don’t have an appointment today! I have nowhere to be all afternoon.

For pointless anxiety over this fake chore which never even actually happened, a real morning was ruined. Well, a hypothetical real morning; I have never actually needed to interrupt a work day to walk anywhere to sign tax paperwork. But you get the idea.

To a lesser extent, upcoming events later in the week, month, or even year bother me. But the worst is when I know that I have only 45 minutes to get into a task, and I have another task booked up right against it. All this trying to get organized, all this carving out uninterrupted time on my calendar, all of this trying to manage all of my creative energies and marshal them at specific times for specific tasks, annihilates itself when I start thinking about how I am eventually going to have to stop working on the seemingly endless, sprawling problem set before me.

The horizon I need to see is the infinite time available before me to do all the thinking I need to do to solve whatever problem has been set before me. If I want to write a paragraph of an essay, I need to see enough time to write the whole thing.

Sometimes - maybe even, if I’m lucky, increasingly frequently - I manage to fool myself. I hide my calendar, close my eyes, and imagine an undisturbed millennium in front of my text editor ... during which I may address some nonsense problem with malformed utf-7 in mime headers.

... during which I can complete a long and detailed email about process enhancements in open source.

... during which I can write a lengthy blog post about my productivity-related neuroses.

I imagine that I can see all the way to the distant horizon at the end of time, and that there is nothing between me and it except dedicated, meditative concentration.

That is on a good day. On a bad day, trying to hide from this anxiety manifests itself in peculiar and not particularly healthy ways. For one thing, I avoid sleep. One way I can always extend the current block of time allocated to my current activity is by just staying awake a little longer. I know this is basically the wrong way to go about it. I know that it’s bad for me, and that it is almost immediately counterproductive. I know that ... but it’s almost 1AM and I’m still typing. If I weren’t still typing right now, instead of sleeping, this post would never get finished, because I’ve spent far too many evenings looking at the unfinished, incoherent draft of it and saying to myself, "Sure, I’d love to work on it, but I have a dentist’s appointment in six months and that is going to be super distracting; I might as well not get started".

Much has been written about the deleterious effects of interrupting creative thinking. But what interrupts me isn’t an interruption; what distracts me isn’t even a distraction. The idea of a distraction is distracting; the specter of a future interruption interrupts me.

This is the part of the article where I wow you with my life hack, right? Where I reveal the one weird trick that will make productivity gurus hate you? Where you won’t believe what happens next?

Believe it: the surprise here is that this is not a set-up for some choice productivity wisdom or a sales set-up for my new book. I have no idea how to solve this problem. The best I can do is that thing I said above about closing my eyes, pretending, and concentrating. Honestly, I have no idea even if anyone else suffers from this, or if it’s a unique neurosis. If a reader would be interested in letting me know about their own experiences, I might update this article to share some ideas, but for now it is mostly about sharing my own vulnerability and not about any particular solution.

I can share one lesson, though. The one thing that this peculiar anxiety has taught me is that productivity “rules” are not revealed divine truth. They are ideas, and those ideas have to be evaluated exclusively on the basis of their efficacy, which is to say, on the basis of how much of the stuff you want to get done they actually help you get done.

For now, what I’m trying to do is to un-clench the fearful, spasmodic fist in my mind that grips the need to schedule everything into these small boxes and to allocate only the time that I “need” to get something specific done.

Maybe the only way I am going to achieve anything of significance is with opaque, 8-hour slabs of time with no more specific goal than “write some words, maybe a blog post, maybe some fiction, who knows” and “do something work-related”. As someone constantly struggling to force my own fickle mind to accomplish any small part of my ridiculously ambitious creative agenda, it’s really hard to relax and let go of anything which might help, which might get me a little further a little faster.

Maybe I should be trying to schedule my time into tiny little blocks. Maybe I’m just doing it wrong somehow and I just need to be harder on myself, madder at myself, and I really can get the blood out of this particular stone.

Maybe it doesn’t matter all that much how I schedule my own time because there’s always some upcoming distraction that I can’t control, and I just need to get better at meditating and somehow putting them out of my mind without really changing what goes on my calendar.

Maybe this is just as productive as I get, and I’ll still be fighting this particular fight with myself when I’m 80.

Regardless, I think it’s time to let go of that fear and try something a little different.

by Glyph at September 15, 2014 08:27 AM

Ian Bicking

Professional Transitions (or: the shutting down of Mozilla Labs)

Since my last post about leaving Python, my career has shifted further.

Earlier this year Mozilla shut down its Labs group. It’s a little hard to tell – I guess we didn’t actually shutter anything, and though it was announced internally it is entirely unclear externally. But Mozilla Labs is definitely shut down. Most of those who were still part of Labs at the end moved to the Mozilla Foundation (the non-profit foundation that owns Mozilla Corporation – taxes are complicated). Aaron, my partner in building TogetherJS and Hotdish, was left in a reorganization limbo, and in the process we lost him to Google. With the closing of Labs I was also left in a limbo, but with a baby on the way (Willa Blue Murphy Bicking, born April 23rd 2014) I wasn’t looking forward to any big changes.

On The Closing Of Mozilla Labs

As inopportune as the timing of this was, I understand the reason for closing Mozilla Labs. I don’t believe Labs was effective. And I was not effective in it.

Note that Mozilla Labs is distinct from Mozilla Research – Research is the home of projects like Rust and ASM.js. Mozilla Research is still going strong. To make a broad distinction between the two groups: Research has worked on foundational technologies for the web, especially related to programming languages, while Labs was product-focused. Also Research has been led by Brendan Eich and now David Herman, with what appears to be a fairly clear vision and succession. Labs was led by a number of people with different interests and different visions – some people with an eye to external validation, some looking to spur disruptive (also uncomfortable) changes in Mozilla, some hoping to enable and include external contribution.

When I first started writing this I felt that a leadership misdirection was at the root of Labs’ missteps. But leadership fetish feels a lot like a Great Man theory, an expectation that we are doomed or blessed only by the wisdom and fortitude of our leaders. These shifting priorities from our leadership may have been disruptive to the degree that our cultural priorities and understandings (that is, the understanding held collectively by the group) did not themselves provide an even enough keel.

That said, management is a group’s conduit to the larger organization. Individuals can reach out, but it’s harder, and the information they acquire doesn’t necessarily percolate well through the group. Advocacy by individual contributors is also often misdirected, chasing after support where there is no real potential. For any identified problem you can picture an imaginary perfect leader who will fix it, but imagining heroes isn’t the same thing as planning for a strong organization.

Launching a project from Labs did not seem particularly successful. Firefox Sync came out early, as did Jetpack (now the Addon-SDK). Open Web Apps ultimately subsumed Labs. Tab Candy/Panorama/Tab Groups got into Firefox but that was the end of it. Persona became an independent group and team, but didn’t really succeed. Social API is not gone, but it also hasn’t found a real home in Mozilla. Many things people worked on in Labs now exist, but not because of Labs – often a project only took off when someone else in Mozilla committed to it. Other projects still suffer from how they were birthed in Labs.

“Labs” groups are often criticized for being too separate from the companies they belong to. In Mozilla this was a problem because we had a hard time getting things done – for instance, if the success of a project depended on changes to Firefox, then it was hard to get those prioritized. By working separately we also would often use patterns that weren’t liked by other people. People in Labs would often come from the perspective of web developers, where much of Mozilla is focused on user agents, and this is a much bigger divide than I initially appreciated. But the technical problems were perhaps a symptom: integration with the rest of Mozilla was viewed as a problem, a late-stage effort, not part of the exploration and experiment itself.

Another criticism of “Labs” is that innovation comes from everywhere, and having some kind of Innovation Group is exclusionary and misses the opportunity that the larger company and community represents. At times Labs tried to be inclusionary – collecting ideas from the community, trying to guide some more inclusive design processes, playing around with internal sabbaticals. I fear these efforts mostly led people on; they did not lead to creative engagement with the community.

Still I’m not entirely pessimistic about the idea of a Labs-like group. Maybe I’d want to call it a new product incubation group (using a wide definition of “product”). Having a separate group does present challenges. But established groups tend not to be conservative about their scope; there are always more core priorities to be addressed. Establishing a separate group creates an investment that won’t be redirected. But as I write this down it seems like an inferior way to keep investment balanced; it would be better to protect innovation within existing product groups. I see some groups accomplish that protection and I see many groups that do not, but I don’t yet know what creates that difference.

TogetherJS and Hotdish

While unexpectedly abrupt, it was not a surprise to us that Labs shut down. We certainly had every expectation that TogetherJS as a project would be shut down. It may have only gotten by as long as it did because no one was clear who had responsibility for shutting it down.

And again, we didn’t disagree with the motivations that would have it shut down. TogetherJS didn’t have a strategic tie-in to Mozilla. It neither built on Mozilla’s strengths nor did it strengthen Mozilla.

There’s this “failure is okay” meme among the Innovation Crowd. Certainly we talked about it in Labs as well. Fail fast; try another experiment; repeat. I think this is bullshit. Failure isn’t okay, we don’t achieve things through Brownian motion and a selection function. There are many kinds of returns on an investment, and if you focus on only one kind of return, only one definition of success, then most experiments will appear to fail. If you want to encourage innovation you have to create a more inclusive definition of “success”. Success can mean that you learn something. It can mean getting rid of a biased preconception about a solution. It can mean learning about and understanding a domain or approach. It can mean adapting processes for future improvement. If something looks like an abject failure, I still believe if you look more closely and thoughtfully you can learn from it and come up with another approach, another experiment, or if necessary a meta-experiment. You should be ready to pivot at any time, but thoughtfully.

To innovate we need to learn from every experiment. We need to actively learn during experiments. We need to learn about the projects – but we also need to learn about the environment the innovation is embedded in. Understanding the tensions, fears, needs, and excitement of the larger Mozilla needs to be part of that learning process.

I think the inability to pivot didn’t just keep us from succeeding at projects, but also kept the successes from reaching their full potential. For instance “Apps” was a going concern in Labs when I started, and ultimately would become the group that would contain and then close Labs. But to me it always felt like a project built on willfulness rather than inspiration. It’s only in the context of Firefox OS — where the OS makes demands on apps — that I think Apps are becoming something meaningful, and iterating towards something more than a trivial imitation of native apps. But when you treat success as a binary then you are forced to push through your plan even if it could be improved by rethinking the approach, because rethinking feels like failure.

We knew TogetherJS wasn’t a success for Mozilla (at least the Mozilla Corporation). But we did learn things. Hotdish was our pivot – our attempt to rethink the concepts in a form that was strategically meaningful to Mozilla. Specifically adding collaboration to the User Agent (i.e., Firefox), not to a web page. But the pivot was hopeless at that point, events overtook us. Or maybe not – I’ve been caught up in success through execution for too long, and missing out on opportunities, so I don’t trust those assessments of opportunities. But I think Hotdish as a concept was a bad match for Mozilla at that moment – Mozilla is currently under stress, it’s looking for near-term wins, and Hotdish is a long-term concept.

Still I’m frustrated. I feel like the kind of approach to experimentation that we were developing, through both TogetherJS and Hotdish, was what Labs should have been doing all along, like I’d finally started figuring it out. We were critically engaged with a product-oriented view, something I really learned with TogetherJS, from working with Aaron, and also from Simon and Gregg. We were starting to engage (on our end) with Mozilla’s strengths and specific opportunities (we had not yet started to engage outwardly in this respect). We still needed to engage more aggressively on a technical level with the larger organization. But it really felt like we were developing patterns by which a small team could effectively explore new areas, where with a conservative investment we could really push in new directions.

But now, with respect to these specific projects, I don’t know — I am still excited about their potential, but also feel like I need to establish emotional distance. TogetherJS has been in a bit of a limbo as a result, but Mozilla Foundation is planning to use TogetherJS more extensively in 2015, hopefully spurring another stage of development and use. And a helpful contributor has emerged lately, but I know it’s a hard slog to take on a project in this state.

Some of what I’ve learned

I learned a couple things about myself from my last experiences in Labs:

  1. Being on a good team is great. Good people alone don’t make a good team. I hadn’t been on a good team at Mozilla before, or I hadn’t been a good team member, I’m not sure which. Either way I want to take that on myself, not blame others, as my real goal is to find my own positive path.

  2. Working with people with a variety of skills and perspectives is great, if they are critically and positively engaged. They must be critically engaged because happy shallow input doesn’t have much value, and often covers up high-value input. They must be positively engaged because negative input is cheap and unhelpful. A scattershot of perspectives and motivations isn’t good, people talk across each other, discussions are drawn to problem collecting instead of solution finding. Getting really smart people together doesn’t inevitably lead to positive searching.

  3. I shouldn’t expect success or impact through only my own abilities. I’ve had a lone coder approach for a long long time now. I’m a pretty good lone coder. I’ve had this notion in my mind of the Ultimate Product, some great thing I’m going to come up with and then make in some series of late-night coding frenzies, and it’ll be amazing. It’s a self-acknowledged fantasy, I don’t really believe I’ll do this, but it’s still a real motivation and thwarts my realistic ambitions. This perspective might seem conceited, but in practice it’s an incredible drain on my self-esteem. I would have to withdraw to create software of my own invention just to maintain some modicum of self-respect.

  4. I should stop making assumptions about other people’s intentions, interests, or disinterests. It’s entirely unnecessary. All I have to do is ask.

  5. Success (or effectiveness or impact) happens in a context, it’s not an independent thing. It happens within the team, the company, the community, and the larger world. But I shouldn’t just avoid being a loner: skipping all the most nearby contexts and paying attention only to the zeitgeist also isn’t effective. This was a chronic problem in Labs. I also sometimes wonder if it’s in the nature of The Valley.

Then what…?

So now I’m in a new position, Engineering Manager. I’m working in the Cloud Services group, and my team is focused on providing services for Firefox OS. I have not touched code since I started. In the parlance of management, I am a “people manager”.

This is not a move I expected to make. Like many a developer (especially open source developers) I have always eschewed these formal lines of authority. I might have imagined myself in a tech lead position, but not management. What’s more, the authority I now have was granted to me by Mozilla, not exactly earned — this was a lateral move, not a promotion within a team. And the role I play is not implementor but facilitator. But here I am.

I’ve spent a lot of time exploring how to do things, and now I’m looking forward to spending some time exploring how to get things done. This is itself not an easy bridge to cross when holding on to open source attitudes — we’ve created this incredible set of tools, but very few products. (Even Firefox was in some sense really created by Netscape.) And so I find myself looking more to traditional methods, to the processes and perspectives of the commercial world.

I’m also just trying to learn to be a manager, a craft of its own, and a role I do not take lightly. The pacing of my day is now much different. Large chunks of free time are not my most productive, instead I find momentum through appointments and my to-do list. I’m not even sure “flow” means something to a manager. I’m practicing a discipline that before I had always seen as overhead — I saw the clerical work, the communication work, the responsiveness as distractions that keep me away from my “real work”. That attitude was hubris on my part, but there it was.

How long I’m in this, I don’t know. I have a lot more to learn, I won’t run out of important lessons anytime soon. And I have commitments to follow through on; things I set in motion take time to resolve, and it takes time to even know how well I’m doing. And I don’t dislike the work. I’m surprisingly relaxed, more so than I’ve been for a long time in my professional life (though it’s had its ups and downs). And while I do think fondly to times when I was less responsible, the reality is in my life I am now responsible for many things, the most important of them aren’t part of work, and I can never be carefree in the way I was when I was younger.

I do worry my technical experience will atrophy. I actually feel confident understanding things in theory, not always through direct experience. I don’t exactly need skills to contribute to my team. I think what I really need is intuition. It is my role to infer things about a project, so that I can ask questions, suggest improvements, detect risks, suggest alternatives. But even as I write this down it makes me worry less; if I was to waterfall all my insights upon the organization or team then my knowledge would atrophy, but each suggestion is also a chance to learn so long as I listen.

It may, or may not, be obvious at this point why I haven’t coded since I started. Programming is comfortable, it satisfies something in me — and that thing is not what I need to grow in myself right now. What I’m choosing to do is uncomfortable for me, and I feel a need to withhold a comfort I do not trust.

I am feeling a certain fatigue. I spend more time now worrying about what I don’t know I should worry about. Mozilla is not huge, but it is not small, and I struggle trying to understand where in the organization there is flexibility, tension, what parts are locked down, what parts already want to change. I see the appeal to greenfield organizational development (aka the startup).

A growing set of principles

In my time so far I’ve thought about what kind of manager I try to be. Maybe sometimes I live up to these principles, but I’m also comfortable thinking of these as aspirational.

  1. My primary concern is to help the members of my team do their most impactful work. This is not the same as their best work. Over time I am seeing that these are greatly different, and I think the open source world is almost built on “best” over “impactful”, so this prioritization is important and somewhat contrary.

  2. I always want there to be space. Space to discover, space to learn, space to make mistakes, and space to learn from those mistakes (the space to learn is so often forgotten). Also space in conversations, and space in meetings. I enjoy long pauses in conversations or meetings, I think they can be a good sign.

  3. It’s hard to think hard, especially on demand. It’s hard for me too. Intuition or convention is much easier. But at some points it is essential that we think hard. I’ve yet to figure out how to ensure that happens at those critical times.

  4. I may be called on to judge the members of my team, but that is not my job. Though the article is about teaching, I found myself quite affected by The Lesson of Grace in Teaching. I fear when I have the opportunity to offer my team grace that I will forget this, miss the opportunity amid my own stress. But that is why I must remind myself like this.

  5. It’s always a good time to do the right thing. But we should do the right thing for right now, not for the past or for an imagined present. The Sunk Cost Fallacy has two sides.

  6. I do not shield my team from the confusions or stresses of the larger organization. When I have asked people what their favorite experience with a manager was, I have often heard that they liked a manager that shields them from the larger stresses of the organization, a manager who gave them a singular clear vision. That management style creates a more comfortable environment, and one that feels safer. But as I said, I prioritize impact, and impact exists in the organizational context. I hope that my team can creatively respond and react to the organizational context. I don’t want to pass down my own stress, but insulating my team through a false confidence isn’t fair to them, even if it seems nice at the moment.

  7. Small decisions are important. And each person is making many small, important decisions throughout their work. That’s why I don’t want to insulate: I believe that those decisions, well made, have important impact. And I can’t predict, on behalf of my team, when those decisions will come up.

  8. I always want my team to understand why. Why are we doing this? People can execute instructions without understanding why. I can simply assign work. They can close bugs (then the bug tracker assigns work). But there’s an opportunity lost if you don’t understand motivations.

There’s a lot missing from this list that I have yet to learn. And probably some items that represent an incorrect perspective on my role. But I will keep making provisional theories, these form my next set of experiments.

I hope I can muster the will to write more about what I learn along the way.

by Ian Bicking at September 15, 2014 05:00 AM

September 08, 2014

Greg Linden

The problem with personalized education

Personalized education has had some spectacular failures lately, in large part due to how tone-deaf the backers have been to the needs of teachers, parents, and students.

The right way to do personalization is to prove you're useful first. Personalization is just a tool. If a new tool doesn't work better than the old tool, it's useless. There's no reason to use personalized education unless it works better than unpersonalized education. A tool needs to be useful.

Teachers are already overworked and, after having been burned too many times on supposedly exciting new technologies that fail to help, correctly are cynical about tech startups coming in and demanding something of them. If some tech startup isn't helping a teacher get something done they need to get done, it's a bad tool and it's useless.

Parents are leery of companies who say they only want to help, and of what corporations are doing with the data they have on their children – correctly so, given all the marketing abuses that have happened in the past.

Kids don't want more boring busywork to do -- they get enough of that already -- and don't see why anything this company is talking about helps them or is useful to them.

If a company wants to succeed in personalized education, it should:
  1. Be useful, noticeably raise test scores
  2. Not require additional busy work
  3. Be optional
  4. Have no marketing whatsoever, only use data to help
I think there are plenty of examples of how this might work. I would like to see a company offer a free Duolingo-like pre-algebra and algebra app that jumps students ahead rapidly as they answer questions correctly and spends more time on similar problems after a question is wrong. The app would be completely optional for students to use, but, when students use it, their test scores increase.
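That jump-ahead-on-success, drill-down-on-failure loop is simple to sketch. Here is a minimal, entirely hypothetical version of such a question selector (the field names and the streak rule are mine, not any real product's algorithm):

```python
import random

def next_question(questions, history):
    """Pick the next question adaptively.

    `questions`: list of dicts with 'topic' and 'difficulty' keys.
    `history`: list of (question, answered_correctly) tuples, oldest first.
    """
    if history and not history[-1][1]:
        # Last answer was wrong: spend more time on similar problems,
        # i.e. stay on the same topic.
        topic = history[-1][0]["topic"]
        pool = [q for q in questions if q["topic"] == topic]
    else:
        # Count the current streak of correct answers.
        streak = 0
        for _, correct in reversed(history):
            if not correct:
                break
            streak += 1
        # Jump ahead faster the longer the streak.
        hardest = max(q["difficulty"] for q in questions)
        target = min(streak, hardest)
        pool = [q for q in questions if q["difficulty"] >= target] or questions
    return random.choice(pool)
```

A real system would of course estimate mastery per concept rather than use a raw streak, but the control flow is the whole idea.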

I would like to see a company use the existing standardized tests required by several states, analyze the incorrect answers to identify concepts a student is not understanding, and then print short worksheets targeting only those missed concepts for teachers to hand out to each student. The worksheets would be free and arrive in teachers' mailboxes. If the teacher doesn't want to hand them out, that's not a problem, but test scores go up for the classrooms where the teachers do hand them out. So, even if most teachers don't hand them out at first and most students throw them away at first, over time, more and more teachers will start handing them out and more and more students will do them, as it only helps those who do.

In both of these examples, a startup could set up from the beginning to run large scale experiments, showing different problems to different students, and learning what raises test scores, what designs and lesson lengths cause students to stop, what concepts are important and which matter less, what can be taught easily through this and what cannot, what people enjoy, and what works.

When a company comes in and says, "Give us your data, teachers, parents, and kids, and do all this work. Maybe we'll boost your test scores for you later," they're being arrogant and tone-deaf. Everyone responds, "I don't believe you. How about you prove you're useful first? I'm busy. Do something for me or go away." And they're right to do so.

There likely is a way to do personalized education that everyone would embrace. But that way probably requires proving you're useful first. After all, personalization is just a tool.

by Greg Linden at September 08, 2014 02:20 PM

September 04, 2014

Decyphering Glyph

Techniques for Actually Distributed Development with Git

The Setup

I have a lot of computers. Sometimes I'll start a project on one computer and get interrupted, then later find myself wanting to work on that same project, right where I left off, on another computer. Sometimes I'm not ready to publish my work, and I might still want to rebase it a few times (because if you're not arbitrarily rewriting history all the time you're not really using Git) so I don't want to push to @{upstream} (which is how you say "upstream" in Git).

I would like to be able to use Git to synchronize this work in progress between multiple computers. I would like to be able to easily automate this synchronization so that I don’t have to remember any special steps to sync up; one of the most focused times that I have available to get coding and writing work done is when I’m disconnected from the Internet, on a cross-country train or a plane trip, or sitting near a beach, for 5-10 hours. It is very frustrating to realize only once I’m settled in and unable to fetch the code, that I don’t actually have it on my laptop because I was last doing something on my desktop. I would particularly like to be able to use that offline time to resolve tricky merge conflicts; if there are two versions of a feature, I want to have them both available locally while I'm disconnected.

Completely Fake, Made-Up History That Is A Lie

As everyone knows, Git is a centralized version control system created by the popular website GitHub as a command-line client for its "forking" HTML API. Alternate central Git authorities have been created by other startups following in the wave following GitHub's success, such as BitBucket and GitLab.

It may surprise many younger developers to know that when GitHub first created Git, it was originally intended to be a distributed version control system, where it was possible to share code with no particular central authority!

Although the feature has been carefully hidden from the casual user, with a bit of trickery you can re-enable it!

Technique 0: Understanding What's Going On

It's a bit confusing to have to actually set up multiple computers to test these things, so one useful thing to understand is that, to Git, the place you can fetch revisions from and push revisions to is a repository. Normally these are identified by URLs which identify hosts on the Internet, but you can also just indicate a path name on your computer. So for example, we can simulate a "network" of three computers with three clones of a repository like this:

$ mkdir tmp
$ cd tmp/
$ mkdir a b c
$ for repo in a b c; do (cd $repo; git init); done
Initialized empty Git repository in .../tmp/a/.git/
Initialized empty Git repository in .../tmp/b/.git/
Initialized empty Git repository in .../tmp/c/.git/

This creates three separate repositories. But since they're not clones of each other, none of them have any remotes, and none of them can push or pull from each other. So how do we teach them about each other?

$ cd a
$ git remote add b ../b
$ git remote add c ../c
$ cd ../b
$ git remote add a ../a
$ git remote add c ../c
$ cd ../c
$ git remote add a ../a
$ git remote add b ../b

Now, you can go into a and type git fetch --all and it will fetch from b and c, and similarly for git fetch --all in b and c.

To turn this into a practical multiple-machine scenario, rather than specifying a path like ../b, you would specify an SSH URL as your remote URL, and turn on SSH on each of your machines ("Remote Login" in the Sharing preference pane, if you aren't familiar with doing that on a Mac).

So, for example, if you have a home desktop tweedledee and a work laptop tweedledum, you can do something like this:

tweedledee:~ neo$ mkdir foo; cd foo
tweedledee:foo neo$ git init .
# ...
tweedledum:~ m_anderson$ mkdir bar; cd bar
tweedledum:bar m_anderson$ git init .
tweedledum:bar m_anderson$ git remote add tweedledee neo@tweedledee.local:foo
# ...
tweedledee:foo neo$ git remote add tweedledum m_anderson@tweedledum.local:bar

I don't know the names of the hosts on your network. So, in order to make it possible for you to follow along exactly, I'll use the repositories that I set up above, with path-based remotes, in the following examples.

Technique 1 (Simple): Always Only Fetch, Then Merge

Git repositories are pretty boring without any commits, so let's create a commit:

$ cd ../a
$ echo 'some data' > data.txt
$ git add data.txt
$ git commit -m "data"
[master (root-commit) 8dc3db4] data
 1 file changed, 1 insertion(+)
 create mode 100644 data.txt

Now on our "computers" b and c, we can easily retrieve this commit:

$ cd ../b/
$ git fetch --all
Fetching a
remote: Counting objects: 3, done.
remote: Total 3 (delta 0), reused 0 (delta 0)
Unpacking objects: 100% (3/3), done.
From ../a
 * [new branch]      master     -> a/master
Fetching c
$ git merge a/master
$ ls
$ cd ../c
$ ls
$ git fetch --all
Fetching a
remote: Counting objects: 3, done.
remote: Total 3 (delta 0), reused 0 (delta 0)
Unpacking objects: 100% (3/3), done.
From ../a
 * [new branch]      master     -> a/master
Fetching b
From ../b
 * [new branch]      master     -> b/master
$ git merge b/master
$ ls

If we make a change on b, we can easily pull it into a as well.

$ cd ../b
$ echo 'more data' > data.txt 
$ git commit data.txt -m "more data"
[master f3d4165] more data
 1 file changed, 1 insertion(+), 1 deletion(-)
$ cd ../a
$ git fetch --all
Fetching b
remote: Counting objects: 5, done.
remote: Total 3 (delta 0), reused 0 (delta 0)
Unpacking objects: 100% (3/3), done.
From ../b
 * [new branch]      master     -> b/master
Fetching c
From ../c
 * [new branch]      master     -> c/master
$ git merge b/master
Updating 8dc3db4..f3d4165
 data.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

This technique is quite workable, except for one minor problem.

The Minor Problem

Let's say you're sitting on your laptop and your battery is about to die. You want to push some changes from your laptop to your desktop. Your SSH key, however, is plugged in to your laptop. So you just figure you'll push from your laptop to your desktop. Unfortunately, if you try, you'll see something like this:

$ cd ../a/
$ echo 'even more data' >> data.txt 
$ git commit data.txt -m "even more data"
[master a9f3d89] even more data
 1 file changed, 1 insertion(+)
$ git push b master
Counting objects: 7, done.
Writing objects: 100% (3/3), 260 bytes | 0 bytes/s, done.
Total 3 (delta 0), reused 0 (delta 0)
remote: error: refusing to update checked out branch: refs/heads/master
remote: error: By default, updating the current branch in a non-bare repository
remote: error: is denied, because it will make the index and work tree inconsistent
remote: error: with what you pushed, and will require 'git reset --hard' to match
remote: error: the work tree to HEAD.
remote: error: 
remote: error: You can set 'receive.denyCurrentBranch' configuration variable to
remote: error: 'ignore' or 'warn' in the remote repository to allow pushing into
remote: error: its current branch; however, this is not recommended unless you
remote: error: arranged to update its work tree to match what you pushed in some
remote: error: other way.
remote: error: 
remote: error: To squelch this message and still keep the default behaviour, set
remote: error: 'receive.denyCurrentBranch' configuration variable to 'refuse'.
To ../b
 ! [remote rejected] master -> master (branch is currently checked out)
error: failed to push some refs to '../b'

While you're reading this, you fail your Will save and become too bored to look at a computer any more.

Too late, your battery is dead! Hopefully you didn't lose any work.

In other words: sometimes it's nice to be able to push changes as well.

Technique 1.1: The Manual Workaround

The problem that you're facing here is that b has its master branch checked out, and is therefore rejecting changes to that branch. Your commits have actually all been "uploaded" to b, and are present in that repository, but there is no branch pointing to them. Doing either of those configuration things that Git warns you about in order to force it to allow it is a bad idea, though; if your working tree and your index and your commits don't agree with each other, you're just asking for trouble. Git is confusing enough as it is.

In order to work around this, you can just push your changes in master on a to a different branch on b, and then merge it later, like so:

$ git push b master:master_from_a
Counting objects: 7, done.
Writing objects: 100% (3/3), 260 bytes | 0 bytes/s, done.
Total 3 (delta 0), reused 0 (delta 0)
To ../b
 * [new branch]      master -> master_from_a
$ cd ../b
$ git merge master_from_a 
Updating f3d4165..a9f3d89
 data.txt | 1 +
 1 file changed, 1 insertion(+)

This works just fine; you just always have to remember which branches you want to push, where to push them, and where you're pushing from.

Technique 2: Push To Reverse Pull To Remote Remotes

The astute reader will have noticed at this point that git already has a way of tracking "other places that changes came from", they're called remotes! And in fact b already has a remote called a pointing at ../a. Once you're sitting in front of your b computer again, wouldn't you rather just have those changes already in the a remote, instead of in some branch you have to specifically look for?

What if you could just push your branches from a into that remote? Well, friend, I'm here today to tell you you can.

First, head back over to a...

$ cd ../a

And now, all you need is this entirely straightforward and obvious command:

$ git config remote.b.push '+refs/heads/*:refs/remotes/a/*'

and now, when you git push b from a, you will push those branches into b's "a" remote, as if you had done git fetch a while in b.

$ git push b
Total 0 (delta 0), reused 0 (delta 0)
To ../b
   8dc3db4..a9f3d89  master -> a/master

So, if we make more changes:

$ echo 'YET MORE data' >> data.txt
$ git commit data.txt -m "You get the idea."
[master c641a41] You get the idea.
 1 file changed, 1 insertion(+)

we can push them to b...

$ git push b
Counting objects: 5, done.
Writing objects: 100% (3/3), 272 bytes | 0 bytes/s, done.
Total 3 (delta 0), reused 0 (delta 0)
To ../b
   a9f3d89..c641a41  master -> a/master

and when we return to b...

$ cd ../b/

there's nothing to fetch, it's all been pre-fetched already, so

$ git fetch a

produces no output.

But there is some stuff to merge, so if we took b on a plane with us:

$ git merge a/master
Updating a9f3d89..c641a41
 data.txt | 1 +
 1 file changed, 1 insertion(+)

we can merge those changes in whenever we please!

More importantly, unlike the manual-syncing solution, this allows us to push multiple branches on a to b without worrying about conflicts, since the a remote on b will always only be updated to reflect the present state of a and should therefore never have conflicts (and if it does, it's because you rewrote history and you should be able to force push with no particular repercussions).


Here's a shell function which takes 2 parameters, "here" and "there". "here" is the name of the current repository - meaning, the name of the remote in the other repository that refers to this one - and "there" is the name of the remote which refers to another repository.

function remoteremote () {
    local here="$1"; shift;
    local there="$1"; shift;

    git config "remote.$there.push" "+refs/heads/*:refs/remotes/$here/*";
}

In the above example, we could have used this shell function like so:

$ cd ../a
$ remoteremote a b

I now use this all the time when I check out a repository on multiple machines for the first time; I can then always easily push my code to whatever machine I’m going to be using next.
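Going one step further, the sync itself can be automated. The helper below is my own sketch, not from the setup above; it assumes that each remote you push to has had a remoteremote-style push refspec configured (a remote without one just gets a normal push of the current branch):

```shell
# Hypothetical companion to remoteremote: fetch from every remote,
# then push to every remote, so whichever machine you sit down at
# next already has everything pre-fetched.
function syncall () {
    git fetch --all;
    local remote;
    for remote in $(git remote); do
        git push "$remote";
    done
}
```

Run `syncall` before closing the laptop and the "Minor Problem" above never comes up, since the pushes land in remote-tracking refs rather than checked-out branches.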

I really hope at least one person out there has equally bizarre usage patterns of version control systems and finds this post useful. Let me know!


Thanks very much to Tom Prince for the information that lead to this post being worth sharing, and Jenn Schiffer for teaching me that it is OK to write jokes sometimes, except about javascript which is very serious and should not be made fun of ever.

by Glyph at September 04, 2014 05:44 AM

September 03, 2014

Greg Linden

More quick links

More of what caught my attention lately:
  • The overwhelming majority of smartphone users set up their phone once, then barely ever download a new app again ([1] [2])

  • Cool and successful use of speculative execution in cloud computing for games, trading off extra CPU and bandwidth for the ability to hide network latency ([1])

  • Infrared vision on your phone ([1] [2])

  • How easy is it to get people to memorize hard-to-crack random 56-bit passwords, equivalent to about 12 random letters or 6 words? ([1] [2])

  • Desalination needs warm water, data centers need to be cooled, why not put them together? Clever idea. ([1])

  • It's easy to overhype this, but it's still pretty cool, transmitting data (0 and 1 bits) directly brain-to-brain without implants (using magnetic stimulation of the brain and EEG reading of the brain, both from the surface of the scalp) with relatively low error rates (5-15%). Data rates are extremely low at 2-3 bits/minute, but it's still interesting that it's possible at all. ([1])

  • Xiaomi's remarkable iPhone clone ([1])

  • Has Amazon sold less than 35k Fire phones? ([1] [2])

  • Facebook publishes a paper which details how its ad targeting works and suggests they will be doing more personalization in the future ([1] [2])

  • "Having a multiyear project with no checks along the way and the promise of one big outcome is not a highly successful approach, in or outside government" ([1] [2])

  • More evidence patent trolls cause real harm. Trolled firms "dramatically reduce R&D spending". ([1])

  • "Using nothing more than a laptop ... [they could] alter the normal timing pattern of the [traffic] lights, turning all the lights along a given route green, for instance, or freezing an intersection with all reds" ([1])

  • Interesting data visualization showing how CD took over in music sales, then got replaced by downloads, all over the last two decades or so ([1])

  • Neat charts on how the strike zone expands on 3 ball counts and contracts on 2 strike counts ([1])

  • Cute SMBC comic on "What is the fastest animal?" ([1])

  • Great SMBC comic on job interviews ([1])

by Greg Linden at September 03, 2014 03:59 PM

August 31, 2014

Giles Bowkett

iOS 6 CSS Turns Futura Into Futura Condensed Extra Bold

If you're seeing this happen, no, you're not going crazy; I've seen it too, on both Chrome and Safari. (I haven't tested if it happens with other operating systems or browsers, although I may later.)

The threshold for this effect is font-weight: 500. At that weight, font-family: "Futura" will indeed produce Futura; at font-weight: 501 and above, font-family: "Futura" will actually produce Futura Condensed Extra Bold.

By the way, at any weight, font-family: "Futura Condensed Extra Bold" will produce Times New Roman. I'm not sure why; the charitable explanation is ignorance.

by Giles Bowkett at August 31, 2014 11:18 AM

August 27, 2014

Alarming Development

The Future Programming Manifesto

It’s time to reformulate the principles guiding my work.

[Revised definition of complexity in response to misunderstandings]

Inessential complexity is the root of all evil

Most of the problems of software arise because it is too complicated for humans to handle. We believe that much of this complexity is unnecessary and indeed self-inflicted. We seek to radically simplify software and programming.

Complexity is the total learning curve

We should measure complexity as the cumulative cognitive effort to learn a technology from novice all the way to expert. One simple surrogate measure is the size of the documentation. This approach conflicts with the common tendency to consider only the efficiency of experts. Expert efficiency is hopelessly confounded by training and selection biases, and often justifies making it harder to become an expert. We are skeptical of “expressive power” and “terseness”, which are often code words for making things more mathematical and abstract. Abstraction is a two-edged sword.

Our institutions, culture, and psychology all foster complexity

  • Maintaining compatibility increases complexity.
  • Technical debt increases complexity.
  • Most R&D is incremental: it adds features and tools and layers. Simplification requires that we throw things away.
  • Computer Science rejects simplification as a research result because it is subjective.
  • The Curse of Knowledge: experts are blind to the complexity they have laboriously mastered.
  • Rewarding programmers for their ability to handle complexity selects for those who love it.
  • Our gold-rush economy encourages greed and haste.

To make progress we must rebel against these vested interests and bad habits. There will be strong resistance.

Think outside the box

Much complexity arises from how we have partitioned software into boxes: OS, PL, DB, UI, networking; and likewise how we have partitioned software development: edit, version, build, test, deploy. We should go back to the beginning and rethink everything in order to unify and simplify. To do this we must unlearn to program, a very hard thing to do.

Programming for the people

Revolutions start in the slums. Most new software platforms were initially dismissed by the experts as toys. We should work for end-users disenfranchised by lack of programming expertise. We should concentrate on their modest but ubiquitous needs rather than the high-end specialized problems addressed by most R&D. We should take inspiration from end-user tools like spreadsheets and HyperCard. We should avoid the trap of designing for ourselves. We believe that in the long run expert programmers also stand to greatly benefit from radical simplification, but to get there we must start small.

Simplicity first; performance last

Performance is often the first excuse for rejecting new ideas. We even do it to ourselves. We must break our own habit of designing for performance: it is seductively objective and quantifiable whereas the most important design issues are messily subjective and qualitative. After all, performance optimization is one thing we have mastered. Build compelling simplicity and performance will come.

Disciplined design evaluation

Computer Science has decided that, being a Science, it must rigorously evaluate results with empirical experiments or mathematical proofs. We are not doing Science. We are doing Design: using experience and judgement to make complex tradeoffs in order to satisfy qualitative human needs. Yet we still need a disciplined way to evaluate progress. Perhaps we can learn from the methodologies of other fields like Architecture and Industrial Design. This is a meta-problem we must address.

by Jonathan Edwards at August 27, 2014 06:19 PM

Programming with Managed Time

Final version of the paper is up, and an essay with embedded videos is here. Sean graciously invited me to coauthor but the ideas are really his – I just helped spin them.

We think there is great promise in abstracting away from the computer model of time. There is a large design space that is still largely unexplored. I will be presenting my own new approach for the first time in public at the FPW workshop at Strange Loop. We are hoping to excite other researchers to take up this challenge and develop their own approaches. Come talk with us at SPLASH or drop us a line.

by Jonathan Edwards at August 27, 2014 02:30 PM

August 25, 2014

Blue Sky on Mars

The Video of the Talk on Talks

While building my talk about talks, I also made a video that talks about building that talk about talks.

It's half an hour long — about as long as the talk itself — and gives you a different viewpoint of how a talk is made, from planning it all out, designing the slides, and delivering it to an audience. I'll even tell you all about the embarrassing things that happened during the talk itself.

August 25, 2014 12:00 AM

August 05, 2014

Bret Victor

Seeing Spaces poster-comic

"Seeing Spaces", as a 5-foot wall-poster comic.

by Bret Victor at August 05, 2014 05:26 PM

Greg Linden

Quick links

What caught my attention lately:
  • Great idea for walking directions: "At times, we do not [want] the fastest route ... When walking, we generally prefer tiny streets with trees over large avenues with cars ... [We] suggest routes that are not only short but also emotionally pleasant." ([1] [2] [3])

  • Cool idea for a drone that autonomously flies a small distance above and behind you while filming in HD ([1] [2])

  • "OkCupid doesn’t really know what it’s doing. Neither does any other website. It’s not like people have been building these things for very long, or you can go look up a blueprint or something. Most ideas are bad. Even good ideas could be better. Experiments are how you sort all this out." ([1] [2])

  • "Amazon’s cloud revenue now runs almost on par with VMware (VMW), which posted revenue of $5.2 billion last year" ([1])

  • Walmart is getting more aggressive about competing with Amazon on personalization and recommendations ([1])

  • It's important to realize that Amazon could have been a small bookstore on the Web ([1])

  • A lot of us thought the Amazon logo was phallic when it was introduced (worse, it was animated and actually grew from left-to-right). Remarkably, it's lived on for 14 years now. ([1])

  • A big problem with layoffs is not only do you lose some of the people you intended to lay off, but also some of your best employees will pick that time to leave. People with good options won't wait around to experience the chaos and fear; they'll just leave. ([1])

  • "A brand-name USB stick [claims to be] a computer keyboard [device] ... [and then] opens a command window on an attached computer and enters commands that cause it to download and install malicious software." ([1])

  • Financial services and poor computer security: "Our assumption was that, generally speaking, the financial sector had its act together much more" ([1] [2])

  • "NSA employees [were] passing around nude photos that were intercepted in the course of their daily work" ([1] [2])

  • Google Cloud googler says, "It should always be cheaper to run in the cloud no matter what your workload" but that the pricing isn't there yet ([1])

  • Details on Google's remarkably large and fast data warehouse ([1] [2])

  • Cool augmented reality game intended to be played as a passenger in a moving car that creates the terrain and enemies you see in the game based on the stores and buildings around you in the real world ([1])

  • "Astronomers of the 2020s will be swimming in petabytes of data streaming from space and the ground ... [such as] a 3,200-megapixel camera, which will produce an image of the entire sky every few days and over 10 years will produce a movie of the universe, swamping astronomers with data that will enable them to spot everything that moves or blinks in the heavens, including asteroids and supernova explosions." ([1])

  • Data are or data is: "'datum' isn't a word we ever use. So it makes no sense to use the plural when the singular doesn't exist." ([1])

  • The "If Google was a guy" series from CollegeHumor is hilarious (but probably NSFW) ([1] [2] [3])

  • Funny Dilbert comics on a Turing test for management ([1] [2])

  • Cathartic Xkcd comic on defending your thesis ([1])

by Greg Linden at August 05, 2014 09:29 AM

July 28, 2014

Blue Sky on Mars

The Easily Amused's Guide to Searching GitHub Issues

The GitHub UI is great if you want to see which issues are assigned to you, or for planning out your next milestone iteration, or if you're optimizing :zzz:zzzzzzzzzz god i'm already bored talking about all this work stuff. Let's find out how many open source projects have a label named ¯\_(ツ)_/¯:

Search results

Okay, there's only one. But there are two with a label of ಠ_ಠ. This opens up a world of opportunity for issue triage here, people.

Some ground rules

I'm going to run through a bunch of advanced search terms that GitHub lets you filter on. Two places to use these: the global search page, and the repository search page (here's Bootstrap's search page, for example).

Take special note of any filter syntax here that might help you in your day-to-day job, and then forget all about them and go eat a donut or something.


Make no mistake: programmers make mistakes. For example, use in:title to specifically search the title field and see all of the reverts of reverted reverts:

is:pr "revert revert revert" in:title

Still cedes the victor chair to those who have reverted reverts of reverted reverts, though:

is:pr "revert revert revert revert" in:title

Merged Pull Requests

Specifying is:pr and is:merged will filter by pull requests that have been merged.

Here's a few thousand pulls that blaze a trail:

is:merged is:pr "don't merge this"

Luckily there is one among us who realizes that programmers don't read anything and need a little more explicit encouragement.

is:pr "for the love of god don't merge this"

That pull was merged, too, btw.

Date spans

According to Google Trends, brogrammers really hit their stride in July 2011:

We should dig deeper, though, and figure out who were the bro trailblazers before brogrammers were a thing.

created:, updated:, merged:, and closed: all accept datestamps, either before or after scopes, with <YYYY-MM-DD and >YYYY-MM-DD, or ranges, like YYYY-MM-DD..YYYY-MM-DD. So, assuming the birth of the brogrammer term hit around July 2011, we can scientifically verify who bro'd it up before it was (un)cool. Basically bro hipsters. Or brosters.

created:<2011-07-01 bro


Drop a language: filter in your query and you can get instant ammunition for language flamewars.

For example, there's a half-dozen issues in Go repositories talking about metaprogramming:

language:go metaprogramming

Ruby developers, on the other hand, CAN'T GET ENOUGH METAPROGRAMMING in their 400 issues, which are all probably dynamically referencing each other, although it'd take a few hours to figure out exactly how they're doing that before we give up and just rewrite the goddamn thing in six small methods instead god why do I have to do this every single time oh right here's the syntax:

language:ruby metaprogramming

Also here are a bunch of eager beavers:

language:objective-c swift rewrite


Here's a list of folk who probably commented on a heavily-trafficked thread, likely posted in a funny meme photo, and then got real mad that they got millions of notifications from people doing the same thing afterwards:

comments:>50 unsubscribe thread

You can also narrow searches just to the comments with in:comments. Here's a bunch of open pulls that got the shipit squirrel — :shipit: — dropped in them:

in:comments is:open is:pr shipit


The mentions: filter lets you search through all issues and pull requests that mention a specific user.

Here, we can see the number of people complimenting Linus on his inclusive demeanor:

mentions:torvalds sweet demeanor

More reading

Would you like to know more? The full list of search filters are listed on the help site.

July 28, 2014 12:00 AM

July 21, 2014

Blue Sky on Mars

Keeping a Journal

My memory is horseshit.

My friend has a staggeringly impressive memory, effortlessly recounting in great detail the first conversation we had together. I pat myself on the back if I'm able to remember which country it happened in.

But one of the best things I did last year was start writing a personal journal again.

Journals are the Google of the mind

Remember when you had to remember things? Well, probably not. But if you did remember, you'd recall not being able to simply Google facts from the dinner table.

Keeping a personal journal is like that. It's how I realize that my memory isn't completely faulty, it's just slow to make connections from time to time. It's startling how many entirely-forgotten events will vividly come back to me if I just have a starting point. Reading an old entry will transport me pretty close to that initial frame of mind.

What to write about

I started a journal in high school, but mostly stopped in college. It kills me that I didn't write during the first few years in the working world. It almost feels like I didn't even live those years.

I mean, what I wrote in high school was not Pulitzer Prize quality material:

June 16, 2004:
The movie "The Day After Tomorrow" somewhat frightens me. What would I do if there were an apocalyptic event? I feel like I should layer up. Maybe wear heavier coats.

Most of my entries were banal commentary on girls, depression, school and music, and were oddly passionate about things like how to correctly serve popcorn while volunteering at a local community event. The fierce passion about these events has greatly tapered off now — save for my continued interest in popcorn — but reading them today is interesting. It brings me back.

It's not just about writing "important" entries, either. Reading how I coped with my friend dying, the first few dates I went on, living through the Space Shuttle Columbia explosion... those are really fascinating to me to read now. The entries where I write in detail about an utterly boring, normal day are also fascinating. I think in some sense you're as defined by your mindless thoughts on a three hour car trip as much as you are by the trip you take halfway across the world.

How to journal

Journaling is different than writing for the public. Favor quantity over quality. Pick whatever gets you writing as much as possible. If that's a notebook every night by your bed, do that. If it's an app, use that.

For writing, I take a combination of approaches, depending on context. Most of my day-to-day goes into Day One. It's beautiful, and works on Mac, iPhone, and iPad. For other contexts, I've found a plaintext file in a directory works great.


Writing doesn't tickle your fancy? Take photos. I geotag every photo I take, and I try to take photos for me: little visual reminders of situations I've been in. Not everything needs a special filter so you get Maximum Instagram Likes™. Unremarkable photos of a brick wall might later remind me of the interesting conversation I had while leaning on it, for example.

Write for you

Like most things, if you do something regularly you're eventually going to become better at it. Writing's no different. What's more, the act of writing is kind of like defragging your mind: if I had a rough day dealing with someone, I've found that sitting down and writing about it helps organize my thoughts and emotions about what happened. I end up understanding the situation better and feel better prepared for dealing with it in the future.

Just remember that, above all, this is about you. You're the one living your life. If you don't keep track of it, what's the point?

This guy Socrates said: "the life which is unexamined is not worth living". I don't remember what he means by that, but I'll think about it later.

July 21, 2014 12:00 AM

July 15, 2014

Dan Bricklin

Essay about issues and techniques related to disconnected mobile apps

We're spending a lot of time at Alpha Software working on making it easier to create mobile business apps that support sometimes- or frequently-disconnected operation. As part of that work, I've learned a lot about how this issue is an impediment to widespread mobile app deployment in business and also about many of the areas that must be addressed in order to do it well. I've just posted an essay that covers a lot of what I've learned.

You can read "Dealing with Disconnected Operation in a Mobile Business Application: Issues and Techniques for Supporting Offline Usage" in the Writings section of my web site.

[Image in the original post on Dan Bricklin's Log with caption: The Offline Problem: Some common images of low connectivity]

July 15, 2014 07:22 PM

July 14, 2014

Avi Bryant

July 13, 2014



I just published a small library called ReactScriptLoader to make it easier to load external scripts with React. Feedback is appreciated!

by Yariv at July 13, 2014 06:38 PM

July 10, 2014

Greg Linden

More quick links

More of what caught my attention lately:
  • Crazy cool and the first time I've seen ultrasound used for device-to-device communication outside of research: "Chromecast will be able to pair without Wi-Fi, or even Bluetooth, via an unusual method: ultrasonic tones." ([1])

  • A 3D printer that can print in "any weldable material" including titanium, aluminum, and stainless steel ([1])

  • "You teach Baxter [an inexpensive industrial robot] how to do something by grabbing an arm and showing it what you want, sort of like how you would teach a child to paint" ([1])

  • When trying to use the wisdom of the crowds, you're better off using only the best part of the crowd. ([1])

  • "Americans now appear to trust internet news about as much as newspapers and television news ... not because confidence in internet news is rising, but because confidence in TV news and newspapers has plummeted over the years." ([1])

  • "Microsoft is basically 'done' with Windows 8.x. Regardless of how usable or functional it is or isn't, it has become Microsoft's Vista 2.0 -- something from which Microsoft needs to distance itself." ([1])

  • Google Flights now lets you see everywhere you can fly out of a city (including limiting to non-stops only) and how much it would cost ([1] [2] [3] [4])

  • "Entering the fulfillment center in Phoenix feels like venturing into a realm where the machines, not the humans, are in charge ... The place radiates a non-human intelligence, an overarching brain dictating the most minute movements of everyone within its reach." ([1])

  • Google's location history feature is both fascinating and frightening. If you own an Android device, go to location history, set it to 30 days, and see the detail on where you have been. While it's true that many have this kind of data, it may surprise you to see it all at once.

  • "Vodafone, one of the world's largest mobile phone groups, has revealed the existence of secret wires that allow government agencies to listen to all conversations on its networks, saying they are widely used in some of the 29 countries in which it operates in Europe and beyond." ([1])

  • Many "users actually do not attach any significant economic value to the security of their systems" ([1] [2])

  • "Ensuring that our patent system 'promotes the progress of science,' rather than impedes it, consistent with the constitutional mandate underlying our intellectual property system" ([1])

  • Smartphones may have hit the limit on how much improvements to screen resolution matter, meaning they will have to compete on other features (like sensors or voice recognition) ([1])

  • "Project Tango can see the world around it in 3D. This would allow developers to make augmented-reality apps that line up perfectly with the real world or make an app that can 3D scan an object or environment." ([1])

  • The selling point of smartwatches is paying $200 to not have to pull your phone out of your pocket, and that might be a tough sell. ([1])

  • "As programmers will tell you, the building part is often not the hardest part: It's figuring out what to build. 'Unless you can think about the ways computers can solve problems, you can't even know how to ask the questions that need to be answered'" ([1])

  • "[No] lectures, discussion sections, midterms ... a pre-test for each subject area ... given a mentor with a graduate degree in the field ... [and] textbooks, tutorials, and other resources. Eventually, they're assessed on how well they understand the concepts." ([1])

  • "A naked mole rat has never once been observed to develop cancer" ([1])

  • Hilarious Colbert Report on the Hachette mess, particularly loved the bit on "Customers who enjoyed this also bought this" at 3:00 in the video ([1])

  • Humor from The Onion: "We want $100 from you, so we’re just going to take it. As a cable subscriber, you really have no other option here" ([1])

  • Humor from the Borowitz Report: "It never would have occurred to me that an enormous corporation with the ability to track over half a billion customers would ever exploit that advantage in any way." ([1])

by Greg Linden at July 10, 2014 07:03 AM

June 28, 2014


Practical Blockchain Telegraphy.

Mircea Popescu writes:

‘Now making an irc channel is quite the pleasant experience : you create something out of nothing, get to name it and are now the boss of it. For a generation devoid of proper “empire building” avenues, this is about as cool as it gets. So you can do anything you wish, right ? Your channel, your rules, that’s the deal! … But all is perhaps not quite right in this world. Driven by a deep seated intuition that perhaps no, perhaps this isn’t the deal, perhaps the whole charade’s an illusion, the kids in question move compulsively to test it. So they dump child porn or stolen bank credentials or whatever it is that’s taboo in the larger society they fear they might have failed to individuate from. … So how did the story end ? Why, with the Freenode admin pointing out that no, you can’t ban Freenode admins from your Freenode channel. Because while it is “yours”, it is nevertheless… a Freenode channel. And so the adventure came to an end, the kids weren’t interested in wasting time with the rotten foundation of pretend-ownership, and pretend-control and pretend-alodial, and Freenode wasn’t interested in wasting time with some users that were inclined to verify the limits of “your” and “yours”.’

At this point, the tale is familiar – in one form or another – to nearly everyone with an Internet connection. But, as Mr. P points out, we are now reasonably well-equipped to change the story’s ending:

‘So yes, because Bitcoin now I can have, if I feel like it, an irc network that works exactly the way those kids’ didn’t, a decade ago. They had no choice but to go home and cry about it, about their failure, about their dashed dreams and hopes, yet guess what : I do.’

The exact mechanics of this hypothetical network were left as ‘an exercise for the alert reader.’ Let’s explore one possible solution to the exercise!

What follows is perhaps the most obvious conceivable recipe using the ingredients at hand. The ’server’ and each ‘client’ need only a standard copy of ‘bitcoind.’ The clients – at least initially – will each need a certain amount of Bitcoin.

First, the engineering envelope – the smallest and largest useful transaction for this blockchain abuse.

Let’s start with the lower bound. Bitcoin transactions containing outputs of less than 0.01 are discouraged by the network (miners demand extra ‘fee’ to incorporate such a transaction into a block.) Hence all outputs must exceed the ‘dust constant.’

As for the upper bound: the Bitcoin protocol gives us 8 bytes for a transaction amount (in units of satoshi, 1×10^-8 BTC.) However, we cannot use all eight bytes for payload – unless the channel is to be inhabited solely by royalty.

So let’s pick an ‘amplitude’ range: an amount of 0.01001 to 0.01256 BTC for each individual byte of the payload.

But now we are faced with the fact that a Bitcoin transaction will only be incorporated into the blockchain in a timely manner if a ‘miner’s fee’ (traditionally, 0.01 or so) is included. So we do not want one transaction per byte of payload. Hence, an engineering compromise is suggested:

The ’server’ creates a ‘channel’ by generating a certain number (let’s say 32, but this is not a critical constant) of Bitcoin addresses (public/private key pairs) - A1, A2, … AN.

Server and client alike decode messages by examining the Bitcoin blockchain and parsing out amounts sent to these addresses. No use is made of ‘exotica’ – i.e. human-readable fields of the Bitcoin transaction, and other ‘garbage’ that might conceivably end up pruned from the blockchain in the future.

The only distinction between the server and the clients is that the server is the fellow who has the private keys to the address array. The one and only responsibility of the server is to re-send the coins back to the originators (selectively! see below.)

What ideal value of N to pick for AN? This is left as an exercise for the alert reader. Consider that miner’s fees increase with ‘mass’.

A client may speak on the channel by emitting a transaction of the following form:

bitcoind sendmany "" '{"A1":0.01256,"A2":0.01256,"A3":0.01256}'

This transmits the string 0xFFFFFF. If a string longer than N is to be transmitted, the address index must simply loop around. So, if we wish to ’speak’ the ascii string ‘foobar’:

bitcoind sendmany "" '{"A1":0.01103,"A2":0.01112,"A3":0.01112}'

bitcoind sendmany "" '{"A1":0.01099,"A2":0.01098,"A3":0.01115}'

Two transactions. If we assume a miner’s fee of 0.01 BTC included with each, the total cost of transmission is 0.08639 BTC (about $50 USD at today’s exchange rate.) Yes, 19th century ‘Western Union’ telegraph looks like a bargain by comparison. But see below.
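Read off the examples above, the encoding can be sketched as follows. This is an assumed reading of the scheme (payload byte b maps to 0.01001 + b × 0.00001 BTC, so 0x00 → 0.01001 and 0xFF → 0.01256); A1..AN are placeholder address labels, not real Bitcoin addresses:

```javascript
// Sketch of the byte-to-amount encoding (assumed from the examples above):
// each payload byte b becomes an output of 0.01001 + b * 0.00001 BTC.
const N = 3;        // channel addresses per transaction (the post suggests ~32)
const BASE = 1001;  // 0.01001 BTC expressed in units of 1e-5 BTC

function encode(payload) {
  const txs = [];
  // Chunk the payload N bytes at a time: one sendmany per chunk,
  // one address per byte, looping around the address index.
  for (let i = 0; i < payload.length; i += N) {
    const tx = {};
    for (let j = 0; j < N && i + j < payload.length; j++) {
      tx[`A${j + 1}`] = (BASE + payload[i + j]) / 100000;
    }
    txs.push(tx);
  }
  return txs;
}

console.log(encode(Buffer.from("foobar")));
// first chunk: 'f' (102) -> 0.01103, 'o' (111) -> 0.01112, 'o' -> 0.01112
```

Each returned object is the amount map you would hand to `bitcoind sendmany`, which is why 'foobar' costs two transactions with N = 3.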

We can easily anticipate the objection: “No one would speak on this telegraph – they would soon go broke.”

The answer: the channel operator would return the coin to the originating address. Not immediately, of course (minimizing transaction fees.) And, naturally, not to everyone – merely those who are welcome guests in the channel. Unwelcome guests (spammers, and bozos of any and all other species) will find that they have stumbled into a ruinously expensive waste of time – because they will never see their coin again. (What to do with the ‘bozo coin’ is for the operator to decide. He can buy beer with it, pay for his bandwidth, or parcel it out to the welcome guests – whatever pleases him.)

Thus, we have an automatic moderation mechanism. Likewise, we get cryptographically-strong identity ‘for free’ – originating addresses of the transactions become user identities (they can be matched with human-readable names in some agreed-upon fashion, e.g. using a magic ‘hello’ packet.) We likewise get a ‘free’ perpetual log of the channel conversations.

If the system is run as a closed loop, the only ultimate cost to the operator and clients is the accumulated miner’s fees – which can be minimized by choosing a large N for the address array, and by returning the coins to their originators in infrequent (e.g. weekly) parcels.

The mechanics of this ‘chat’ apparatus will encourage brevity – and perhaps, clarity of thought. Or, alternatively, it could easily degenerate into a kind of mournful ‘twitter.’ Only one way to find out…

One obvious criticism – ’slow.’ Sure, we can wait eight minutes for a confirmed transaction. Or we can parse immediately. Depends on one’s taste.

by Stanislav at June 28, 2014 02:53 AM

June 27, 2014

Dan Bricklin

HTML5 First: Google innovators prototype in the browser

Two days ago, on Wednesday, I watched the Google I/O Keynote live stream. Early on in the two and a half hour mega-presentation they introduced their new "visual language" for UI design: Material Design. It has "materials" inspired by paper with shadows, bold colorful graphics, and "meaningful" motion/animation.

As a developer who makes heavy use of HTML5, what immediately struck me was this statement, made starting at about 20:20 into the video:

"Last year at I/O we announced Polymer, which is a powerful new UI library for the web.

Today, we're bringing you all of the Material Design capabilities to the web through Polymer. [Applause] As a web developer, you'll be able to build applications out of Material Design building blocks with all of the same surfaces, bold graphics, and smooth animations at 60 frames per second. [More applause, followed by the speaker smiling and ad-libbing: "That was good..."]

So, between the L preview and Polymer, you can bring the same, rich, fluid Material Design to every screen."

Wow, I thought, Google not only designed a mobile UI for their Java-driven devices, they went to the trouble of also then building it in HTML5 for web apps (mobile and desktop).

I was wrong. I did some looking at the documentation for Polymer, and in the FAQ I found this (emphasis added):

How is Polymer related to material design?

Polymer is the embodiment of material design for the web. The Polymer team works closely with the design teams behind material design. In fact, Polymer played a key role in material design's development: it was used to quickly prototype and iterate on design concepts. The material design components are still a work in progress, but will mature over the coming months.

So, the HTML5 version wasn't created after the native versions. It was the prototyping environment before the native code.

This is a great model to follow: Prototype, iterate, and even first ship, in HTML5. Once you know what you need, if necessary, take the time to do native code. This doesn't just apply to the old desktop (as it has for years). It also applies to today's polished, fluid mobile world.

As a developer who is closely connected to a system that produces HTML5, and that aids in rapid prototyping, I was delighted. Here are some of the leading-edge mobile developers, and they found that HTML5 has the power to do what they want, and do it quickly enough for the demands of the iterative design and testing that is so important in the mobile world. After hearing so many people claiming that "they hear" that HTML5 doesn't have the power for serious mobile applications, it was vindication to hear of people who actually build things choosing to go the HTML5 route, even when cost clearly wasn't the object, and succeeding.

To be fair, the Material Design developers did depend on the latest browser versions, which have built-in support for hardware-accelerated animation and compositing, and on a sophisticated JavaScript library to make coding easier. However, most mobile devices are now getting upgraded frequently to the latest browsers (Google announced better upgrading for Android in the same keynote and Apple brags about the large percentage of iOS users who run the latest version) and most mobile HTML5 developers and systems (the one I use included) make use of such libraries already.

I guess it's time for a new mobile-related tag: #HTML5first.

June 27, 2014 06:24 PM

June 26, 2014

Axis of Eval

Attack of the Monadic Morons

[Deleted. To be replaced with a proper rant at another time.]

by Manuel Simoni ( at June 26, 2014 05:00 PM

June 22, 2014

How to Node

Solving Coding Challenges with Streams

My first experience using Node.js for a programming challenge was agonizing. I devised a viable solution, but I couldn’t figure out an effective way to parse the input. The format was painfully simple: text piped via stdin. No problem, right? I burned over half my time on what should have been a minor detail, and I ended up with a fragile, zip-tie and duct tape hack that still makes me shudder.

The experience inspired me to find an idiomatic approach to programming challenges. After working through more problems, I arrived at a pattern I hope others will find useful.

The pattern

The main idea is this: create a stream of problems, and transform each problem into a solution. The process consists of four steps:

  1. Break the input into a stream of lines.
  2. Transform these lines into problem-specific data structures.
  3. Solve the problems.
  4. Format the solutions for output.

For those familiar with streams, the pattern looks like this:

var split = require("split"); // dominictarr’s helpful line-splitting module

process.stdin
    .pipe(split()) // split input into lines
    .pipe(new ProblemStream()) // transform lines into problem data structures
    .pipe(new SolutionStream()) // solve each problem
    .pipe(new FormatStream()) // format the solutions for output
    .pipe(process.stdout); // write solution to stdout

Our problem

To keep this tutorial grounded, let's solve a Google Code Jam challenge. The problem asks us to verify solutions to sudoku puzzles. The input looks like this:

2                  // number of puzzles to verify
3                  // dimensions of first puzzle (3 * 3 = 9)
7 6 5 1 9 8 4 3 2  // first puzzle
8 1 9 2 4 3 5 7 6
3 2 4 6 5 7 9 8 1
1 9 8 4 3 2 7 6 5
2 4 3 5 7 6 8 1 9
6 5 7 9 8 1 3 2 4
4 3 2 7 6 5 1 9 8
5 7 6 8 1 9 2 4 3
9 8 1 3 2 4 6 5 7
3                  // dimensions of second puzzle
7 9 5 1 3 8 4 6 2  // second puzzle
2 1 3 5 4 6 8 7 9
6 8 4 9 2 7 4 5 1
1 3 8 4 6 2 7 9 5
5 4 6 8 7 9 2 1 3
9 2 7 3 5 1 6 8 4
4 6 2 7 9 5 1 3 8
8 7 9 2 1 3 5 4 6
3 5 1 6 8 4 9 2 7

The format of our output should be:

Case #1: Yes
Case #2: No

where "Yes" means that the puzzle has been solved correctly.

Let's get started.


Our first step is to retrieve the input from stdin. In Node, stdin is a readable stream. Essentially, a readable stream sends data as soon as that data can be read (for a more thorough explanation, check out the readable stream docs). The code below will echo whatever's written to stdin:

process.stdin.pipe(process.stdout);
The pipe method takes all data from a readable stream and writes it to a writable stream.

It may not be evident from this code, but process.stdin pipes data in large chunks of bytes; we’re interested in lines of text. To break these chunks into lines, we can pipe process.stdin into dominictarr’s handy split module. npm install split, then:

var split = require("split");

process.stdin.setEncoding("utf8"); // convert bytes to utf8 characters

process.stdin
    .pipe(split()) // break the input into lines
    .pipe(process.stdout); // echo each line

Creating problems with transform streams

Now that we have a sequence of lines, we're ready to begin the real work. We're going to transform these lines into a series of 2D arrays representing sudoku puzzles. Then, we'll pipe each puzzle into another stream that will check if it's solved.

Node core's transform streams provide exactly the abstraction we need. Unsurprisingly, a transform stream transforms data written to it and makes the result available as a readable stream. Confused? It'll become clearer as we continue.

To create a transform stream, inherit stream.Transform and invoke its constructor:

var Transform = require("stream").Transform;
var util = require("util");

util.inherits(ProblemStream, Transform); // inherit Transform

function ProblemStream () {, { "objectMode": true }); // invoke Transform's constructor
}

You'll notice we're passing the objectMode flag to the Transform constructor. Streams normally accept only strings and buffers. We'd like ours to output a 2D array, so we need to enable object mode.

Transform streams have two important methods: _transform and _flush. _transform is invoked whenever data is written to the stream. We’ll use this to transform a sequence of lines into a sudoku puzzle. _flush is invoked when the transform stream has been notified that nothing else will be written to it. This function is helpful for completing any unfinished tasks.

Let's block in our transform function:

ProblemStream.prototype._transform = function (line, encoding, processed) {
    // TODO
};
_transform accepts three arguments. The first is the data written to the stream. In our case, it's a line of text. The second argument is the stream encoding, which we set to utf8. The final argument is a no-argument callback used to signal that we've finished processing the input.

There are two important things to keep in mind when implementing your _transform function:

  1. Invoking the processed callback does not add anything to the output stream. It merely signals that we've finished processing the value passed to _transform.
  2. To output a value, use this.push(value).

With this in mind, let's return to the input.

7 6 5 1 9 8 4 3 2
8 1 9 2 4 3 5 7 6
3 2 4 6 5 7 9 8 1
1 9 8 4 3 2 7 6 5
2 4 3 5 7 6 8 1 9
6 5 7 9 8 1 3 2 4
4 3 2 7 6 5 1 9 8
5 7 6 8 1 9 2 4 3
9 8 1 3 2 4 6 5 7
7 9 5 1 3 8 4 6 2
2 1 3 5 4 6 8 7 9
6 8 4 9 2 7 4 5 1
1 3 8 4 6 2 7 9 5
5 4 6 8 7 9 2 1 3
9 2 7 3 5 1 6 8 4
4 6 2 7 9 5 1 3 8
8 7 9 2 1 3 5 4 6
3 5 1 6 8 4 9 2 7

We immediately encounter a problem: our _transform function is invoked once per line, but each of the first three lines has a different meaning. The first line describes the number of problems to solve, the second is how many lines constitute the next puzzle, and the next lines are the puzzle itself. Our stream needs to handle each of these lines differently.

Fortunately, we can store state in transform streams:

var Transform = require("stream").Transform;
var util = require("util");

util.inherits(ProblemStream, Transform);

function ProblemStream () {, { "objectMode": true });

    this.numProblemsToSolve = null;
    this.puzzleSize = null;
    this.currentPuzzle = null;
}

With these variables, we can track where we are in the sequence of lines.

ProblemStream.prototype._transform = function (line, encoding, processed) {
    if (this.numProblemsToSolve === null) { // handle first line
        this.numProblemsToSolve = +line;
    }
    else if (this.puzzleSize === null) { // start a new puzzle
        this.puzzleSize = (+line) * (+line); // a size of 3 means the puzzle will be 9 lines long
        this.currentPuzzle = [];
    }
    else {
        var numbers = line.match(/\d+/g); // break line into an array of numbers
        this.currentPuzzle.push(numbers); // add a new row to the puzzle
        this.puzzleSize--; // decrement number of remaining lines to parse for puzzle

        if (this.puzzleSize === 0) {
            this.push(this.currentPuzzle); // we've parsed the full puzzle; add it to the output stream
            this.puzzleSize = null; // reset; ready for next puzzle
        }
    }
    processed(); // we're done processing the current line
};

process.stdin
    .pipe(split())
    .pipe(new ProblemStream())
    .pipe(new SolutionStream()) // TODO
    .pipe(new FormatStream()) // TODO
    .pipe(process.stdout);

Take a moment to review the code. Remember, _transform is called for each line. The first line _transform receives corresponds to the number of problems to solve. Since numProblemsToSolve is null, that branch of the conditional will execute. The second line passed to _transform is the size of the puzzle. We use that to set up the array that will contain our sudoku puzzle. Now that we know the size of the puzzle, the third line passed to _transform starts the process of creating the data structure. Once the puzzle is built, we push the completed puzzle into the output end of the transform stream and prepare to create a new puzzle. This continues until we're out of lines to read.

Solve all the problems!

Having parsed and created our sudoku puzzle data structure, we can finally begin solving the problem.

The task of "solving a problem" can be reformulated to "transforming a problem into a solution." That's exactly what our next stream will do.

As before, we'll inherit stream.Transform and enable object mode:

util.inherits(SolutionStream, Transform);

function SolutionStream () {, { "objectMode": true });
}

Then, we'll define a _transform method, which accepts a problem and produces a boolean:

SolutionStream.prototype._transform = function (problem, encoding, processed) {
    var solution = solve(problem);
    this.push(solution); // output the answer for this problem
    processed();

    function solve (problem) {
        // TODO
        return false;
    }
};

process.stdin
    .pipe(split())
    .pipe(new ProblemStream())
    .pipe(new SolutionStream())
    .pipe(new FormatStream()) // TODO
    .pipe(process.stdout);

Unlike the ProblemStream, this stream produces an output for each input; _transform executes once for every problem, and we need to solve every problem.

All we need to do is write a function that determines whether or not a sudoku problem is solved. I leave that to you.

Prettify the output

Now that we've solved the problem, our last step is to format the output. And, you guessed it, we'll use yet another transform stream.

Our FormatStream accepts a solution and transforms it into a string to pipe to process.stdout.

Remember the output format?

Case #1: Yes
Case #2: No

We need to track the problem number and transform the boolean solution into "Yes" or "No."

util.inherits(FormatStream, Transform);

function FormatStream () {, { "objectMode": true });

    this.caseNumber = 0;
}

FormatStream.prototype._transform = function (solution, encoding, processed) {
    this.caseNumber++;

    var result = solution ? "Yes" : "No";

    var formatted = "Case #" + this.caseNumber + ": " + result + "\n";

    this.push(formatted);
    processed();
};

Now, connect the FormatStream to our pipeline, and we're done:

process.stdin
    .pipe(split())
    .pipe(new ProblemStream())
    .pipe(new SolutionStream())
    .pipe(new FormatStream())
    .pipe(process.stdout);

Check out the complete code on GitHub.

One final note

The biggest win of using pipe is that you can reuse your code with any readable and writable stream. If your problem source is over the network, as in the DEF CON qualifier, replace process.stdin and process.stdout with network streams, and everything should "just work."

You'll need to tune this approach slightly for each problem, but I hope it provides a good starting point.

by (Chad Wyszynski) at June 22, 2014 11:33 PM

June 18, 2014

Axis of Eval

Obsession with mathematics

To put it bluntly, the discipline of programming languages has yet to get over its childish passion for mathematics and for purely theoretical and often highly ideological speculation, at the expense of historical research and collaboration with the other social sciences. PL researchers are all too often preoccupied with petty mathematical problems of interest only to themselves. This obsession with mathematics is an easy way of acquiring the appearance of scientificity without having to answer the far more complex questions posed by the world we live in.
I've replaced "economics" with "programming languages" in this quote from Phil Greenspun's blog. Seems appropriate, no?

by Manuel Simoni ( at June 18, 2014 06:45 PM

June 11, 2014

Bret Victor

Seeing Spaces

What if we designed a new kind of "maker space" -- a space that isn't just for putting pieces together, but also for seeing and understanding a project's behavior in powerful ways?

  • seeing inside
  • seeing across time
  • seeing across possibilities

"I think people need to work in a space that moves them away from the kinds of non-scientific thinking that you do when you can't see what you're doing -- moves them away from blindly following recipes, from superstitions and rules of thumb -- and moves them towards deeply understanding what they're doing, inventing new things, discovering new things, contributing back to the global pool of human knowledge."

by Bret Victor at June 11, 2014 04:44 PM

June 10, 2014


Tungsten Will Melt in Your Mouth!

But, of course – it won’t.

But let’s imagine that it were in someone’s financial – hell, geopolitical! interest – to convince the public that it will. The New York Times editorial might go like this: ‘you may have heard of tungsten, a metal, just like gallium; the latter, a favourite among stage magicians for melting at body temperature…’

Now, it is possible that accounts of luscious, easily-melted tungsten have yet to be printed in your friendly local fishwrapper merely on account of there being no one who wishes to pay for communicating this ‘fact’ to us. But I can’t help but suppose that there are other forces at work here.

Consider, for instance, the equally-factual statements ‘Bitcoin can be counterfeited at will and has no use value’ – or ‘being pwned is an inevitable fact of life’, or…

Somewhere between ‘tungsten melts in your mouth’ and the above ‘facts,’ there lies a kind of boundary. A line which professional liars cross at their peril. If there is an accepted, traditional term for this, I should like to learn it.

by Stanislav at June 10, 2014 01:23 AM

Bret Victor

June 05, 2014

Semantic Programming

Making reference to semiotics

For various professional and personal reasons progress on Semprola has been rather slow this year. But I have had quite a productive spurt of work in the last month as I have been noticing the connections between my work so far and some of the ideas in semiotics. This came about as I was looking for a replacement for the term 'reference' that I had been using to refer to a key type of data-structure that refers to other data-structure(s) in a possibly non-trivial way. My use of the term 'reference' is at odds with the way that philosophers use it (e.g. the sense/reference distinction) and I was increasingly unhappy with the way that I was using it. In comparison the semiotic term 'signifier' gives a much better indication of what these data-structures are for and how they are being used.

In particular I had been looking at the issues that arise as various different audiences use these signifiers (e.g. the author of the program, the computer agent running the program and then the user using the program). Semiotics seems to have an appropriate lexicon with which to talk about the way that the same signifying data-structure can have different meanings in different contexts and potentially change meaning over time.

I'm in the process of reading two books on semiotics at the moment, the first of which is a general introductory book, “Semiotics: The Basics”, and the second a PhD thesis in which someone has already looked at the "Semiotics of Programming". Once I've completed this reading I hope to finalise the terminology that I'm using and get back to completing the syntax of Semprola.

And, while to some this may seem like a classically unnecessary detour into something interesting but not essential to the project, for me this choice of core terminology is absolutely crucial as it will inform how I develop some of the context-aware features of Semprola (one of the last major conceptual things on my to-do list) and it will also colour everything I write about Semprola (although I know that most users will not be [initially] interested in this link with semiotics so it will certainly not be in the foreground!). 

A more minor interest in getting the terminology right (and finalised) is that I already have some re-organising of the underlying VM code that I need to do and when I do this I want to strip out the pervasive use of the term 'reference' and replace it with the 'right' term that I am more confident I will stick with. And I'm now fairly certain that 'signifier' is the right term. 

by (Oli) at June 05, 2014 03:59 PM

Greg Linden

Quick links

What caught my attention lately:
  • Fun data: "How to tell someone's age when all you know is her name" ([1])

  • "The possibility of proper tricorder technology in the future, scanning a bit of someone's blood and telling you if they have any diseases or anomalous genetic conditions" ([1])

  • Will self-driving vehicles appear first in trucking? ([1])

  • "Apple's moves into the world of fashion and wearable computing" ([1])

  • "Few people try to or want to use tablets like laptops" ([1] [2])

  • "While managers do indeed add value to a company, there’s no particular reason to believe that they add more value to a company than the people who report to them ... [You want] an organization where fairly-compensated people work together as a team, rather than trying to work out the best way to make money for themselves at the expense of their colleagues." ([1])

  • "Each meeting ... spawns even more meetings ... The solution ... reduce default meeting length from 60 to 30 minutes ... limit meetings to seven or fewer participants ... agendas with clear objectives ... materials ... distributed in advance .. on-time start ... early ending, especially if the meeting is going nowhere ... remove ... unnecessary supervisors." ([1])

  • Fun article on the history of the modern office: "The cubicle was actually intended to be this liberating design, and it basically became perverted" ([1])

  • "We were wrong about the first-time shoppers. They did mind registering. They resented having to register when they encountered the page. As one shopper told us, 'I'm not here to enter into a relationship. I just want to buy something.'" ([1])

  • Private investment in broadband infrastructure is actually dropping in the US ([1])

  • "Not only are packets being dropped, but all those not being dropped are also subject to delay. ... They are deliberately harming the service they deliver to their paying customers ... Shouldn't a broadband consumer network with near monopoly control over their customers be expected, if not obligated, to deliver a better experience than this?" ([1])

  • Fascinating data on cancer shows a surprising lack of linear relationship between aging and cancer ([1] [2])

  • "A wayward spacecraft ISEE-3/ICE was returning to fly past Earth after many decades of wandering through space. It was still operational, and could potentially be sent on a new mission, but NASA no longer had the equipment to talk to it ... crowdfunding project ... commandeer the spacecraft ... awfully long shot ... They are now in command of the ISEE-3 spacecraft." ([1])

  • I love the caption on this comic: "Somebody please do this and post it on YouTube so I can live vicariously through your awesomeness." ([1])

  • Hilarious SMBC comic on privacy and technology ([1])

  • Great SMBC comic: "Wanna play the Bayesian drinking game?" ([1])

  • Hilarious John Oliver segment on net neutrality ([1]) directs people to FCC website to comment, crashing FCC website ([2])

  • Very funny, from The Onion: "New Facebook Feature Scans Profile To Pinpoint Exactly When Things Went Wrong" ([1])

by Greg Linden ( at June 05, 2014 12:50 PM

June 04, 2014

Greg Linden

Project Euler and blending math and computer science for education

Project Euler is a simple and surprisingly good educational tool for a blend of computer science and math. Highly recommended.

You are given a problem (good examples: [1] [2] [3] [4]), go off and work on it in whatever programming language you like using whatever tools you like, and submit your answer (multiple submissions allowed). Simple, but surprisingly fun and interesting.

It's been around for a while (since 2006), and, though I've looked at it a few times, I only recently got addicted to it. It's not, as I first thought, just a series of interview-style coding questions, but a much more interesting set of deeper challenges in math that require programming to explore and solve. It's a great way to refresh on math and fun too.

Honestly, I can't say enough good things about it. I've blown hundreds of hours on some addictive video game before, addicted to the point that it occasionally interferes with work and sleep even, and this has the same feel. It's a great little educational tool and fun as well.

Definitely worth a look. Seems like it'd work for older teenagers too if you're looking for a summer project for a teen that already has some programming skill.

by Greg Linden ( at June 04, 2014 07:31 AM

June 03, 2014


Announcing HDMNode: a Node.JS based HDM Bitcoin wallet

I just released an open-source library called HDMNode. It’s a Node.JS based API server and client for hosted HDM (Hierarchical-Deterministic Multisig) Bitcoin wallets. If you’re interested in using it or contributing to it, please read on!

The goal of HDMNode is to make it easier for developers to build HDM wallets. Such wallets have significant security and privacy advantages over most popular wallet types and Bitcoin users (and the ecosystem) would benefit from a greater availability of high quality HDM wallet products. I believe there’s a dearth of HDM wallets because they’re fairly complicated to build. I hope that HDMNode will reduce the effort as well as provide developers a well audited and secure codebase on which they could safely rely. It would be a shame if every developer who wanted to build such a wallet had to face the same design issues and security pitfalls.

Why did I make HDMNode and why am I releasing it now? When I started working on this project I intended to build a complete HDM wallet product. However, as time went by I realized I had bit off a bit more than I could chew in the timeframe I had allotted to the project. While I made a lot of progress on the backend and the API client there was still a good amount of work to be done to make it production ready. In addition, the code needed many more eyeballs on it before I could be confident in its security. I didn’t want to risk shipping a product that holds people’s money with glaring flaws that I failed to catch. So, I decided that the best course of action was to open source the code and give other developers an opportunity to inspect the code, to use it in their own products, and to hopefully contribute back to the project.

While I expect HDMNode to be mostly used by hosted wallet providers, if HDMNode evolves into a complete open source wallet (with UI) it could give users who want to protect their privacy the option to host their wallet on their own servers. I don’t expect this to be the primary use case but I also don’t think that users must be forced to choose between security and privacy. With HDMNode, they could have both.

If HDMNode gains traction, I hope that its JSON-RPC based API will be standardized, allowing users to mix and match clients and servers that they trust and want to use.

What makes HDM wallets so great, anyway? HDM wallets’ strong security comes from their reliance on P2SH multisig addresses (as defined in BIP11 and BIP16). Such wallets store coins in addresses that are guarded by multiple private keys, each of which is generated on a different machine. The typical setup is 2-of-3, where the client, the server, and a backup machine each have a key, and at least 2 keys are required to sign off on every transaction. This is far more robust than non-multisig wallets, where the machine that holds the key that protects the coins becomes a single point of failure from a security perspective: if that machine gets compromised, the coins are gone.

HDM wallets also offer much better privacy than non-HD multisig wallets. Such wallets rely on a fixed set or subset of keys to generate P2SH addresses. Anyone observing the blockchain could link those addresses to each other (at least after their coins are spent) and can therefore derive the user’s balance and transaction history. HDM wallets don’t have this weakness because they can generate an arbitrary number of addresses, each made from a unique set of keys, from a single randomly generated seed, as defined in BIP32. Without knowing the wallet’s seed, it’s impossible to associate those addresses with each other.

HDM wallets have a couple of additional benefits shared with their non-multisig HD counterparts. They make it easy for users to back up their wallet once by backing up the wallet’s seed and restore it fully at a later point regardless of the number of transactions the user has performed. This is possible because having the wallet’s seed allows you to scan the blockchain and find all the transactions that sent or received coins from or to addresses that can be derived from that seed. HDM wallets also allow users to set up a hierarchical tree of sub-wallets, where having the parent wallet’s keys allows you to derive the child wallet’s keys but not vice versa. This feature can be useful for organizations or groups who want to give some members limited ability to spend the organization’s coins or observe incoming transactions to other branches of the tree.

HDM wallets have real benefits, but, as you might have guessed, they’re not perfect. Besides their implementation complexity, the main downside of HDM wallets is the initial added friction when creating the wallet, at least compared to pure hosted wallets. Users have to pick a strong password and remember it (no password recovery!). If they forget their password or fail to properly back up their keys, they can lose their coins. Also, to get the full security benefits of HDM wallets users should set up the backup key pair on a separate machine (ideally an offline one). If a user doesn’t do that, and her machine is compromised at wallet creation time, a hacker could steal her coins once they’re deposited into the wallet. Despite this weakness (which users can avoid without too much effort), HDM wallets are still much more secure than client-side wallets, which expose the keys that guard the coins every time the user transacts, making their vulnerability window much bigger.

HDMNode is currently designed for supporting 2-of-3 multisig wallets, with one key on the server, one key on the client, and one key in backup (ideally offline), which I expect to be the most popular option for HDM wallets. This setup combines the best security features of wallets that store private keys client side and hosted wallets that store private keys on the server. In HDMNode, the coins are safe whether the client or the server gets hacked (but not both). If the server disappears or becomes inaccessible, the user can recover her coins using the backup (offline) key. An attacker must compromise at least 2 of these different systems to steal the user’s coins. While the server can’t steal the coins, it can act as a security service for the client by enforcing 2 factor auth and by refusing to sign off on transactions that seem suspicious or that are against user-defined rules such as daily spend limits. This protects the user against attacks where the attacker gains control over the user’s device and tries to steal the user’s coins by sending spend requests to the server.

If you’re sold on HDM wallets and you want to build one, I hope you use HDMNode. I’ll be happy to take contributions from anyone who wants to make Bitcoin wallets more secure and trusted!

by Yariv ( at June 03, 2014 10:29 AM

Bitcoin's money supply and security

(This was originally posted at

It’s evident that Bitcoin has been designed to reward hoarding by its early investors. It’s encoded in the protocol that the supply of new bitcoins will gradually diminish, halving every four years until the last bitcoin will be mined in 2140. In its first four years, each Bitcoin block rewarded the miner with 50 btc. Today, the reward is 25 btc; in 3-4 years, it will be 12.5 btc, and so on.
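The supply cap implied by this halving schedule is easy to sketch in a few lines. This is a back-of-the-envelope calculation, not consensus code (real nodes count in integer satoshis and truncate); it uses the fact that the reward halves every 210,000 blocks, which at one block per ten minutes is roughly every four years:

```javascript
// Rough sketch of Bitcoin's issuance schedule: 50 BTC per block,
// halving every 210,000 blocks, until the reward drops below 1 satoshi.
function totalSupplyBTC () {
    var reward = 50; // BTC per block in the first era
    var total = 0;
    while (reward >= 1e-8) {      // 1e-8 BTC = 1 satoshi, the smallest unit
        total += reward * 210000; // blocks per halving era
        reward /= 2;
    }
    return total;
}

console.log(totalSupplyBTC()); // just under 21 million
```

The geometric series 50 + 25 + 12.5 + … converges to 100 BTC per block-slot across all eras, which is where the famous ~21 million cap (210,000 × 100) comes from.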

Furthermore, mining bitcoins used to be much more accessible. In the early days of the network the hash rate (the global hashing power dedicated to mining bitcoins) was much lower. Since less hashing power was competing for the discovery of new bitcoins it used to be possible to obtain a large number of new bitcoins by mining with commodity hardware. Today, however, it’s difficult to mine profitably without custom-built, bleeding edge ASICs and cheap electricity. The combination of cheap price, high block rewards, and low competition in mining has allowed the earliest bitcoin buyers and miners to accumulate large stakes of the currency.

The scheme seems to have worked as planned. Bitcoin’s price rose from ~$14 in early 2013 to more than $1,200/btc near the end of last year (it’s now hovering around $700), yielding fantastic returns to those who accumulated a large stake of bitcoins early in the game.

The sharp rise in bitcoin’s price has led many people to call Bitcoin a bubble or Ponzi scheme. I believe that the Ponzi scheme accusation is misplaced because there’s no apparent intent to enrich early investors at the expense of late adopters or to cause late adopters any losses, which is characteristic of Ponzi schemes. In addition, while Bitcoin’s price has been undeniably volatile, whether it’s bubble or not remains to be seen. If people see Bitcoin as a reliable store of value, like gold, it’s quite possible its price will increase over time. Of course, it’s also conceivable that its price will plummet if, say, someone invents an alternative crypto currency that’s better than Bitcoin in every way and users adopt it in droves.

Whether Bitcoin’s skewed wealth distribution is fair or not is worth debating. I think that there are merits to both sides of the argument. Early investors should be rewarded for taking a risk, but if Bitcoin keeps appreciating it could be problematic that so much of its wealth should be concentrated in the hands of a few people. What is clear to me, however, is that if Bitcoin hadn’t rewarded hoarding by early investors, Bitcoin would have been much more vulnerable. In fact, it may not have been a viable new cryptocurrency at all.

The reason is that the hoarding behavior indirectly gives miners a much needed incentive to secure the network in its early days. Bitcoin can only be secure when a large amount of mining power is spent protecting the network from a 51% attack (an attack where a single entity controls 51% of the mining power and is then able to essentially rewrite the history of the blockchain and thereby launch double spend attacks). For a completely decentralized system designed for moving money, Bitcoin’s security so far has been remarkably strong. It’s unclear exactly how much it would cost to launch a 51% attack against Bitcoin, but I’ve heard estimates from a few hundred million to over a billion dollars. Regardless of the actual number, the aggregate amount of mining power dedicated to securing the network is very high, and gaining control over 51% of it to launch such an attack would be very expensive.

Miners aren’t volunteering their computers to the network altruistically. They have a dual incentive to mine: every time they mine a new block they earn new coins as well as transaction fees from anyone whose transaction is contained in the block. Five years into Bitcoin’s creation, the transaction fees that miners make are still quite small compared to the amount miners earn in “block rewards” through minting new coins (only 0.29% according to

Earnings from any given block are determined by the total market cap for Bitcoin at the moment in time when the block is mined and the basic forces of supply and demand are what determine the market cap. The demand for bitcoins is driven by new investors as well as by users who want to acquire bitcoins in order to send them to other people or to buy things. The supply is driven by miners who use the newly minted coins to pay for their operations as well as by investors who sell their coins.

If Bitcoin's incentives were inverted and investors were encouraged to sell their coins rather than hoard them, the supply of bitcoins would increase and the price would drop. Miners would lose a proportional incentive to contribute the computational power needed to secure the network.

When early investors hoard their coins, they prop up the price, which increases Bitcoin's market cap and attracts more mining power to protect the blockchain. This primes the network's mining power in its early days, before enough transactions flow through the network to provide large numbers of miners with sufficient transaction fees to continue mining profitably. Because the minting of new coins dilutes the ownership stake of early investors, early investors who hoard their bitcoins are arguably paying (indirectly) for Bitcoin's security ahead of Bitcoin's readiness to be used as a currency by large numbers of people. In fact, the current price per transaction (measured by dividing miner revenues by the number of transactions) is $34.68, paid almost exclusively out of the block reward, i.e., by existing bitcoin holders.
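The price-per-transaction metric itself is simple arithmetic: daily miner revenue divided by daily transaction count. All inputs below are hypothetical round numbers picked to land near the quoted figure, not real network statistics:

```python
def price_per_transaction(blocks_per_day: int, subsidy_btc: float,
                          fees_btc: float, btc_price_usd: float,
                          tx_per_day: int) -> float:
    """Miner revenue per transaction, in USD."""
    daily_revenue_usd = blocks_per_day * (subsidy_btc + fees_btc) * btc_price_usd
    return daily_revenue_usd / tx_per_day

# ~144 blocks/day, 25 BTC subsidy, hypothetical BTC price and tx volume:
cost = price_per_transaction(144, 25.0, 0.073, 650.0, 67_000)
```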

The current price per transaction seems unsustainably high — and it is. To ultimately succeed, Bitcoin must see growing transaction volumes and a gradual inversion of the relationship between transaction fees and block rewards. Barring these trends, mining revenue will likely decline, causing miners to eventually drop out of the network. If a large portion of miners do so, the network's security will be at risk, causing investors to flee, the price to drop, block rewards to depreciate, more miners to leave, and so on, in a downward spiral.

While this dystopian scenario is possible, it’s unlikely in the near term: Bitcoin is only five years old and, by design, miners will continue being rewarded with new bitcoins for over a century. Until then, Bitcoin has plenty of time to gain popularity as a transactional currency. This will require much infrastructure to be built, from exchanges to wallets to payment providers, but much of the work is already under way. It will also require greater acceptance by merchants and users.

Thinking about this makes me appreciate the cleverness of Bitcoin's design. On the surface, it offers a novel solution to the problem of distributed consensus (a.k.a. the Byzantine Generals' Problem). But beyond that, it's also a very carefully orchestrated economic system that would have failed very easily — and quickly — if its creator(s) hadn't so deliberately aligned the incentives of investors, miners, and users, seemingly with an eye towards maximizing the network's security throughout its stages of adoption.

Such conceptual cleverness, however, cannot guarantee lasting success in the real world, where it remains to be seen whether Bitcoin can withstand market, regulatory, and competitive forces. It's conceivable that someone will invent an altcoin that's even better optimized for rewarding miners (for example, an altcoin with a perpetual inflation rate that's high enough to give miners significant additional revenue but low enough not to scare away investors). If this happens, and this altcoin one day surpasses Bitcoin in mining power, will Bitcoin remain relevant? It's hard to say, so grab some popcorn and enjoy the ride.

(Full disclosure — I own some bitcoins, which I bought in late 2013.)

by Yariv at June 03, 2014 09:37 AM

How to secure your bitcoins

(This was originally posted a few months ago at

In the past few months, I’ve spent a good amount of time investigating different solutions for Bitcoin storage. I’m writing this to share the knowledge I’ve gained and to help you make informed choices about securing your bitcoins. I won’t cover all of the products in this space — that would require a much longer post — just the ones that I think are relevant to the average user.

Before I go into the details, I want to emphasize that great solutions to this problem don't exist. Every option involves different security/usability tradeoffs: the more usable ones put your coins at greater risk of theft, and the more secure ones put your coins at greater risk of loss — it's entirely possible to store your coins so securely that you end up securing them from yourself. With this in mind, I'll walk you through the options that I think strike the right balance for most people. The only condition of my advice is that if you follow it and end up losing your coins, you won't blame me for it!

If you have a small amount of coins, if you need them available online for day-to-day spending, or if you want a user-friendly option, use Coinbase or Both have been around for a few years — an eternity in Bitcoin terms. Both have mobile apps, at least for Android, but iOS users are currently out of luck (sadly, Apple has removed all Bitcoin wallets from the App Store). Both let you easily access your account from multiple machines, and both provide two-factor auth, an important requirement for online wallet security. Coinbase also has daily spend limits after which a second two-factor auth check kicks in, which is a nice security feature.

The main difference from a security perspective between Coinbase and is that Coinbase holds the private keys for your bitcoins on their servers, whereas encrypts them on the client with your password and stores only the encrypted keys on the server.

As a consequence, if you use Coinbase and Coinbase gets hacked or disappears tomorrow due to some calamity you will lose your coins.

This may sound alarming, but I believe the probability that this would happen is low. Coinbase's team is competent, they follow strong security practices, and they're backed by some of the best VCs in the industry. However, the risk of loss does exist, so I wouldn't recommend keeping a significant chunk of your life savings in Coinbase.

(Remember: no Bitcoin wallet has the equivalent of FDIC insurance like a bank account. Once the coins are gone, they’re gone.)

Using protects you from the service disappearing or getting hacked: if you keep a wallet backup, you can decrypt your private keys locally, as described here. However, you can still lose your coins if your phone or computer gets hacked, or if an attacker gets hold of your encrypted wallet and you've chosen a weak password, which would let the attacker brute-force your private keys and steal your coins.

(When using Bitcoin wallets, always choose a secure password. It should be long, it should contain a combination of letters, numbers, and symbols, and it should be unique. Never reuse a password from another service, because a breach of that service could compromise your wallet.)
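As a rough sanity check on password strength, you can estimate brute-force resistance in bits of entropy. The heuristic below assumes characters are chosen uniformly at random, so it overstates the strength of human-chosen passwords; treat it as an upper bound:

```python
import math

def password_entropy_bits(length: int, alphabet_size: int) -> float:
    """Entropy of a password of `length` uniformly random characters
    drawn from an alphabet of `alphabet_size` symbols."""
    return length * math.log2(alphabet_size)

weak = password_entropy_bits(8, 26)     # lowercase only: ~37.6 bits
strong = password_entropy_bits(16, 94)  # printable ASCII: ~105 bits
```

An offline attacker who has your encrypted wallet can try billions of guesses per second against a weak password, and every additional bit doubles their work.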

If you do use, you should avoid their web-based wallet and only use their native apps or browser extensions (preferably the Chrome app), which you can download from their official site. This is because JavaScript cryptography in the browser is inherently insecure. In fact, you should never trust any wallet that is web based.

 has open-sourced all of their client wallet code, so its security can be vetted by experts. This is a baseline requirement for any wallet application that directly handles private keys.

Coinbase has also open-sourced their Android app and their now-delisted iOS app, but this is less relevant because, unlike's, their client apps don't touch private keys. Nonetheless, Coinbase should be given credit for open-sourcing their apps, as it gives security experts the ability to at least rule out some possible attacks.

This covers the options that I consider user friendly yet still decently secure. Let’s move on to the options that would give you much greater control over the security of your coins at the cost of much greater complexity. As you’ve probably guessed, this involves putting them in cold storage.

The two main contenders in this arena are Armory and Electrum. Both let you generate your private keys on an offline machine and transfer only your public keys to an online machine, where they can receive bitcoins but not send them. Both clients are deterministic, which means all of their private keys can be generated from a single seed — a randomly generated 128-bit number or an easily remembered passphrase — which makes them easy to back up. Being deterministic also allows them to generate new receiving addresses from the seed's public key. This is an important feature because in most cases you don't want to reuse addresses when receiving bitcoins, in order to protect the privacy of your wallet balance on the public blockchain.

The main difference between Armory and Electrum is that Armory downloads the full blockchain (Armory uses bitcoind as its backend), while Electrum uses a third-party server to receive information only about the addresses it holds. This makes Armory slow, private, and secure, and Electrum fast, less private, and somewhat less secure. Electrum is less private because the remote server your client connects to knows your IP address and which addresses your wallet requested. It's also less secure because a malicious remote server could lie to the client about its bitcoin balance. However, this weakness doesn't compromise your private keys, which is the primary concern for offline wallets.

Of course, both products are open source, which is a baseline security requirement for client side wallets.

MultiBit is another notable option because, like Armory, it relies on the P2P network to query the state of the blockchain. MultiBit is also fast because it only queries the subset of the blockchain's blocks that are relevant to the addresses in the wallet (this is called SPV mode). This makes MultiBit more user friendly than Armory, at the cost of some security: an attacker who controls the internet connection could feed the client false information about the wallet's balance, similar to a malicious Electrum server.

MultiBit is also more private than Electrum because, rather than querying the addresses it cares about directly, it queries them using a Bloom filter that matches a superset of those addresses. However, I don't know precisely how much privacy this gives you, because presumably the remote nodes could infer which addresses the user owns within some confidence interval. Maybe someone who's an expert in SPV mode could expand on this.
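A minimal Bloom filter makes the privacy mechanism concrete: lookups can return false positives but never false negatives, so a remote node learns only a superset of the wallet's addresses. This sketch illustrates the data structure itself, not Bitcoin's actual BIP37 wire format:

```python
import hashlib

class BloomFilter:
    """Tiny Bloom filter: a bit array plus k salted hash functions."""
    def __init__(self, size_bits: int = 64, num_hashes: int = 2):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = 0

    def _positions(self, item: str):
        # Derive k bit positions from salted SHA-256 digests.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.size

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item: str) -> bool:
        # True for every added item; may also be true for items never
        # added (false positives), which is what provides the privacy.
        return all(self.bits & (1 << pos) for pos in self._positions(item))

bf = BloomFilter()
bf.add("my-wallet-address")
```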

I don't recommend MultiBit at the moment because it's not deterministic, which makes it harder to back up. The developers have announced that a future release will support deterministic wallets, at which point MultiBit will become a strong contender for secure yet usable offline bitcoin storage.

The most important thing you have to remember before you embark on this cold storage security journey (and it is a serious journey, as you’ll soon see), is that if your private keys ever touch a machine that’s connected to the internet you should assume they’re compromised and that your coins will be stolen.

This is because, if you’re truly paranoid, you should know that computers fundamentally cannot be trusted. It’s impossible to know what really goes on within a computer. It could have viruses, rootkits, software vulnerabilities, and even compromised hardware. If that computer has access to your private keys and it can send them over the internet, whoever controls your computer can steal your bitcoins.

With this warning in mind, let’s walk through the steps you should take to set up secure cold storage. I’m going to describe the Electrum method because I’m more familiar with it but Armory should be similar.

1) Get an old computer you won't use for anything else. Reformat it and install Linux on it. I recommend either Debian or Ubuntu. Make sure this computer never connects to the internet.

(When you format this machine, you should ideally encrypt your partitions — including your swap partition — for extra security. Otherwise, your OS could inadvertently write your private keys unencrypted to disk while swapping them out of memory, allowing an attacker who gets his hands on the machine to steal your private keys.)

2) Download Electrum onto your online machine and copy it to your offline machine from a USB drive.

(Note that even USB drives can carry viruses, so it’s recommended to use a new USB drive that hasn’t touched any other machines. However, those viruses are unlikely to infect Linux machines, so if you followed step 1 you should be fairly safe).

3) Verify the binary’s GPG signature before you install it (these MultiBit instructions should apply to Electrum users too). Follow the installation instructions to install Electrum on both machines.

4) On the offline machine, create a new wallet. Choose a strong password for encrypting this wallet on disk. Write the wallet’s seed on a piece of paper.

5) Copy the public key from the offline machine to the Electrum client running on your online machine. At this point, Electrum on the online machine can generate addresses for receiving bitcoins but can't spend them, because it doesn't have access to the private keys, which are safely stored exclusively on the offline machine. (This process is described in more detail here.)

6) As a test, send a small amount of coins to the first few addresses generated by the wallet. Then create a new wallet on the online machine, enter the seed from the offline machine, and verify that your wallet has been completely restored and that you can spend the funds. Send the coins back to the wallet from which you originally sent them, to verify that Electrum can send them successfully.

7) If everything worked as expected, repeat steps 4-5. You should repeat those steps because the moment you entered your seed into the online machine, you may have compromised it, so you shouldn't use that seed anymore to protect your offline keys.

(If you're truly paranoid, you should use something like Diceware to generate your seed, because there's a chance your computer's random number generator won't generate sufficient entropy. It sounds crazy, but this kind of bug has happened before with certain Android wallets.)
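The Diceware idea is simply to draw words from a fixed list using an entropy source you trust — physical dice or the OS CSPRNG — instead of a possibly flawed userspace RNG. A sketch (the word list here is a tiny stand-in; real Diceware indexes a 7,776-word list with five dice rolls per word):

```python
import secrets

# Stand-in word list; a real Diceware list has 7,776 words (6^5),
# one for each possible outcome of five dice rolls.
WORDS = ["abacus", "bramble", "cobalt", "dahlia", "ember", "fjord",
         "gable", "hollow", "ingot", "juniper", "kettle", "lantern"]

def diceware_passphrase(num_words: int, wordlist=WORDS) -> str:
    """Generate a passphrase using the OS CSPRNG via the secrets module."""
    return " ".join(secrets.choice(wordlist) for _ in range(num_words))

phrase = diceware_passphrase(6)
```

Each word drawn from a true 7,776-word list adds about 12.9 bits of entropy, so six words give roughly 77 bits.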

8) Send your remaining coins to the new wallet you created. Store the seed on paper in a secure place, or in multiple secure places as you see fit (e.g. in safes, bank vaults, or wherever else you feel safe).

9) For even stronger security of your paper backups, consider generating two-factor paper backups for your wallet's seeds by encrypting them with your password, as described in BIP38. This prevents anyone who gets hold of your paper backups from stealing your coins unless they also know your password. The process for doing this is left as an exercise for the reader.

If you’ve read this far, this is a good time to stop and ask yourself: are you sure you still want to own bitcoins? ☺

As you can see, Bitcoin security is quite complicated and hard to get right, even for people who have the understanding and the patience to put their coins in offline storage. In my opinion, this is one of the main reasons that Bitcoin isn't quite ready for the mainstream.

That said, the future for Bitcoin security looks bright. New wallets that use multisig addresses to protect bitcoins behind multiple private keys are coming, and once they’re vetted by the experts I’ll update this post or write a new post with my latest recommendations. I’ve also started working on a project that I hope will improve security for Bitcoin users but I‘m not ready to make any promises or announcements about it. ☺

Got feedback? Join the conversation or follow me on Twitter.

by Yariv at June 03, 2014 09:37 AM

June 02, 2014


Unifying dynamic and static types with LFE

In my last post I described how to use LFE to overcome some of the weaknesses of parameterized modules. Unfortunately, all is not rosy yet in the land of LFE types. Parameterized modules allow you to create only static types. The compiler doesn't do static type checking, but you have to define the properties of your types at compile time. This works in many cases, but sometimes you want totally dynamic containers that map keys to values. In Erlang, this is typically done with dicts. We could still use them with LFE, but I don't like having different methods of accessing the properties of objects depending on whether their types were defined at run time or compile time.

Let's use macros to solve the problem.

In my last post, I relied on the built-in 'call' function to access the properties of static objects. Let's create a wrapper around 'call' that lets us access the properties of dicts in exactly the same manner as we do properties of other objects:

We can use 'dot' to get and set the properties of both dicts and static objects:

> (: dog test)
(lola rocky)

I think this is kind of cool, though to be honest I'm not entirely sure it's a great idea to obfuscate in the code whether we're dealing with dicts or static objects.

by Yariv at June 02, 2014 06:01 PM


Knowing It and Seeing It

From Alan Kay, Powerful Ideas Need Love Too!,

Let me start the conversation by showing a video made by the National Science Foundation at a recent Harvard commencement, in which they asked some of the graduating seniors and their professors a few simple questions about what causes the seasons and the phases of the moon. All were confident about their answers, but roughly 95% gave explanations that were not even close to what science has discovered. Their main theories were that the seasons are caused by the Earth being closer to the sun in summer, and that the phases of the moon are caused by the Earth’s shadow. Some of the graduates had taken quite a bit of science in high school and at Harvard. NSF used this to open a discussion about why science isn’t learned well even after years of schooling. And not learned well even by most of the successful students, with high SATs, at the best universities, with complete access to computers, networks, and information.

My reaction was a little different. I kept waiting for the “other questions” that NSF should have asked, but they never did. I got my chance a few weeks later after giving a talk at UCLA. I asked some of the seniors, first year graduate students, and a few professors the same questions about the seasons and the phases of the moon and got very similar results: about 95% gave bogus explanations along the same lines as the Harvard students and professors. But now I got to ask the next questions.

To those that didn’t understand the seasons, I asked if they knew what season it was in South America and Australia when it is summer in North America. They all knew it was winter. To those that didn’t understand the phases of the moon, I asked if they had ever seen the moon and the sun in the sky at the same time. They all had. Slowly, and only in a few, I watched them struggle to realize that having opposite seasons in the different hemispheres could not possibly be compatible with their “closer to the sun for summer” theory, and that the sun and the moon in the sky together could not possibly be compatible with their “Earth blocks the sun’s rays” theory of the phases.

To me NSF quite missed the point. They thought they were turning up a “science problem,” but there are thousands of science “facts” and no scientist knows them all; we should be grateful that the Harvard and UCLA students didn’t “know the answers.” What actually turned up is a kind of “math problem,” a thinking and learning problem that is far more serious.

Why more serious? Because the UCLA students and professors (and their Harvard counterparts) knew something that contradicted the very theories they were trying to articulate and not one of them could get to that contradictory knowledge to say, “Hey, wait a minute…”! In some form, they “knew” about the opposite seasons and that they had seen the sun and the moon in the sky at the same time, but they did not “know” in any operational sense of being able to pull it out of their memories when thinking about related topics. Their “knowings” were isolated instead of set up to be colliding steadily with new ideas as they were formed and considered.

From Julian Jaynes, The Origin of Consciousness in the Breakdown of the Bicameral Mind (pg 389-91),

If a hypnotized person is told to walk across the room, and a chair has been placed in his path, and he is told that there is no chair there, he does not hallucinate the chair out of existence. He simply walks around it. He behaves as if he did not notice it — which of course he did since he walked around it. It is interesting here that if unhypnotized subjects are asked to simulate hypnosis in this particular situation, they promptly crash into the chair, since they are trying to be consistent with the erroneous view that hypnosis actually changes perceptions.

Hence the important concept of trance logic which has been brought forward to denote this difference. This is simply the bland response to absurd logical contradictions. But it is not any kind of logic really, nor simply a trance phenomenon. It is rather what I would prefer to dress up as paralogical compliance to verbally mediated reality. It is paralogic because the rules of logic (which we remember are an external standard of truth, not the way the mind works) are put aside to comply with assertions about reality that are not completely true…

It is paralogic compliance when a subject can accept that the same person is in two locations at the same time. If a hypnotized person is told that person X is person Y, he will behave accordingly. Then if the real person Y walks into the room, the subject finds it perfectly acceptable that both are person Y.

I can’t help but notice the spatial rather than merely logical contradictions in these examples: the seasons in the northern and southern hemispheres, seeing the moon and the sun simultaneously, walking around the nonexistent chair, a person in two places at once. We see one “area” of reality, and we see some other “area”, but we don’t see the two areas together, at once.

Unfortunately, we “learn” many facts the way the hypnotized patient “learns” that the chair does not exist or that person X is person Y. We’ll readily agree to the truth of the fact (proof by authority in this case), but we won’t see the fact in relation to the other ways we understand the world. We see many small pictures, but cannot change our perspective to see the pictures in relation to each other.

How can we combat our tendency to “deny” these contradictory perspectives?

First, we need to place our “knowledge” in an experiential setting, rather than a verbally-mediated setting. Schooling encourages us to file our knowledge via labels. We recall facts when prompted with “keywords”; the “key” unlocks the otherwise isolated knowledge. This mental organization encourages us to reason by verbally associating via the labels (”there’s too much crime, so we need more police”), rather than seeing a system of interrelated parts. You can challenge this tendency in your own thought by eliminating the verb “to be” from your vocabulary. This exercise unearths label-based reasoning.

Second, we need to improve our ability to see more at once. We can “enlarge” our mental space by challenging our imagination. Can you see how the tilt of the earth creates the seasons as it revolves around the sun? When you look at the moon, can you relate the earth, moon, and sun in your mind’s eye? (I have trouble with this one.) We also enlarge our mental space by externalizing it: drawing diagrams, making physical models, putting pictures on the wall. We get very powerful leverage by inventing new spatial representations. Our “picture” of the world changed with the invention of the map.

by Toby at June 02, 2014 01:28 AM

May 31, 2014

Bret Victor

May 29, 2014


Trust the principle

The cryptographic e-cash community was caught off-guard by BitCoin's adoption and success. One group of researchers from PARC and Stanford took heed and contemplated what factors contributed to that success. Distribution and decentralization were among the key features they identified as contributing to BitCoin's adoption and success. From the paper:
No central point of trust. Bitcoin has a completely distributed architecture, without any single trusted entity. Bitcoin assumes that the majority of nodes in its network are honest, and resorts to a majority vote mechanism for double spending avoidance, and dispute resolution. In contrast, most e-cash schemes require a centralized bank who is trusted for purposes of e-cash issuance, and double-spending detection. This greatly appeals to individuals who wish for a freely-traded currency not in control by any governments, banks, or authorities — from libertarians to drug-dealers and other underground economy proponents (note that apart from the aforementioned illegal usages, there are numerous legitimate uses as well, which will be mentioned later). In a spirit similar to the original motivation for a distributed Internet, such a purely decentralized system guarantees that no single entity, no matter how initially benevolent, can succumb to the temptation or be coerced by a government into subverting it for its own benefit.
One way to look at the relationship of trust between individual and network in a distributed and decentralized architecture is that you can trust the network if you trust the underlying, organizing principles of the network. This kind of trust works differently than trust placed on any single or central individual. 

There's a funny twist in this. While, on the one hand, it is possible to be physically present with an individual, to sit together with them, to look at them, hear them, and on the other, principles can seem much more abstract, much more removed, principles are often easier to grasp than individuals. As Gandhi's biographer, Louis Fischer, pointed out, the principle of 'an eye for an eye' can have the consequence of leaving everybody blind. Yet, as history shows, the principle of Satyagraha and non-violent, peaceful protest, central to Gandhi's efforts and a substantial influence on Martin Luther King, plays out differently. We can see into the workings of principles in a way that we often can't with individuals. This is not to say that individuals and even institutions can't embody a quality or principle that we wish to bring into the network. However, we have to see the whole of an individual before we can tell if they do indeed embody that quality or principle. We don't always have that access or that perspective on an individual.

BitCoin's architecture weaves together principles that make the most of individuals' more self-centered aspects, such as the desire for personal gain and reticence to trust one another, and puts them in service of a higher aim: a trustworthy network. In short, it lets people be themselves, mostly honest, with some conceit, some deceit, and a need to connect to something greater. This asks for a qualitatively different relationship between the individual and the network: trust the principle that organizes the network by letting people be what they are.

by leithaus at May 29, 2014 04:07 PM

May 20, 2014

Dan Bricklin

A new app from Dan and Alpha: AlphaRef Reader

For various reasons I decided that I needed an app tuned to reading reference material. I've released a new app that addresses this need to the Apple App Store and the Google Play Store: AlphaRef King James Version of the Bible. It presents the text of the Bible in a new reading environment that I created along with my daughter Adina, a UX and graphics designer. It is implemented using Alpha Anywhere and delivered as an HTML5 app in a PhoneGap wrapper.

To read the whole story, and to see a video of the app in action, read "AlphaRef Reader: Tablet-first design of an app for reading reference material" in the Writings section of my web site.

May 20, 2014 04:29 PM

May 16, 2014

Blue Sky on Mars

The Talk on Talks

Most people are scared to present at a conference, or a meetup, or at a company meeting. Public speaking is a common fear, and many find themselves at night waking up from nightmares involving a disastrous speech. All too often, people think public speaking is something to be afraid of.


Now, that said, sometimes you're stuck. You can't very well turn down the speech at your best friend's wedding now, can you? Or maybe you can because secretly you don't like your friend as much as they think you do. You meanie.

The best public speakers are as nervous as you are. They've just done it before, and they all do certain things much better than you. But you can change that. It's an extremely learnable skill.

That's what this talk is all about.



Behind-the-scenes video of how this talk was created

While building this talk about talks, I also made a video that talks about building that talk about talks.

It's half an hour long — about as long as the talk itself — and gives you a different view of how a talk is made, from planning it all out, to designing the slides, to delivering it to an audience. I'll even show you all the embarrassing things that happened to me during the talk itself.

Companion Site

I've published a gigantic mess of writing about public speaking up on

Give it a gander if you're into that.

May 16, 2014 12:00 AM

May 15, 2014

Brian Marick

How I messed up a medium-scale refactoring

Suggie is a back-end Clojure app that is responsible for maintaining eight collections of stutters. The collections live in Redis and are consumed by two front-end apps. What goes in which collection, and how, is governed by a number of business rules. For example, one kind of new stutter produces two entries in one of the collections: the first being the new stutter and the second an old one selected by a Lucene search.

The collections and business rules were added one by one. I wasn’t vigilant enough about keeping the code clean as I added them. At a point where I had a little bit of slack, I decided to spend up to two ideal days cleaning up the code (and adding one small new feature). I failed and ended up reverting about 70% of my changes.

What have I learned (or, mostly, relearned)?

Let’s start before I started:

  • It’s clear that I let the code get too messy before reacting. I should have made a smaller effort, earlier, to clean it up.

  • In general, I find that switching out of the “coding register” into the “explaining register” (talking vs. typing) helps me realize I’m going into the weeds. Because we’re a two-programmer shop, with only one of us (not me) competent at the front end, and we’re under time pressure (first big release of product 3, going for series A funding), I worked on Suggie too much without discussing my changes with Colin.

  • Relatedly, pairing would have helped. Unfortunately, Colin and I are of different editor religions - he’s vim, I’m emacs - and that has a surprisingly negative effect on pairing. We need to figure out how to do better.

As I did the refactoring, I’d say I had two major failures.


I read over the code carefully and made diagrams with circles and arrows and a paragraph on the back of each explaining what it was. That was useful. But what I under-thought was the trajectory of the refactorings: which should come first and which next, so as to provide the most “You’re going askew!” information soonest. (Alternately: get the most innocuous and obvious changes out of the way first, so that they wouldn’t distract/tempt me while I was doing the more challenging ones.)


I realized there were four design issues with this code.

  1. The terminology was out of date. (Bad names.)

  2. There was the oh-so-common problem that all the communication with Redis had gotten lumped into a single namespace (think “class”). The same code that put stutters into Redis hashes put ordered sequences of references-to-stutters into Redis sorted sets - and also put references-to-stutters into Redis plain sets. The code cried out to be separated into four different namespaces. Alone, that would have been a straightforward refactoring. But…

  3. But there was also the problem that the existing code was inefficient, in that it didn’t make good use of Redis’s pipelining. I want to be clear here: our initial move to Redis was motivated by real, measurable latency problems. And the switch to Redis was successful. But now that we were committed to Redis, I fooled myself in a particular way: “Efficiency’s good, all else being equal. We don’t know that we need pipelining here, but I see a pretty clear path toward just dropping it in during the refactoring that I’m doing anyway. So why not do it along the way?”

    Why not? Because, as it turned out, I’d have gone a lot faster if I’d first solved either problem 1 or 2 and then made the changes required to add pipelining. (That’s what I’m doing now.)

  4. Much of our Redis code is not atomic, which needs to be fixed. I decided I’d also fix that (for this app) at the same time I did everything else. As I write, that seems so obviously stupid that maybe I should find another profession. However, I convinced myself that this new refactoring would fall easily out of the pipeline refactoring (which would fall out of the rearrangement refactoring). In retrospect, I needed to think more carefully about atomicity without assuming that I really understood how it worked in Redis. But, again, I assumed I could learn that as I went.

So I mushed up many different things: renaming, moving code to the right place, introducing more pipelining, and keeping an eye out for atomicity. My brain proved too small to keep track of them. I should have sequenced them.
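To make the pipelining temptation concrete, here is a toy, self-contained sketch (plain Python; the FakeRedis class is invented for illustration and stands in for a real client such as redis-py, and none of this is the app's actual code): buffering commands in a pipeline turns N network round trips into one.

```python
# Toy model: FakeRedis just counts round trips; a real client would
# actually talk to a Redis server.
class FakeRedis:
    def __init__(self):
        self.round_trips = 0
        self.store = {}

    def hset(self, key, field, value):
        # Without pipelining, each command costs one network round trip.
        self.round_trips += 1
        self.store.setdefault(key, {})[field] = value

    def pipeline(self):
        return Pipeline(self)

class Pipeline:
    """Buffers commands locally, then sends them in a single round trip."""
    def __init__(self, client):
        self.client = client
        self.buffered = []

    def hset(self, key, field, value):
        self.buffered.append((key, field, value))

    def execute(self):
        self.client.round_trips += 1  # one trip for the whole batch
        for key, field, value in self.buffered:
            self.client.store.setdefault(key, {})[field] = value

r = FakeRedis()
for i in range(100):
    r.hset("stutter:%d" % i, "body", "...")
print(r.round_trips)   # 100 separate round trips

r2 = FakeRedis()
p = r2.pipeline()
for i in range(100):
    p.hset("stutter:%d" % i, "body", "...")
p.execute()
print(r2.round_trips)  # 1
```

Note that real pipelining also changes error handling and interacts with atomicity (in Redis, via MULTI/EXEC), which is part of why mixing it into a rename-and-reorganize refactoring multiplied the difficulty.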

In addition to all that, I noticed some other things.

  • I would have done better to spend an hour a day over many days, rather than devoting full days to the refactoring. Because I have a compulsive personality, I must be forced to take time away from a problem to make me realize exactly how far down a rathole I’ve gone. (Alternately, I need a pair to rein me in.)

  • I kept all the tests passing, and I kept the system working, but I made a crucial mistake. There was a method called add-personal-X-plus-possible-Y. (The name alone is a clue that something’s gone wrong.) It was 16 lines of if-madness. Instead of modifying it (while keeping the tests passing and keeping the system working), I kept the system working by not changing it. I added a new function that was intended to be a drop-in replacement for it - come the glorious future when everything worked. So there was no connection between “system working” and “tests passing” while I was doing the replacement. The new function could have been completely broken, but the system would keep working, because the new function wasn’t used anywhere outside the tests.

    This seems to me a rookie mistake, a variant of “throw it away and rewrite it”. But somehow I allowed myself to gradually slip into that trap.

  • I suffered a bit from relative inexperience with the full panoply of immutable/functional programming styles. What I’d written was C-style imperative code. Transforming it into object-oriented code would have been straightforward, given my familiarity with various design patterns. Figuring out how to do the equivalent transformation idiomatically in Clojure, given all the constraints I’d placed on myself, took me too long. I only really figured out how to do it after I’d pulled the Eject lever.

Here’s something that’s interesting to me. I spent many years as an independent process consultant. In my spare time, I wrote code. Because that was a part-time thing, I had a lot of leisure to put the code aside and listen to that small, still voice telling me I was going astray.

Things are different now. This real world job has only strengthened my belief in what I preached as a consultant. In particular, I believe that teams must have the discipline to go slow to get fast. And yet: I keep going too fast. These days, it’s markedly harder for me to attend to the small, still voice.

It’s an interesting problem.

by Brian Marick at May 15, 2014 11:20 PM


splicious : facebook :: protunity : linkedin

People don't really know what's gone into the development of splicious. A year before we launched the splicious crowd funding campaign, we launched Protunity as a test case to drive out the technology. One way to understand the relationship between these networks is via those old SAT-style analogies:

splicious : facebook :: protunity : linkedin 

The underlying technology (barring lots of improvement between when we launched Protunity and now) is the same. Therefore, the splicious campaign is not like other campaigns that have a vision but no actual solution. We have a solution that's already at a certain level of maturity.

We wanted to wait to bring this offering to market until we had at least that level of maturity. Additionally, we wanted to have an economic model that made sense, one that grows a self-sustaining network, free of ads and free of other corporate sponsorship. That's where the emerging cryptocurrency movement suddenly came into play. With BitCoin and other cryptocurrencies we can provide a distributed network that is self-sustaining.

Really, if you look at the numbers, in terms of capital investment, and person-years, as well as technological innovation, we've provided 99.999% of the solution. We're looking to see if there's enough interest for people to help us with 0.001%. That's what we need to roll out the splicious network.

We're confident that with even that level of engagement we can grow the network to a global alternative to the existing social messaging platforms. One that balances privacy and trust, one that values creativity and engagement, one that puts you back in charge of what you see of your Internet-communications, and who sees what you share.

by leithaus at May 15, 2014 11:14 AM

May 14, 2014


Imagine all the people...

Humans are social. We have a built-in urge to share observations, insights, thoughts and feelings. We have a need to feel received. Social media, with their ubiquity, easy access, and ease of use, meet this need relatively well. The simple feedback mechanism of clicking a “like” or “+1” button can be a powerful expression of sentiment and connection.

But, are you concerned about who can see communications meant for your friends, family, or colleagues, from your email to your FB posts?
Are you concerned that your creative output and personally identifiable information are bought and sold by social media companies, advertisers, and whoever else they sell it on to?
Is your feed overrun with ads and TMI that you can’t organize? Or worse, organized by someone else’s algorithm?

Imagine a world where social media served you. Period.

What if the online outlets for your creativity, your music, political commentary, stories, recipes and designs were also outlets for earning money?
What if they could allow you to support yourself and your friends, monetarily?
What if they could make it easy for you to play a more active role in deciding what you see and when, and who sees what you share?

Splicious is a social communications platform designed from the ground up to solve these problems.

Splicious helps you

We’re not coming to you with just a vision or a bright idea. Other teams have underestimated just how hard this problem is. The Splicious team has worked for 4.5 years to bring you this offering. We know it can be prettier. We know it can be easier to use on mobile platforms like cell phones. We know there are a million ways it can be improved. That’s all a part of rolling out the network for your use. And we would love to do that work. 

We just need to know if the world we imagine is the world you want. That’s why we’re asking for your help to fund this phase of the project. Here’s Mike Stay to tell you how we plan to roll out the network and how you can help us.

by leithaus at May 14, 2014 01:23 PM

May 12, 2014

Decyphering Glyph


Some things, one writes because one has something one wants the world to know.

Some things, one writes because, despite one’s better judgement, one can’t help it.

This is one of the latter type.

I have a recurring dream. I mean this in the most literal sense possible. I do not mean that I have an aspiration, or a hope. This is literally a dream that I’ve had, while I was asleep, multiple times. I’m not really sure why, since the apparent stimulus that this dream is responding to is many, many years in my past. Nevertheless, I have had it 3 or 4 times now.

It’s a movie trailer.

There’s a man driving a pickup truck along a highway near open plains.

Intercut are images of him driving a tractor over an apparently endless field, shot from a fixed 3rd-person perspective behind the tractor, driving towards the horizon. Close up of his face. He looks bored, resigned.

Image Credit: Ben Smith

He sees a falling star streaking down from the sky. As he watches, there’s a roaring noise, and a boom, and it crashes down into an adjacent field.

Cut to him cautiously exploring the crash site. There’s an object in the middle of the crash site, still glowing red hot. It’s a strange shape; conical, about a meter long, but with an indentation on its side and various protrusions, one of which looks like a human-sized handle.

Another cut, now he’s inside the truck again, and we can see that the object is under a tarp in the back of the truck.

Now he’s in his garage. He’s wearing a red work shirt and blue jeans. The object is hoisted up on a lift, and he’s walking around it, examining it. A prolonged sequence of him trying to cut it open with a plasma arc cutter, then wiping off the smudge to reveal it’s still shiny.

Finally, he lowers the lift, but as it goes towards the ground, the object beeps and stops dropping, at about waist height. It slowly rotates, swiveling to point towards the open garage door.

He walks over to the object and stands next to it, his hip near the indentation in its side, and he reaches out to grab the handle. The machine makes a powering-on whine and more protrusions start folding out of its sides; he yanks his hand back like he’s been burned, and it folds back into itself. More cautiously this time, he reaches out to grab the handle.

The machine’s tip opens like a claw, revealing a central spike surrounded by a bunch of curved radiating antennae. Sparks start shooting between the antennae and the spike, charging it. A view screen unfolds, initially displaying a targeting reticle pointed out of his garage door, then switching to text in an alien language. A grainy voice comes out of speakers in the side of the machine, initially speaking the alien language. Then the language on the screen switches to Japanese, then to French, then to German, and finally to English. In large, flashing, block letters, it reads, and the grainy voice from its speakers says, simply:


Before the man has a chance to react, the indentation in the side of the machine extrudes several parts of a metallic harness which surround his body. A thruster on the back part of the machine starts glowing a dull red color, and then the machine (with the man in tow) shoots out of his garage.

Initially, he is running furiously to keep up. He trips over a rock in the road, but the harness allows him to sort of gyroscopically spin while still attached to the machine, and then right himself again. Eventually he pulls back on the handle and he’s flying through the air, and then the protrusion on the nose of the machine shoots a beam in front of him and opens a portal, which he then flies through.

Space Harrier

The machine says “Welcome to the fantasy zone.” and then “Get ready!” again, and then we see a montage of him flying over alien landscapes, primarily plains checkered in garish primary colors, shooting giant plasma bullets out of the nose of the weapon at trees, tiny fighter jets, rocks, and basically anything that gets in his way. There are scenes of him talking to giant one-eyed mammoths, and a teaser-trailer shot of giant, horrible looking two-headed dragons.


Then I wake up.

Sega: please, please get someone to make this movie. The dream is only ever the trailer, and I really want to see how it ends.

by Glyph at May 12, 2014 01:32 AM

May 10, 2014


What do you lead with in a new offering? How about what enables us to improve our connection?

The Splicious team has wrestled with what to lead with in attempting to get their message out to more people. Recently, someone gave us some outlier feedback that we felt was well worth discussing. He suggested that the biggest differentiator of the Splicious offering was our approach to distributed identity. Of all the engagement from potential users, his is effectively the lone voice. What we've gotten back, repeatedly, from user engagement is that there's a pretty big bifurcation in the market:
  • Those that are interested in the use of cryptocurrency to increase their ability to trade with others.
  • Those that are concerned about privacy and identity on the Internet.
If our sample size is representative, the first category is many times larger than the second. The second actually decomposes into two subcategories: the hacker/cryptocommunity and the wealthy. Both groups are exceedingly small in numbers. The average person is happy to make wry jokes about the Snowden revelations on FaceBook, but that is more or less the extent of the engagement (and implied level of concern) we are able to observe in most folks. 

Clearly, many people have an in-built urge to share their observations, insights, thoughts and feelings and to feel received through a simple feedback mechanism, an expression of sentiment, such as clicking the "like" or "+1" button. Basically, people have an urge to connect and feel connected. The social media matches this urge relatively well. 

What's especially sticky in the modern social media experience is the ratio of attention to reward. For a small output of attention (read a short post, click the like button; make a short post, check to see who's liked it) people get an enormous pay-off: they feel connected to community. We hypothesize this provides an important indicator for how to take this a little farther: make small increases on the demand for attention at the point of contact.

The feedback we've gotten from people who do engage is that extending the basic social media framework with a small delta, where the action of expressing sentiment is linked to the transfer of a small amount of money -- like a mBTC (roughly 50c) or smaller -- resonates. Once this notion lands people can see other opportunities for themselves: crowdfunding of very simple and small activities, like writing a song, or taking a photo; microfinancing a small project; and, more generally, engaging in a distributed marketplace.
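For scale, the conversion is simple arithmetic (the BTC price below is an assumed May-2014 figure, not something stated in the post):

```python
# Back-of-the-envelope only: assume a May-2014 spot price of about $450/BTC.
btc_price_usd = 450.0
mbtc_usd = 0.001 * btc_price_usd  # 1 mBTC = one thousandth of a bitcoin
print(round(mbtc_usd, 2))         # 0.45 -- hence "roughly 50c"
```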

Much of the hard work has been about:
  • making the platform truly distributed,
  • addressing privacy and security, and
  • working out a mathematically sound model that informs:
    • the programming model,
    • the architecture, and
    • the means to express, reason about, and mechanically enforce privacy and security policies.
That's not interesting to most people at this point in history. Eventually, it may be; but, to be honest, if all of that work remained as invisible to most people as how their car, or their plumbing, or their computer works, i would be perfectly happy. Right now, based on our limited engagement with the market, it seems to us that people are much more interested in how to get more from the experience of connecting. i think there's a great deal of wisdom in that, personally. We are going to have to come together as a global community if we are going to successfully face the challenges coming our way. That, in a nutshell, is why we lead with Splicious for the creative.

by leithaus at May 10, 2014 03:32 PM

May 08, 2014

Decyphering Glyph

The Report Of Our Death


Lots of folks are very excited about the Tulip project, recently released as the asyncio standard library module in Python 3.4.

This module is potentially exciting for two reasons:

  1. It provides an abstract, language-blessed, standard interface for event-driven network I/O. This means that instead of every event-loop library out there having to implement everything in terms of every other event-loop library’s ideas of how things work, each library can simply implement adapters from and to those standard interfaces. These interfaces substantially resemble the abstract interfaces that Twisted has provided for a long time, so it will be relatively easy for us to adapt to them.
  2. It provides a new, high-level coroutine scheduler, providing a slightly cleaned-up syntax (return works!) and more efficient implementation (no need for a manual trampoline, it’s built straight into the language runtime via yield from) of something like inlineCallbacks.
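A minimal, stdlib-only illustration of point 2 (the function names below are invented for the example): with PEP 380's yield from, a sub-generator’s plain return hands a value straight back to the delegating generator, so no manual trampoline is needed.

```python
# PEP 380 "yield from": a sub-generator's plain `return` becomes the
# value of the `yield from` expression in the caller.
def fetch_page():
    yield "connect"
    yield "read"
    return "<html>...</html>"       # "return works!"

def handler():
    body = yield from fetch_page()  # delegate, then resume with the return value
    yield "parsed " + body

print(list(handler()))  # ['connect', 'read', 'parsed <html>...</html>']
```

This is the mechanism Tulip’s scheduler builds on; inlineCallbacks achieves the same effect on older Pythons with a trampoline and returnValue().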

However, in their understandable enthusiasm, some observers of Tulip’s progress – links withheld to protect the guilty – have been forecasting Twisted’s inevitable death, or at least its inevitable consignment to the dustbin of “legacy code”.

At first I thought that this was just sour grapes from people who disliked Twisted for some reason or other, but then I started hearing this as a concern from users who had enjoyed using Twisted.

So let me reassure you that the idea that Twisted is going away is totally wrong. I’ll explain how.

The logic that leads to this belief seems to go like this:

Twisted is an async I/O thing, asyncio is an async I/O thing. Therefore they are the same kind of thing. I only need one kind of thing in each category of thing. Therefore I only need one of them, and the “standard” one is probably the better one to depend on. So I guess nobody will need Twisted any more!

The problem with this reasoning is that “an async I/O thing” is about as specific as “software that runs on a computer”. After all, Firefox is also “an async I/O thing” but I don’t think that anyone is forecasting the death of web browsers with the release of Python 3.4.

Which Is Better: OpenOffice or Linux?

Let’s begin with the most enduring reason that Twisted is not going anywhere any time soon. asyncio is an implementation of a transport layer and an event-loop API; it can move bytes into and out of your application, it can schedule timed calls to happen at some point in the future, and it can start and stop. It’s also an implementation of a coroutine scheduler; it can interleave apparently sequential logic with explicit yield points. There are also some experimental third-party extension modules available, including an event-driven HTTP server and client, and the community keeps building more stuff.

In other words, asyncio is a kernel for event-driven programming, with some applications starting to be developed.

Twisted is also an implementation of a transport layer and an event-loop API. It’s also got a coroutine scheduler, in the form of inlineCallbacks.

Twisted is also a production-quality event-driven HTTP server and client including its own event-driven templating engine, with third-party HTTP addons including server microframeworks, high-level client tools, API construction kits, robust, two-way browser communication, and automation for usually complex advanced security features.

Twisted is also an SSH client, both an API and a command-line replacement for OpenSSH. It’s also an SSH server which I have heard some people think is OK to use in production. Again, the SSH server is both an API and again as a daemon replacement for OpenSSH. (I'd say "drop-in replacement" except that neither the client nor server can parse OpenSSH configuration files. Yet.)

Twisted also has a native, symmetric event-driven message-passing protocol designed to be easy to implement in other languages and environments, making it incredibly easy to develop a custom protocol to propagate real-time events through multiple components; clients, servers, embedded devices, even web browsers.

Twisted is also a chat server you can deploy with one shell command. Twisted is also a construction kit for IRC bots. Twisted is also an XMPP client and server library. Twisted is also a DNS server and event-driven DNS client. Twisted is also a multi-protocol integrated authentication API. Twisted is also a pluggable system for creating transports to allow for third-party transport components to do things like allow you to run as a TOR hidden service with a single command-line switch and no modifications to your code.

Twisted is also a system for processing streams of geolocation data including from real hardware devices, via serial port support.

Twisted also natively implements GUI integration support for the Mac, Windows, and Linux.

I could go on.

If I were to include what third-party modules are available as well, I could go on at some considerable length.

The point is, while Twisted also has an existing kernel – largely compatible, at a conceptual level at least, with the way asyncio was designed – it also has a huge suite of functionality, both libraries and applications. Twisted is OpenOffice to asyncio’s Linux.

Of course this metaphor isn’t perfect. Of course the nascent asyncio community will come to supplant some of these things with other third-party tools. Of course there will be some duplication and some competing projects. That’s normal, and even healthy. But this is not a winner-take-all existential Malthusian competition. Being able to leverage the existing functionality within the Twisted and Tornado ecosystems – and vice versa, allowing those ecosystems to leverage new things written with asyncio – was not an accident, it was an explicit, documented design goal of asyncio.

Now And Later

Python 3 is the future of Python.

While Twisted still has a ways to go to finish porting to Python 3, you will be able to pip install the portions that already work as of the upcoming 14.0 release (which should be out any day now). So contrary to some incorrect impressions I’ve heard, the Twisted team is working to support that future.

(One of the odder things that I’ve heard people say about asyncio is that now that Python 3 has asyncio, porting Twisted is now unnecessary. I'm not sure that Twisted is necessary per se – our planet has been orbiting that cursed orb for over 4 billion years without Twisted’s help – but if you want to create an XMPP to IMAP gateway in Python it's not clear to me how having just asyncio is going to help.)

However, while Python 3 may be the future of Python, right now it is sadly just the future, and not the present.

If you want to use asyncio today, that means foregoing the significant performance benefits of pypy. (Even the beta releases of pypy3, which still routinely segfault for me, only support the language version 3.2, so no “yield from” syntax.) It means cutting yourself off from a significant (albeit gradually diminishing) subset of available Python libraries.

You could use the Python 2 backport of Tulip, Trollius, but since idiomatic Tulip code relies heavily on the new yield from syntax, it’s possible to write code that works on both, but not idiomatically. Trollius actually works on Python 3 as well, but then you miss out on one of the real marquee features of Tulip.

Also, while Twisted has a strict compatibility policy, asyncio is still marked as having provisional status, meaning that unlike the rest of the standard library, its API may change incompatibly in the next version of Python. While it’s unlikely that this will mean major changes for asyncio, since Python 3.4 was just released, it will be in this status for at least the next 18 months, until Python 3.5 arrives.

As opposed to the huge laundry list of functionality above, all of these reasons will eventually be invalidated, hopefully sooner rather than later; if these were the only reasons that Twisted were going to stick around, I would definitely be worried. However, they’re still reasons why today, even if you only need the pieces of an asynchronous I/O system that the new asyncio module offers, you still might want to choose Twisted’s core event loop APIs. Keep in mind that using Twisted today doesn’t cut you off from using asyncio in the future: far from it, it makes it likely that you will be able to easily integrate whatever new asyncio code you write once you adopt it. Twisted’s goal, as Laurens van Houtven eloquently explained it this year in a PyCon talk, is to work with absolutely everything, and that very definitely includes asyncio and Python 3.

My Own Feelings

I feel like asyncio is a step forward for Python, and, despite the dire consequences some people seemed to expect, a tremendous potential benefit to Twisted.

For years, while we – the Twisted team – were trying to build the “engine of your Internet”, we were also having to make a constant, tedious sales pitch for the event-driven model of programming. Even today, we’re still stuck writing rambling, digressive rants explaining why you might not want three threads for every socket, giving conference talks where we try to trick the audience into writing a callback, and trying to explain basic stuff about networking protocols to an unreceptive, frustrated audience.

This audience was unreceptive because the broader Python community has been less than excited about event-driven networking and cooperative task coordination in general. It’s a vicious cycle: programmers think events look “unpythonic”, so they write their code to block. Other programmers then just want to make use of the libraries suitable to their task, and they find ones which couple (blocking) I/O together with basic data-processing tasks like parsing.

Oddly enough, I noticed a drop in the frequency that I needed to have this sort of argument once node.js started to gain some traction. Once server-side Python programmers started to hear all the time about how writing callbacks wasn't a totally crazy thing to be doing on the server, there was a whole other community to answer their questions about why that was.

With the advent of asyncio, there is functionality available in the standard library to make event-driven implementations of things immediately useful. Perhaps even more important than this functionality, there is guidance on how to make your own event-driven stuff. Every module that is written using asyncio rather than io is a module that at least can be made to work natively within Twisted without rewriting or monkeypatching it.

In other words, this has shifted the burden of arguing that event-driven programming is a worthwhile thing to do at all from Twisted to a module in the language’s core.

While it’ll be quite a while before most Python programmers are able to use asyncio on a day-to-day basis, its mere existence justifies the conceptual basis of Twisted to our core constituency of Python programmers who want to put an object on a network. Which, in turn, means that we can dedicate more of our energy to doing cool stuff with Twisted, and we can dedicate more of the time we spend educating people to explaining how to do cool things with all the crazy features Twisted provides rather than explaining why you would even want to write all these weird callbacks in the first place.

So Tulip is a good thing for Python, a good thing for Twisted, I’m glad it exists, and it doesn't make me worried at all about Twisted’s future.

Quite the opposite, in fact.

by Glyph at May 08, 2014 08:00 AM

May 06, 2014



This is one source of documentation for TiddlyWeb. It is in a constant state of not yet done. If you would like to help with the documentation please leave a comment with your tiddlyspace username.

TiddlyWeb is an open source project providing a Python-based web service for storing and accessing tiddlers on the web. With an extensive set of plugins and a flexible architecture it can be used for all kinds of things including TiddlyWebWiki, TiddlySpace and TiddlyHoster.

TiddlyWeb is a production of UnaMesa, Osmosoft and Peermore Ltd. under a BSD License.

Copyright 2008-2013 UnaMesa Association About

by cdent at May 06, 2014 03:21 PM


Splicious: the role of cryptocurrency in our transition to a new economics

Yesterday, Frank Sheldon, author of Far from the Sea We Know, sent the Splicious team an article about Dogecoin as an alternative to Bitcoin. Mike Stay, one of the developers on splicious, had also brought up the possibility of using Dogecoin for many of the reasons indicated in the article Frank sent.
Our real position at splicious is that cryptocurrency, as a general category of technology, is going to be a big part of the transition to the new economics. As such, splicious will support not just BitCoin (BTC) but also others, especially those that attend to privacy and security, like ZeroCoin, and to ease of use, like Dogecoin.

We ended up choosing BTC as the first cryptocurrency we support largely because of its visibility. Specifically, we felt that if Zuckerberg was making very public announcements about BitCoin, and Reddit was directly supporting BitCoin, then BTC was the best first proxy for the phenomenon. Perhaps this was a lapse in moral judgment on our part.

The challenge we face is that there is so much to explain in our offering -- from the value of the network being distributed; to the value of distributed identity; to the value of the underlying mathematics; to the value of reciprocal maintenance, and how existing capital instruments are now failing to meet this, their designated purpose; to the role of cryptocurrency in being able to realize at least some aspects of this in a functioning Internet-based technology -- that we have to cut something somewhere. Explaining what Dogecoin is, how it compares to BTC, and how cryptocurrency, in general, is part of a much larger transition we are undergoing, as a culture, felt like a step too far.

Instead, we were hoping to use a little metonymy, both in our message and in our initial coding efforts. On the other hand, much of what is happening with Dogecoin feels directionally correct. We're very grateful to Frank for reminding us of that! If you like what you're reading, head on over to our crowd funding site and contribute today!

by leithaus at May 06, 2014 02:24 PM

May 05, 2014


Splicious - your network, your data

It's been a very long time since i posted here. Part of the reason is that for the last four years i've been working with increasing intensity on an exciting project. It's been really fun working on splicious and especially fun to think about what it might mean for communications on the Internet; but, until it reached a certain level of maturity, none of us on the project felt we could really talk very publicly about what we were doing.  Now, we feel we're ready to engage.

Splicious is a distributed social communications platform that provides three core benefits.
We feel this directly addresses two social trends we have been observing with some alarm.
Knowing that the corporate exchange of personal information (including personally identifiable information) is a market worth more than $9B, we feel it should be possible to turn the whole model on its head. We put a novel social media UI on the front end of our communications platform. This UI lets people decide for themselves how much that information is worth to them and, using BitCoin, lets them directly support people who provide them with information they find valuable.

Now we're at the most exciting stage of the project. We've invested 4.5 years of our lives into this effort. The collective capital investment exceeds $2M. Now we need to see if we're on the mark: if there are enough people who feel that there is a problem worth addressing here, and that splicious comes close enough to the mark that a sufficient number are willing to get behind this solution. That's really why we're running a crowd funding campaign on RocketHub. When enough of us come together to work on something we believe in, that's when change happens.

We've structured the ask the way we have because of the high quality bar that splicious has to meet. Because users can search all the communications they have access to, and can send and receive Bitcoin as support for sharing content, they face much greater risk if the system fails to deliver as promised. This is why we have to roll the service out in a very careful, step-by-step fashion, and why we are only allowing 1000 users initially. That's also why we are asking for new crowd funding at each 10-fold increase in the number of users.

by leithaus at May 05, 2014 11:59 AM

May 02, 2014

Greg Linden

More quick links

More of what has caught my attention lately:
  • Excellent history of Google's love-hate relationship with management (hint: it's mostly hate) ([1])

  • Excellent BBC documentary on, plenty of fun historical tidbits, quite critical in parts, very well done ([1])

  • Excellent charts on the history of wealth concisely summarizing three centuries ([1])

  • Compelling example of virtual tourism ([1])

  • Visually stunning math concepts which are easy to explain ([1])

  • "The most interesting things are happening at the intersection of two fields" ([1])

  • "The largest driver of Facebook’s mobile revenue is app-install ads ... largely purchased by free-to-play game publishers such as King (maker of Candy Crush Saga) and Big Fish Games (the Bejeweled series) ... to target the small percentage of players who will spend hundreds of dollars on in-app purchases." ([1] [2])

  • "When you subtract out the value of Yahoo's stake in Alibaba, the rest of Yahoo is worthless. Indeed, it has negative worth." ([1] [2] [3])

  • "Newspaper print ad revenue has declined 73% in 15 years" ([1])

  • "Microsoft is backtracking on practically every part of the Windows 8 interface that developers abhorred" ([1] [2] [3] [4])

  • Survivor bias in perceptions of startup life and what might be closer to reality: "It's a decision to throw away a large chunk of your precious youth at a venture which is almost certain to fail" ([1] [2])

  • "Many hospitals in the US still use Windows XP on workstations and healthcare devices" ([1])

  • "It's no longer realistic to think that routers, DVRs, or other Internet-connected home appliances aren't worth an attacker's time ... poorly designed 'Internet of Things' devices ... [are] particularly easy to hack" ([1])

  • Newegg exec on patent trolls: "Why those asshats continue to trade at ANY value, I do not know. The world would be a better place without them." ([1])

  • I still don't understand why more tech companies don't provide free food ([1] [2])

  • Humor: "The pain of being the only engineer in a business meeting" ([1])

by Greg Linden at May 02, 2014 02:25 PM

Blaine Buxton

Pharo 3.0 Projects - JavaLoader and iTunes Parser

I've been inspired recently with the new release of Pharo 3.0. I updated my JavaLoader project and added a brand new project called iTunes Parser. The JavaLoader project is an old project that can load serialized java objects and Java classes. The iTunes Parser can load the xml iTunes library file. Check them out if you use Pharo 3.0 and give me any feedback.

by Blaine Buxton at May 02, 2014 12:58 PM

April 28, 2014

Bret Victor

one-day course on May 6

Edward Tufte, Mike Bostock, Jonathan Corum, and I will be teaching a class on data visualization.

by Bret Victor at April 28, 2014 06:48 AM

April 26, 2014


Trilema 2014

I had the honour of being invited to Trilema 2014 in Timișoara, where I exhibited a few completed gadgets.

Left: Mircea Popescu - friend, my project co-author, host of the party. RNG is running.

Popescu - helping to tell the story behind the exhibits. And happily philosophizing about many other things.

Audience - other guests, who can name themselves if they feel like it.

More pictures at Mr. P’s site.

by Stanislav at April 26, 2014 10:58 PM

April 11, 2014

Simplest Thing (Bill Seitz)

The SimplestThing isn't necessarily an Easy thing


Imagine you took your day, which has limited time, and stopped doing all the little things.

Imagine you focused on the hard, effective things. You could spend 10 minutes meditating, an hour doing the hard, important tasks that improve your career or business. Another 20 minutes having a difficult conversation, another 20 improving your finances. Another 30 doing two heavy barbell lifts. Another 30 minutes preparing whole foods for your day’s meals.

That’s less than 3 hours of your day, but you’d improve productivity, your business, your finances, your relationship, mindfulness, your health and appearance.

April 11, 2014 02:46 PM

April 02, 2014

Greg Linden

Quick links

What has caught my attention lately:
  • Dilbert on A/B testing: "Bend to my will and choose the orange button, you mindless click-puppets!" ([1])

  • Major performance increases on smartphones are disappearing, which will slow sales and reduce revenues ([1] [2])

  • Price war in cloud services ([1])

  • On Facebook buying Oculus: "The dominant reaction to the move could be summed up in three letters: WTF" ([1] [2])

  • Remember this? "Companies could cause their stock prices to increase by simply adding an 'e-' prefix to their name or a '.com' to the end, which one author called 'prefix investing'" ([1] [2])

  • VCs favor pitches from attractive men ([1] [2])

  • "We've known for a while that email providers could look into your inbox, but the assumption was that they wouldn't" ([1] [2])

  • Bad new trend: Apps that covertly mine Bitcoins for someone else ([1] [2])

  • More companies should do this: Run large scale surveys of employees to discover what makes people happy and productive ([1])

  • Combining dissimilar fields is hard, but can also lead to discovering lots of low hanging fruit (at least from where you are standing) that no one else has picked ([1])

  • Good idea from a recent Google paper: Mine the web to build up knowledge of objects that are likely and unlikely to co-occur, then use that to accept or reject candidates during object recognition ([1] [2])

  • Cool throwback idea from a recent MSR paper: Old school circuit-switched networks in the data center using cheap commodity FPGAs ([1] [2])

  • "There doesn't need to be a protective shell around our researchers where they think great thoughts" ([1] [2])

  • Surprisingly compelling results: Generate likely 3D models of facial appearance solely from DNA ([1] [2])

  • Stem cells used to grow strong muscles that repair themselves when damaged ([1])

  • The ancient Greeks and Persians had to occasionally fight off lions ([1] [2])

  • Great visualization of conditional probability ([1])

  • Galleries of hilariously useless items ([1] [2])

by Greg Linden at April 02, 2014 09:42 AM

March 27, 2014

Decyphering Glyph

Panopticon Rift

Greg Price expresses a common opinion here:

I myself had a similar reaction; despite not being particularly invested in VR specifically (I was not an Oculus backer) I felt pretty negatively about the fact that Facebook was the acquirer. I wasn't quite sure why, at first, and after some reflection I'd like to share my thoughts with you on both why this is a bad thing and also why gamers, in particular, were disturbed by it.

The Oculus Rift really captured the imagination of the gaming community's intelligentsia. John Carmack's imprimatur alone was enough to get people interested, but the real power of the Oculus was to finally deliver on the promise of all those unbelievably clunky virtual reality headsets that we've played around with at one time or another.

Virtual Boy

The promise of Virtual Reality is, of course, to transport us so completely to another place and time that we cease to even be aware of the real world. It is that experience, that complete and overwhelming sense of being in an imagined place, that many gamers are looking for when they sit down in front of an existing game. It is the aspiration to that experience that makes "immersive" such a high compliment in game criticism.

Personally, immersion in a virtual world was the beginning of my real interest in computers. I've never been the kind of gamer who felt the need to be intensely competitive. I didn't really care about rules and mechanics that much, at least not for their own sake. In fact, the term "gamer" is a bit of a misnomer - I'm more a connoisseur of interactive experiences. The very first "game" I remember really enjoying was Zork. Although Zork is a goal-directed game you can "win", that didn't interest me. Instead, I enjoyed wandering around the environments it provided, and experimenting with different actions to see what the computer would do.

Computer games are not really just one thing. The same term, "games", encompasses wildly divergent experiences including computerized Solitaire, Silent Hill, Dyad, and Myst. Nevertheless, I think that pursuit of immersion – on really, fully, presently paying attention to an interactive experience – is a primary reason that "gamers" feel the need to distinguish themselves (ourselves?) from the casual computing public. Obviously, the fans of Oculus are among those most focused on immersive experiences.

Gamers feel the need to set themselves apart because computing today is practically defined by a torrential cascade of things that we're only barely paying attention to. What makes computing "today" different than the computing of yesteryear is that computing used to be about thinking, and now it's about communication.

The advent of social media has left us not only focused on communication, but focused on constant, ephemeral, shallow communication. This is not an accident. In our present economy there is, after all, no such thing as a "social media business" (or, indeed, a "search engine business"); there are only ad agencies.

The purveyors of social media need you to be engaged enough with your friends that you won't leave their sites, so there is some level of entertainment or interest they must bring to their transactions with you, of course; but they don't want you to be so engaged that a distracting advertisement would be obviously crass and inappropriate. Lighthearted banter, pictures of shiba inus, and shallow gossip are fantastic fodder for this. Less so is long, soul-searching writing that explains and examines your feelings.

An ad for cat food might seem fine if you're chuckling at a picture of a dog in a hoodie saying "wow, very meme, such meta". It's less likely to drive you through to the terminus of that purchase conversion funnel if you're intently focused on supporting a friend who is explaining to you how a cancer scare drove home how they're doing nothing with their life, or how your friend's sibling has run away from home and your friend doesn't know if they're safe.

Even if you're a highly social extrovert, the dominant emotion that this torrent of ephemeral communication produces is, at best, sarcastic amusement. More likely, it produces constant anxiety. We do not experience our better selves when we're not really paying focused attention to anything. As Community Season 5 Episode 8 recently put it, somewhat more bluntly: "Mark Zuckerberg is Fidel Castro in flip-flops."

I think that despite all the other reasons we're all annoyed - the feelings of betrayal around the Kickstarter, protestations of Facebook's creepiness, and so on - the root of the anger around the Facebook acquisition is the sense that this technology with so much potential to reverse the Balkanization of our attention has now been put directly into the hands of those who created the problem in the first place.

So now, instead of looking forward to a technology that will allow us to visit a world of pure imagination, we eagerly await something that will shove distracting notifications and annoying advertisements literally an inch away from our eyeballs. Probably after charging us several hundred dollars for the privilege.

It seems to me that's a pretty clear justification for a few hundred negative reddit comments.

by Glyph at March 27, 2014 09:25 AM

March 20, 2014

Simplest Thing (Bill Seitz)

Ramit Sethi's "I Will Teach You to be Rich"

Sethi’s target audience (and thus context and voice target) is single twenty-somethings. So this might feel wrong for you. But a lot of his content is still appropriate for parents.

March 20, 2014 02:52 PM

March 18, 2014

Ian Bicking

How We Use GitHub Issues To Organize a Project

On a couple projects I’ve used GitHub Issues as the primary technique to organize the project, and I’ve generally enjoyed it, but it took some playing around to come up with a process. You, reader, may also like this process, so I will describe it for you.

GitHub Issues has a slim set of features:

  • Issues, of course
  • Issues can be assigned
  • Issues belong to zero or one milestone
  • Issues can have any number of tags

This isn’t a lot to work with, so we had to play around a bit to figure out a good pattern.


We decided there were only a couple of things we actually wanted to track:

  1. What are we doing right now?
  2. What aren’t we doing right now?
  3. What aren’t we sure about?

We have to regularly reevaluate where issues fit into these categories, so we break category 2 into:

2a. Stuff we’ll probably do soon

2b. Stuff we probably won’t do soon

We tried using labels for this but it was no good. There’s only a small number of queries you can do with labels, foiling any clever ideas. Instead we have milestones:

Stuff we are doing right now: this is the “main” milestone. We give it a name (like Alpha 2 or Strawberry Rhubarb Pie) and we write down what we are trying to accomplish with the milestone. We create a new milestone when we are ready for the next iteration.

Stuff we’ll probably do soon: this is a standing “Next Tasks” milestone. We never change or rename this milestone.

Stuff we probably won’t do soon: this is a standing “Blue Sky” milestone. We refer to these tickets and sometimes look through them, but they are easy to ignore, somewhat intentionally ignored.

What aren’t we sure about?: issues with no milestone.
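The four buckets above can be sketched as a tiny function over issue data. This is a hypothetical illustration, not part of the original workflow tooling; only the milestone names ("Next Tasks", "Blue Sky", and a named iteration like "Alpha 2") come from the post.

```python
# Hypothetical sketch of the triage buckets described above.
# An "issue" here is just a dict, like those returned by an issue tracker.

NEXT = "Next Tasks"    # standing milestone: stuff we'll probably do soon
BLUE_SKY = "Blue Sky"  # standing milestone: stuff we probably won't do soon

def triage_bucket(issue):
    """Map an issue to one of the four categories from the post."""
    milestone = issue.get("milestone")
    if milestone is None:
        return "not sure yet"        # no milestone: needs a decision
    if milestone == NEXT:
        return "probably soon"
    if milestone == BLUE_SKY:
        return "probably not soon"
    return "doing right now"         # any named iteration milestone

issues = [
    {"title": "Fix login bug", "milestone": "Alpha 2"},
    {"title": "Dark mode", "milestone": "Blue Sky"},
    {"title": "Decide target market", "milestone": None},
]
for issue in issues:
    print(issue["title"], "->", triage_bucket(issue))
```

The point of the sketch is that the scheme needs no taxonomy: one current milestone plus two standing ones covers every state, and "no milestone" is itself a meaningful category.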

We use a permanent “Next Tasks” milestone (as opposed to renaming it to “Alpha 3” or actual-next-iteration milestone) because we don’t want to presume or default to including something in the real next iteration. When we’re ready to start planning the next iteration we’ll create a new milestone, and only deliberately move things into that milestone.

The Triage Meeting

We use the triage process to organize many of our meetings. The issues give us our first run at an agenda.

“Needs Discussion”

We have a label: needs discussion. We start each meeting by querying everything (regardless of milestone) that has that label, and discussing each item. Once we’re done discussing we usually remove the label, but sometimes we can’t decide whatever it was we wanted to decide, so we leave the label there, pushing the item to the next meeting.

It’s helpful to use “needs discussion” liberally. It might be for some big things, like we add a ticket “Decide who our target market is” — that’s kind of a big deal, and we might keep adding notes over the course of weeks. But it might just be a small issue where there seems to be some confusion or an open question.

These are small team meetings, so we aren’t trying to use this to close off discussion or restrict the agenda, and anyone can bring up a topic at any time. But if you want to be sure to talk about something then opening and labeling a ticket is as good a way as anything. Also the issues become our meeting notes (they aren’t date-organized meeting notes, but I seldom want date-organized meeting notes).

Issues without a milestone

GitHub lets you query all issues that have no milestone. It doesn’t let you query issues without a particular label, which made our label-based attempts at organization unproductive.

So next in the meeting we go through each item without a milestone, oldest first. Sometimes we skim, and if someone says “next tasks?” then usually the answer is “yes”.
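The same "no milestone" query is available outside the web UI: GitHub's REST API accepts `milestone=none` as a filter when listing a repository's issues. A minimal sketch that just builds the query URL, with oldest-first ordering to match the meeting flow (the owner and repo names are placeholders):

```python
# Sketch: building a GitHub REST API query for open issues
# that have no milestone, oldest first.
from urllib.parse import urlencode

def no_milestone_query(owner, repo):
    """Return the API URL for open, milestone-less issues, oldest first."""
    params = {
        "milestone": "none",   # only issues with no milestone
        "state": "open",
        "sort": "created",
        "direction": "asc",    # oldest first, as in the triage meeting
    }
    return f"https://api.github.com/repos/{owner}/{repo}/issues?{urlencode(params)}"

print(no_milestone_query("example-org", "example-repo"))
```

Fetching that URL (with an auth token for private repositories) returns the same list the meeting walks through, which makes it easy to script reminders or dashboards around the triage backlog.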

Outside of a meeting if any of us sees something in Next Tasks or Blue Sky that we should do sooner, that person clears the milestone. It’s often better to clear the milestone than to just assign it to the current iteration, as it brings it in front of the entire team.

An exception is when someone breaks a big issue down into smaller tasks, or creates a dependent bug — then whoever does that usually assigns the milestone at the same time.

Starting a new iteration

When we’re ready to start a new iteration we first create a new milestone, and agree on our goal.

Sometimes if we’re not sure what our goal for the next iteration is we’ll create a ticket, “decide on goal for the next iteration” (and mark it needs-discussion of course).

We’ll go through the meeting as normal, then look at any open issues in the previous iteration. It’s pretty common that we move these issues to “Next Tasks” instead of the next iteration. Issues that didn’t get fixed often turned out to not be as important as we thought.

We sometimes go through Next Tasks, but it can also take too long to do that together. Part of me thinks we should slog through anyway, and use that as a chance to clean up Next Tasks — move stuff to Blue Sky, or close issues we no longer plan to address at all, or check for things that have already been fixed.

But instead one of us will often do an initial triage on Next Tasks, clearing the milestone for anything that might be worth looking at again. Also it’s a chance to close issues and find duplicates. It’s good to err on the side of clearing the milestone. It’s best not to assign a task directly to the next iteration because even talking about a ticket for a moment is useful.

Blue Sky, unfortunately, is a bit of a dumping ground. The only way issues emerge is when we search and happen upon something that was put into Blue Sky. Arguably we shouldn’t create a bucket for things we don’t want to pay attention to. But the issue list is also an idea collection area, and I like that. I’d like it if we had a process to recover things from Blue Sky.


Assignment

Sometimes we assign all the tickets in the current iteration. The primary purpose of assigning all the tickets is to identify the issues that no one is planning to address (and fix that).

I haven’t had a lot of luck using assignments for more than this. Something I wish I could do with an assignment is hand a task off to someone — like sometimes I can only finish half of an issue and I need someone else to finish the work (or vice versa). But unless that person is carefully watching what is assigned to them this won’t accomplish what I want. So I have to change the assignment and leave a comment. This leads to a lot of out-of-date chatter in the comments.

Generally I am unhappy with the notifications that GitHub provides. Are there third party products that can help here? Also it’s hard to know how someone else’s notifications are configured, so I don’t know what will trigger a response. Most people will occasionally catch some updates, leading to a false sense of security. (It would be great if I saw a list of everyone who was notified when I created or changed an issue.)

We have at times tried to use assignments to “claim” a ticket. I.e., use it as a declaration of intent to avoid two people working on the same thing. I did not find this productive, and it was hard to maintain.

In my Hotdish fantasy world I’d know who else in my team had looked at the ticket and when, and I could get a sense of what people were thinking about from that.

Maintaining the tickets

It’s important that tickets have proper titles. Sometimes I have left bad titles in place and it would cause repeated confusion — constantly asking myself and other people, what was this ticket about? GitHub has editable titles and you should use them. Editing the main ticket description for clear mistakes (broken links, s/now/not/, etc) is also useful.

Still, issues have a limited lifetime. When the main description of the issue is no longer what you intend to implement it’s time to open a new issue, and close the original with a reference to the new issue. Long comment threads are not useful. Comments should indicate additions and clarifications, but when a comment changes the goal of the issue it’s too easy to skim over. Also when a debate happens in the comments and is resolved, it’s hard to separate the resolution from the debate, and a new ticket fixes that.

Sometimes there’s a collection of work that goes together. We tried two approaches, but neither stuck:

Create a label for the group of issues: in this model I might create a webrtc label and label a bunch of issues. This would help me figure out just how much work was left on that area and what bugs I should keep in mind as I look at a particular area of code, and maybe help find bugs or at least be sure I wasn’t forgetting about something.

It’s easy to think that you should start building a taxonomy of bugs. The default GitHub labels imply something like this (like “bug”, “duplicate”, “enhancement”). I would sometimes find myself using these just because they are there, but would always ask myself: why did I bother? Then I started deleting these labels to remove the temptation. I want only actionable labels.

An aside: the labels duplicate, wontfix, and invalid are (a) unnecessary, and (b) socially dangerous. If something is a duplicate you can close it with the comment “Dup of #123”. Often that’s not exactly what happened though, you might say “rendered moot by #123” or “once #123 is fixed this won’t be an issue” or something specific. And wontfix is the worst, it’s like a way of telling someone to fuck off, but that you don’t even care enough to actually tell them why they should fuck off. If it’s a team member they won’t take offense, but it’s easy to fire it at some unwitting member of the public. Take the time to close the bug with a comment about why you won’t fix it. Invalid is like the vague passive aggressive combination of the other two. Maybe it’s because we like to imagine we’ll do some kind of reporting on these statuses, but I find myself pretty bored by history.

Create a master issue: in this case you create an issue that links to all the sub-issues that need to be tackled.

To do this you can’t just use backlinks (the link created whenever one issue references another): all backlinks look the same, and you can’t tell if a bug that references the master is a dependency of the master, or depends on the master, or just mentions it in passing.

Instead we list all the bugs in the main description of the master issue, and edit as necessary. You can use [ ] to make the bug list a checkable list. Unfortunately issue links don’t get crossed out when they are closed (but they should, that would be a very helpful improvement).
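A master issue body using that checkable-list syntax might look like the following. The issue numbers and task names here are made up for illustration; only the `- [ ]` syntax and the webrtc theme come from the post.

```markdown
Tracking issue for the WebRTC work.

- [x] #101 Signaling server handshake
- [ ] #102 ICE candidate exchange
- [ ] #103 Fall back gracefully when the camera is unavailable
```

GitHub renders each `- [ ]` as a live checkbox in the issue description, and `- [x]` as already checked, so the master issue doubles as a progress bar for the group of sub-issues.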

The master ticket works okay but feels like it requires a lot of manual record keeping, and it’s easy for your master issue to get out of sync with the sub-issues. Also as you add or remove items from the list of dependent bugs you have to edit the main ticket description (anything in the comments would be too easy to lose) and there aren’t notifications for those edits.

The result?

I’ve only tried this approach with a small dedicated team. It wouldn’t match the less consistent rhythm of an open source project, and I have no idea how it would work with a larger team (maybe okay?).

Some attributes are notably and deliberately missing: priorities, severities, and time estimates. We played with these and I never knew what to do with them. There’s no single equation that determines what you should work on, and trying to create such an equation seems pointless — you probably won’t include everything you need to include, and if you do then it’s unlikely that all parameters are filled in accurately and so you can’t trust the results.

The goals, for me and the team, are to keep things somewhat organized, to remember important things, and to come to a shared understanding of what we’re trying to accomplish and how. Because of that last point some parts of the process deliberately force conversation where a tighter process wouldn’t need it. Specifically I don’t want important things expressed through labels because there’s no opportunity to discuss labels; they just appear and disappear. Issue trackers that encourage complex taxonomies of bugs make me worry that the taxonomy will be used to assert things about process that need to be communicated directly and explicitly and via a two-way channel.

GitHub’s permissive approach to editing (titles, descriptions, and comments) is reflective of the kind of process I also want to support: everyone should be able to do everything in the service of the project. It’s better to give people more power: it’s actually helpful if people can overreach because it is an opportunity to establish where the limits really are and what purpose those limits have. Most tools have a strict append-only approach which I do not like (though it does make notifications easier, and notifications are my greatest frustration with GitHub Issues).

From your experience with GitHub Issues, do you have ideas to add to this?

by Ian Bicking at March 18, 2014 09:26 AM

March 14, 2014

Blue Sky on Mars

A Day of Communication at GitHub

I've done a lot of posts and talks about how we work together at GitHub. The high-level discussion about how we manage to operate without managers in a 60% remote, 240-employee company is pretty interesting to me.

Just like code, though, sometimes the high-level is too high-level. Just show me the code, dammit. Sure, you work asynchronously, but what does that actually mean?

So, here's a look at most of the communication that happened at GitHub on one random recent day: February 4, 2014.


Chat

The vast majority of everything we do happens in chat. We feel that reading text is better communication than having meetings, so we use chat a lot. You'll drop into chat for everything from deploying, to getting help with your health insurance questions, to pasting your best animated gifs, to collaborating and troubleshooting bugs in real-time with your entire team.

GitHub's custom chat client

  • We still use (and love) Campfire.
  • We use 185 rooms at the moment.
  • We generated 29,168 lines of text on February 4.
  • Of that, Hubot accounted for 13,462 lines.
  • 468 images were pasted in.
  • Our top trafficked room that day was The .com Room, which is where we work on and deploy the main application. It saw 3,896 lines of conversation.
  • "Fuck" was said 84 times that day, with Hubot leading the pack with 14 utterances (our robot has a bit of a potty mouth).
  • We deployed various apps and services 544 times.


Team

Team is an internal app that we've built for high-level communication in the company. What are people working on today? or What shipped today? are questions that Team hopes to help answer. Think of it as an internal Twitter and blog for other employees in the company.

GitHub's Team app

  • Twelve employees posted twelve status updates. Some examples of what a status update can look like:
  • Jessica Lord posted a status about a Node module she wrote:
  • Brandon Keepers had been working on some of our open source projects and posted about shutting down Grit, our Ruby library to access Git, in favor of pointing users towards rugged, our libgit2 Ruby library:
  • Dirkjan Bussink announced that he shipped some timezone improvements across (namely tooltips, graphs, and merge commits switched over to rendering in your current timezones):
  • Wynn Netherland posted a photo from the assignment his daughter did in class. Octocats and awwwwwwws were had:

As you can see, it's a mix. That's the point. We want some of this to be about work, of course, but sometimes it's just nice to hear about what our coworkers are working on, or what their families are up to, or just have a quick conversation about something that happened in the industry that day. None of this is mandatory; it's just a way to get a handle on what's going on in the company.


Email

Email is totally different per person, since we do so much through and everyone can set up their notifications differently.

Something I did want to point out is that there was only one email sent out to the entire company at large: it was an email about next summer's all-hands summit.

Even that one email is fairly uncommon; we tend to use Team and more for announcements, specifically because we can limit them by team (i.e., mentioning @github/designers or just a single @holman in an issue, for example). Recipients can also mute the thread once they're no longer interested in the discussion. Emails sent to everyone in the company have a tendency to turn into a disaster, where everyone replies-to-all and the train falls off the track. We avoid that by using different tools that let us have more control over the discussion.


GitHub

It's no surprise that, at GitHub, we use GitHub to build GitHub. We dogfood our product constantly. We're also in the unique position of being able to be fully certain that every single employee has a GitHub account, which means we can use GitHub for everything from code to press releases to employee equipment provisioning to discussion of techno music.

We use GitHub a lot. Here's how:

  • We pushed 1,165 times to repositories under the @github organization. Each of these pushes can include multiple commits, so we probably ended up committing a bunch of stuff.
  • We opened 320 new issues across the @github organization account.
  • We created 186 new pull requests across the @github organization account. 145 of those were merged that same day.
  • We created one new repository under the @github organization account.

Your tools make the difference

So that's how we communicate. After the people we hire, I think the next most important aspect of our communication is the tools we use on a daily basis. If a particular tool doesn't work for us, we'll move on to something else or build our own.

It's worth thinking about this stuff frequently. Are you still working well together today, even though you're double the size you were six months ago? Are you happy communicating with your team, or does it kind of make people angry? Angry's not cool. It's not the eighties anymore. It's the nineties, and all this matters.

March 14, 2014 12:00 AM

March 03, 2014

Ian Bicking

Towards a Next Level of Collaboration

With TogetherJS we’ve been trying to make a usable tool for the web we have, and the browsers we have, and the web apps we have. But we’re also accepting a lot of limitations.

For a particular scope the limitations in TogetherJS are reasonable, but my own goals have been more far-reaching. I am interested in collaboration with as broad a scope as the web itself. (But no broader than the web because I’m kind of biased.) “Collaboration” isn’t quite the right term — it implies a kind of active engagement in creation, but there are more ways to work together than collaboration. TogetherJS was previously called TowTruck, but we wanted to rename it to something more meaningful. While brainstorming we kept coming back to names that included some form of “collaboration” but I strongly resisted it because it’s such a mush-mouthed term with too much baggage and too many preconceptions.

When we came up with “together” it immediately seemed right. Admittedly the word feels a little cheesy (it’s a web built out of hugs and holding hands!) but it covers the broad set of activities we want to enable.

With the experience from TogetherJS in mind I want to spend some time thinking about what a less limited tool would look like. Much of this has become manifest in Hotdish, and the notes below have informed its design.

Degrees of collaboration/interaction

Intense collaboration is cool, but it’s not comprehensive. I don’t want to always be watching over your shoulder. What will first come to mind is privacy, but that’s not interesting to me. I would rather address privacy by helping you scope your actions, let you interact with your peers or not and act appropriately with that in mind. I don’t want to engage with my collaborators all the time because it’s boring and unproductive and my eyes glaze over. I want to engage with other people appropriately: with all the intensity called for given the circumstances, but also all the passivity that is also sometimes called for.

I’ve started to think in terms of categories of collaboration:

1. Asynchronous message-based collaboration

This includes email of course, but also issue trackers, planning tools, any notification system. If you search for “collaboration software” this is most of what you find, and much of the innovation is in representing and organizing the messages.

I don’t think I have any particularly new ideas in this well-explored area. That’s not to say there aren’t lots of important ideas, but the work I want to do is in complementing these tools rather than competing with them. But I do want to note that they exist on this continuum.

2. Ambient awareness

This is the awareness of a person’s presence and activity. We have a degree of this with Instant Messaging and chat rooms (IRC, Campfire, etc). But they don’t show what we are actively doing, just our presence or absence, and in the case of group discussions some of what we’re discussing with other people.

Many tools that indicate presence also include status messages which would purport to summarize a person’s current state and work. I’ve never worked with people who keep those status messages updated. It’s a very explicit approach. At best it devolves into a record of what you had been doing.

A more interesting tool to make people’s presence more present is Sqwiggle, a kind of always-on video conference. It’s not exactly always-on: there is low-fidelity video with no audio until you start a conversation with someone, at which point it goes to full video and audio. This way you know not only if someone is actually sitting at the computer, but also if they are eating lunch, if they have the furrowed brows of careful concentration, or are frustrated or distracted. Unfortunately most people’s faces only show that they are looking at a screen, with the slightly studious but mostly passive facial expressions that we have when looking at screens.

Instant messaging has grown to include an additional presence indicator: I am currently typing a response. A higher-fidelity version of this would indicate if I am typing right now, or if I forgot I started typing and switched tabs but left text in the input box, or if I am trying hard to compose my thoughts (typing and deleting), or if I’m pasting something, or if I am about to deliver a soliloquy in the form of a giant message. (Imagine a typing indicator that gives a sense of the number of words you have typed but not sent.)
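That richer indicator can be sketched in a few lines. The function and its thresholds below are illustrative, not any real IM API: classify the composer's state from the draft text and the time since the last keystroke.

```javascript
// A hypothetical richer typing indicator: infer the composer's state from
// the draft contents and keystroke timing, rather than a bare "is typing" bit.
function composerState(draft, msSinceLastKey) {
  const words = draft.split(/\s+/).filter(Boolean).length;
  if (words === 0) return "idle";
  if (msSinceLastKey > 60000) return "abandoned draft"; // typed, then switched tabs
  if (words > 100) return "soliloquy incoming";         // a giant message is brewing
  return "typing";
}

console.log(composerState("", 0));                   // "idle"
console.log(composerState("be right there", 400));   // "typing"
console.log(composerState("be right there", 90000)); // "abandoned draft"
```

A real version would also watch for typing-and-deleting churn and paste events, which this sketch leaves out.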

I like that instant messaging detects your state automatically, using something that you are already engaged with (the text input box). Sqwiggle has a problem here: because you aren’t trying to project any emotions to your computer screen, Sqwiggle catches expressions that don’t mean anything. We can engage with our computers in different ways; there’s something there to express, it’s just not revealed on our faces.

I’d like to add to the activity indicators we have. Like the pages (and web apps) you are looking at (or some privacy-aware subset). I’d like to show how you are interacting with those pages. Are you flopping between tabs? Are you skimming? Scrolling through in a way that shows you are studying the page? Typing? Clicking controls?

I want to show something like the body language of how you are interacting with the computer. First I wondered if we could interpret your actions and show them as things like “reading”, “composing”, “being pissed off with your computer”, etc. But then I thought more about body language. When I am angry there’s no “angry” note that shows up above my head. A furrowed brow isn’t a message, or at least mostly not a message. Body language is what we read from cues that aren’t explicit. And so we might be able to show what a person is doing, and let the person watching figure out why.

3. Working in close parallel

This is where both people (or more than 2 people) are actively working on the same thing, same project, same goal, but aren’t directly supporting each other at every moment.

When you’ve entered into this level of collaboration you’ve both agreed that you are working together — you’re probably actively talking through tasks, and may regularly be relying on each other (“does what I wrote sound right?” or “did you realize this test is failing” etc). A good working meeting will be like this. A bad meeting would probably have been better if you could have stuck to ambient awareness and promoted it to a more intense level of collaboration only as needed.

4. Working directly

This is where you are both locked on a single task. When I write something and say “does what I wrote sound right?” we have to enter this mode: you have to look at exactly what I’m talking about. In some sense “close parallel” may mean “prepared to work directly”.

I have found that video calls are better than audio-only calls, more than I would have expected. It’s not because the video content is interesting. But the video makes you work directly, while being slightly uncomfortable so you are encouraged to acknowledge when you should end the call. In a way you want your senses filled. Or maybe that’s my propensity to distraction.

There’s a lot more to video calls than this (like the previously mentioned body language). But in each feature I suspect there are parallels in collaborative work. Working directly together should show some of the things that video shows when we are focused on a conversation, but can’t show when we are focusing on work.

5. Demonstrating to another person

This is common for instruction and teaching, but that shouldn’t be the only case we consider. In Hotdish we have often called it “presenting” and “viewing”. In this mode someone is the driver/presenter, and someone is the passenger/viewer. When the presenter focuses on something, you want the viewer to be aware of that and follow along. The presenter also wants to be confident that the viewer is following along. Maybe we want something like how you might say “uh huh” when someone is talking to you — if a listener says nothing it will throw off the talker, and these meaningless indications of active listening are important.

Demonstration could just be a combination of direct work and social convention. Does it need to be specially mediated by tools? I’m not sure. Do we need a talking stick? Can I take the talking stick? Are these interactions like a conversation, where sometimes one person enters into a kind of monologue, but the rhythm of the conversation will shift? If we focus on the demonstration tools we could miss the social interactions we are trying to support.

Switching modes

Between each of these styles of interaction I think there must be some kind of positive action. A natural promotion or demotion of your interaction with someone should be mutual. (A counter example would be the dangling IM conversation, where you are never sure it’s over.)

At the same time, the movement between modes also builds your shared context and your relationship with the other person. You might be proofing an article with another person, and you say: “clearly this paragraph isn’t making sense, let me just rewrite it, one minute” — now you know you are leaving active collaboration, but you also both know you’ll be reentering it soon. You shouldn’t have to record that expectation with the tool.

I’m reluctant to put boundaries up between these modes, I’d rather tools simply inform people that modes are changing and not ask if they can change. This is part of the principles behind Defaulting To Together.


At least in the context of computers we often have strong notions of ownership. Maybe we don’t have to — maybe it’s because we have to hand off work explicitly, and maybe we have to hand off work explicitly because we lack fluid ways to interact, cooperate, delegate.

With good tools in hand I see “ownership” being exchanged more regularly:

  • I find some documentation, then show it to you, and now it’s yours to make use of.

  • I am working through a process, get stuck, and need your skills to finish it up. Now it’s yours. But you might hand it back when you unstick me.

  • You are working through something, but are not permitted to complete the operation; you have to hand it over to me to complete the last step.

Layered on this we have the normal notions of ownership and control — the login accounts and permissions of the applications we are using. Whether these are in opposition to cooperation or maybe complementary I have not decided.

Screensharing vs. Peer-to-Peer

Perhaps a technical aside, but when dealing with real-time collaboration (not asynchronous) there are two distinct approaches.

Screensharing means one person (and one computer) is “running” the session — that one person is logged in, their page or app is “live”, everyone else sees what they see.

Screensharing doesn’t mean other people can’t interact with the screen, but any interaction has to go through the owner’s computer. In the case of a web page we can share the DOM (the current visual state of the page) with another person, but we can’t share the Javascript handlers and state, cookies, etc., so most interactions have to go back through the original browser. Any side effects have to make a round trip. Latency is a problem.

It’s hard to figure out exactly what interactivity to implement in a screensharing situation. Doing a view-only interaction is not too hard. There are a few things you can add after that — maybe you let someone touch a form control, suggest that you follow a link, send clicks across the wire — but there’s no clear line to stop at. Worse, there’s no clear line to express. You can implement certain mechanisms (like a click), but these don’t always map to what the user thinks they are doing — something like a drag might involve a mousedown/mousemove/mouseup event, or it might be implemented directly as dragging. Implementing one of those interactions is a lot easier than the other, but the distinction means nothing to the user.

When you implement incomplete interactions you are setting up a situation where a person can do something in the original application that viewers can’t do, even though it looks like the real live application. An uncanny valley of collaboration.

I’ve experimented with DOM-based screen sharing in Browser Mirror, and you can see this approach in a tool like Surfly. As I write this a minimal version of this is available in Hotdish.

In peer-to-peer collaboration both people are viewing their own version of the live page. Everything works exactly like in the non-collaborative environment. Both people are logged in as themselves. This is the model TogetherJS uses, and is also present as a separate mode in Hotdish.

This has a lot of obvious advantages over the problems identified above for screensharing. The big disadvantage is that hardly anything is collaborative by default in this model.

In the context of the web the building blocks we do have are:

  • URLs. Insofar as a URL defines the exact interface you look at, then putting both people at the same URL gives a consistent experience. This works great for applications that use lots of server-side logic. Amazon is pretty great, for example, or Wikipedia. It falls down when content is substantially customized for each person, like the Facebook frontpage or a flight search result.

  • Event echoing: events aren’t based on any internal logic of the program; they are initiated by the user. So if the local user can do something, a remote user can do it too. Form fields are the best example of this, as there’s a clear protocol for making form changes (change the value, fire a change event).
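That form-field protocol can be sketched without any real DOM. Here `sendToPeer` and the field object are stand-ins for a real transport and a real input element:

```javascript
// Echo local form changes to a peer, and replay the peer's changes locally,
// following the "change the value, fire a change event" protocol described
// above. `sendToPeer` and the field objects are stand-ins, not a real API.
function makeEchoedField(name, sendToPeer) {
  const listeners = [];
  return {
    name,
    value: "",
    addChangeListener(fn) { listeners.push(fn); },
    // Local edit: update the value, notify app logic, and echo to the peer.
    userInput(value) {
      this.value = value;
      listeners.forEach((fn) => fn(value));
      sendToPeer({ type: "form-change", name, value });
    },
    // Remote edit: update and fire change listeners, but don't re-echo
    // (that would bounce the same change back and forth forever).
    applyRemote(value) {
      this.value = value;
      listeners.forEach((fn) => fn(value));
    },
  };
}

// A local edit on one side produces a message to replay on the other:
const outbox = [];
const titleField = makeEchoedField("title", (msg) => outbox.push(msg));
titleField.userInput("Draft 1");
console.log(outbox[0]); // { type: "form-change", name: "title", value: "Draft 1" }
```

Note the asymmetry between `userInput` and `applyRemote`: suppressing the echo on replayed changes is what keeps two echoing clients from feeding back into each other.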

But we don’t have:

  • Consistent event results: events aren’t state changes, and transferring events about doesn’t necessarily lead to a consistent experience. Consider the modest toggle control, where a click on the toggler element shows or hides some other element. If our hidden states are out of sync (e.g., my toggleable element is hidden, yours is shown), sending the click event between the clients keeps them consistently and perfectly out of sync.

  • Consistent underlying object models. In a single-page app, or whatever fancy JavaScript-driven webapp, a lot of what we see is based on JavaScript state and models that are not necessarily consistent across peers. This is in contrast to old-school server-side apps, where there’s a good chance the URL contains enough information to keep everything consistent, and ultimately the “state” is held on a single server or database that both peers are connecting to. But we can’t sync the clients’ object models, as they are not built to support arbitrary modification from the outside. Apps that use a real-time database work well.
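The toggle example is easy to make concrete: echoing the click event, rather than the resulting state, preserves whatever inconsistency the clients started with.

```javascript
// Echoing events instead of state: two clients whose toggle panels start
// out of sync stay perfectly out of sync, because a click only flips
// whatever local state each client already has.
function makeToggle(visible) {
  return {
    visible,
    click() { this.visible = !this.visible; },
  };
}

const mine = makeToggle(false); // my panel happens to start hidden
const yours = makeToggle(true); // yours happens to start shown

// I click; the click event is echoed to your client:
mine.click();
yours.click();

console.log(mine.visible, yours.visible); // true false — still inconsistent
```

Syncing the *state* (`visible`) instead of the *event* would converge the two clients, but that requires knowing which piece of application state the click affects, which is exactly what generic event echoing doesn't know.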

To make this work the application usually has to support peer-to-peer collaboration to some degree. A messy approach can help, but can never be enough, not complete enough, not robust enough.

So peer-to-peer collaboration offers potentially more powerful and flexible kinds of collaboration, but only with work on the part of each application. We can try to make it as easy as possible, and maybe integrate with tools or libraries that support the kinds of higher-level synchronization we would want, but it’s never reliably easy.

Synchronized vs. Coordinated Experiences

Another question: what kind of experiences do we want to create?

The most obvious real-time experience is: everyone sees the same thing. Everything is fully synchronized. In the screensharing model this is what you always get and what you have to get.

The obvious experience is probably a good starting point, but shouldn’t be the end of our thinking.

The trivial example here is the cursor point. We can both be editing content and viewing each other’s edits (close to full sync), but we don’t have to be at exactly the same place. (This is something traditional screensharing has a hard time with, as you are sharing a screen of pixels instead of a DOM.)

But other more subtle examples exist. Maybe only one person has the permission to save a change. A collaboration-aware application might allow both people to edit, while still only allowing one person to save. (Currently editors will usually be denied to people who don’t have permission to save.)

I think there’s fruit in playing with the timing of actions. We don’t have to replay remote actions exactly how they occurred. For example, in a Demonstration context we might detect that when the driver clicks a link the page will change. To the person doing the click the order of events is: find the link, focus attention on the link, move cursor to the link, click. To the viewer the order of events is: cursor moves, maybe a short click indicator, and boom you are at a new page. There’s much less context given to the viewer. But we don’t have to display those events with the original timing for instance we could let the mouse hover over its target for a more extended amount of time on the viewer.

High-level (application-specific) representation of actions could be available. Instead of trying to express what the other person is doing through every click and scroll and twiddling of a form, you might just say “Bob created a new calendar event”.

In the context of something like a bug tracker, you might not want to synchronize the comment field. Instead you might want to show individual fields for all participants on a page/bug. Then I can see the other person’s in-progress comment, even add to it, but I can also compose my own comment as myself.

This is where the peer-to-peer model has advantages, as it will (by necessity) keep the application in the loop. It does not demand that collaboration take one form, but it gives the application an environment in which to build a domain-specific form of collaboration.

We can imagine moving from screenshare to peer-to-peer through a series of enhancements. The first might be: let applications opt in to peer-to-peer collaboration, or implement a kind of transparent-to-the-application screensharing, and from there tweak. Maybe you indicate that some scripts should run on the viewer’s side, and that some compound UI components can be manipulated. I can imagine that with a component system like Brick you could identify safe ways to run rich components, avoiding latency.

How do you package all this?

Given tools and interactions, what is the actual context for collaboration?

TogetherJS has a model of a persistent session, and you invite people to that session. The session is bound to a specific domain (though not a specific page), but only for technical reasons.

In Hotdish we’ve used a group approach: you join a group, and your work clearly happens in the group context or not.

One of the interesting things I’ve noticed when getting feedback about TogetherJS is that people are most interested in controlling and adding to how sessions are set up. While, as an implementor, I find myself drawn to the tooling and specific experiences of collaboration, there’s just as much value in allowing new and interesting groupings of people. Ways to introduce people, ways to start and end collaboration, ways to connect to people by role instead of identity, and so on.

Should this collaboration be a conversation or an environment? When it is a conversation you lead off with the introduction, the “hello”, the “so why did you call?”, and finish with “talk to you later” — when it is an environment you enter the environment and any coparticipants are just there; you don’t preestablish any specific reason to collaborate.

And in conclusion…

I’m still developing these ideas. And for each idea the real test is if we can create a useful experience. For instance, I’m pretty sure there’s some ambient information we want to show, but I haven’t figured out what.

Experience has shown that simple history (as in an activity stream) seems too noisy. And is history shown by group or person?

In the past I unintentionally exposed all tab focus and unfocus in TogetherJS, and it felt weird to both expose my own distracted state and my collaborator’s distraction. But part of why it was weird was that in some cases it was simply distraction, but in other cases it was useful multitasking (like researching a question in another tab). Was tab focus too much information or too little?

I am still in the process of figuring out how and where I can explore these questions, build the next thing, and the next thing after that. The tooling I envision doesn’t feel impossibly far away, but there is still more than one iteration of work to be done; maybe many more than one, but I can only see to the next peak.

Who else is thinking about these things? And thinking about how to build these things? If you are, or you know someone who is, please get in contact — I’m eager to talk specifics with people who have been thinking about it too, but I’m not sure how to find these people.

by Ian Bicking at March 03, 2014 06:40 PM

Greg Linden

More quick links

More of what caught my attention recently:
  • Cool new tech, especially for mobile, detecting gesture movements from the changes they make to ambient wireless signals, uses a fraction of the power of other techniques ([1] [2] [3])

  • Also for mobile: "The big trick here is ... two [camera] lenses with two different focal lengths. One lens is wide-angle, while the other is at 3x zoom ... magnify more distant subjects ... improved low-light performance ... noise is reduced ... just as we would if we had one big imaging sensor instead of two little ones ... [and] depth analysis allows ... [auto] blurring out of backgrounds in portrait shots, quicker autofocus, and augmented reality." ([1])

  • "These are not the first artificial muscles to have been created, but they are among the first that are inexpensive and store large amounts of energy" ([1])

  • "Tesla is a glimpse into a future where cars and computers coexist in seamless harmony" ([1])

  • "Fields from anthropology to zoology are becoming information fields. Those who can bend the power of the computer to their will – computational thinking but computer science in greater depth – will be positioned for greater success than those who can’t." ([1] [2])

  • The CEOs of Amazon, Facebook, Google, Microsoft, Twitter, Netflix, and Yahoo have CS degrees

  • Details on fixing a troubled project: "What's so impressive is how much they changed the culture in such a short time, from a hierarchical structure where no one would take any responsibility to an egalitarian one where everyone was focused on solving problems." ([1])

  • Clever idea, advertise to find experts on the Web and then get them to answer questions for free by enticing them into playing a little quiz game ([1] [2])

  • "A key to Google’s epic success was the discipline the company maintained around its hiring ... During his first seven years, the executive team met every week to review every single hiring candidate." ([1] [2])

  • "Peter Norvig, Google's research director, said recently that the company employs 'less than 50% but certainly more than 5%' of the world's leading experts on machine learning" ([1])

  • Yahoo is trying to rebuild its research group, which was destroyed by its previous CEO ([1] [2] [3] [4] [5] [6])

  • Software increasingly needs to be aware of its power consumption, the cost of power, and the availability of power, and be able to reduce its power consumption when necessary ([1] [2])

  • "Viewers with a buffer-free experience watch 226% more and viewers receiving better picture quality watch 25% longer" ([1])

  • Gaming the most popular lists in the app stores: "Total estimated cost to reach the top ten list: $96,000" ([1] [2])

  • "The Rapiscan 522 B x-ray system used to scan carry-on baggage in airports worldwide ... runs on the outdated Windows 98 operating system, stores user credentials in plain text, and includes a feature called Threat Image Projection used to train screeners by injecting .bmp images of contraband ... [that] could allow a bad guy to project phony images on the X-ray display." ([1])

  • "It would appear that a surprising number of people use webcam conversations to show intimate parts of their body to the other person." ([1])

  • "Ohhh there's not another cable company, is there? Oh that's right we're the only one in town." ([1])

  • It "sounds like it's straight out of a sci-fi horror flick: they thawed some 30,000-year-old permafrost and allowed any viruses present to infect some cells" ([1])

  • Very funny if you (or your kids) are a fan of Portal, educational too, and done by NASA ([1])

  • NPR's "Wait Wait" did a segment on Amazon's "Customers who bought this", very funny ([1])

by Greg Linden at March 03, 2014 03:47 PM

February 26, 2014

Blue Sky on Mars

Only 90s Web Developers Remember This

Please sign my guestbook.

Have you ever shoved a <blink> into a <marquee> tag? Pixar gets all the accolades today, but in the 90s this was a serious feat of computer animation. By combining these two tags, you were a trailblazer. A person capable of great innovation. A human being that all other human beings could aspire to.

You were a web developer in the 1990s.

With that status, you knew you were hot shit. And you brought with you a score of the most fearsome technological innovations, the likes of which we haven't come close to replicating ever since.

Put down the jQuery, step away from the non-relational database: we have more important things to talk about.


1x1.gif should have won a fucking Grammy. Or a Pulitzer. Or Most Improved, Third Grade Gym Class or something. It's the most important achievement in computer science since the linked list. It's not the future we deserved, but it's the future we needed (until the box model fucked it all up).

If you're not familiar with the humble 1x1.gif trick, here it is:

Can't see it? Here, enhance:

The 1x1.gif — or spacer.gif, or transparent.gif — is just a one pixel by one pixel transparent GIF. Just like the most futuristic CSS framework of today but in a billionth of the file size, 1x1.gif is fully optimized for the responsive web. You had to use these advanced attributes to tap into its power, though:

<IMG SRC="/1x1.gif" WIDTH=150 HEIGHT=250>

By doing this you can position elements ANYWHERE ON THE PAGE. Combine this with semantically-appropriate containers and you could do amazing things:

    <TD><IMG SRC="1x1.gif" WIDTH=300>
    <TD><FONT SIZE=42>Hello welcome to my <MARQUEE>Internet Web Home</MARQUEE></FONT>
    <TD BGCOLOR=RED><IMG SRC="/cgi/webcounter.cgi">

1x1.gif let you push elements all around the page effortlessly. To this day it is the only way to vertically center elements.


Are images too advanced for you? HTML For Dummies doesn't cover the <IMG> tag until chapter four? Well, you're in luck: the &nbsp; tag is here!

You may be saying to yourself, "Self, I know all about HTML entity encoding. What is this dastardly handsome man going on about?"

The answer, dear reasonably attractive reader, is an innovation that youth of today don't respect nearly enough: the stacked &nbsp;. Much like the 1x1.gif trick, you can just arbitrarily scale &nbsp; for whatever needs you may face:

&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;MY GUESTBOOK BELOW:

If I had a nickel for how many times I wrote &nbsp; in the 90s, I'd have enough money to cover the monthly overage bills from AOL.

Dotted underlines, border effects

Towards the end of the golden era of HTML, CSS appeared on the scene, promising a world of separating content from style, and we've been dealing with that disaster ever since.

The absolute first thing we did with CSS was use it to stop underlining links. Overnight, the entire internet converted into this sludge of a medium where text looked like links and links looked like text. You had no idea where to click, but hell that didn't really matter anyway because we had developed cursor effects (you haven't lived until your mouse had a trail of twelve fireballs behind it).

This was such a compelling use of advanced technology that it was literally all we used CSS for initially. I even have proof from an index.shtml (fuck yes SSI) file from 2000:

<style type="text/css">
a:hover {text-decoration: none; color: #000000}
</style>

That's it. That's the entire — inline, of course — CSS for this file. Make sure when you hover the link, remove the underline and paint it black. From this, entire interactive websites are born.


As soon as we had the technology to remove underlines from links, we decided to combine it with the power to show alert("Welcome to my website!") messages on page load. CSS and JavaScript joined forces to form the Technology of Terror: DHTML.

DHTML, which stands for "dynamic HTML", was the final feather in our cap of web development tools. It would stand the test of time, ensuring that we could make snowflakes fall from the top of the page, build an accordion menu animated image map, or build our own custom <marquee> using semantic tags like <div>.

DHTML helped transition web development from a hobbyist pastime into a full-fledged profession. Sites like Dynamic Drive meant that instead of thinking through creative solutions for problems you face, you could just copy and paste this 50 line block of code and everything would be fixed. In effect, DHTML was the Twitter Bootstrap of the time.

Pixel fonts

Computer screens were not large. I mean, they were large, since CRT was the shit, but they didn't have a high resolution. Therefore, the best way to leverage those pixels is to write everything in tiny six-point font.

Along those lines, web developers aspired to become illustrators when they looked at these simplistic typefaces and realized they were made up of pixels. You started to see these weird attempts at isometric pixel illustration on splash screens, made by developers whose time and money was probably better spent investing in a .com IPO rather than installing Photoshop.


It's come to my attention that people today don't like Internet Explorer. I can only believe they hate Internet Explorer because it has devolved from its purest form, Internet Explorer 4.0.

Internet Explorer 4.0 was perfection incarnate in a browser. It had Active Desktop. It had Channels. It had motherfucking Channels, the coolest technology that never reached market adoption ever not even a little bit. IE4, in general, was so good that you were going to have it installed on your PC whether you liked it or not.

When you're part of an elite group of people who fully understand the weight of perfection, there is a natural tendency to tell everyone you meet that you and you alone have the gravitas necessary to make these hard decisions. Decisions like what browser your visitors should use.

So we proudly displayed dozens of 88x31 pixel buttons on our sites:

These were everywhere. It's kind of like the ribbons displayed on a uniform of a military officer: they told the tale of all the battles the individual had fought in order to get to where they were today. In other words, which editor (FrontPage '98, obviously), which web server (GeoCities, you moron), and which web ring you were a part of (whichever listed your site highest, which was none of them).

I miss the good ol' days. Today we have abstractions on top of abstractions on top of JavaScript, of all things. Shit doesn't even know how to calculate math correctly. It's amazing we ever got to where we are today, when you think about it.

So raise a glass proudly, and do us all a favor: paste a shit ton of &nbsp;s into your next pull request, just to fuck with your team a little bit.

February 26, 2014 12:00 AM

February 24, 2014

Decyphering Glyph


The Oak and the Reed by Achille Michallon

… that which is hard and stiff
is the follower of death
that which is soft and yielding
is the follower of life …

– the Tao Te Ching, chapter 76

Problem: Threads Are Bad

As we know, threads are a bad idea (for most purposes). Threads make local reasoning difficult, and local reasoning is perhaps the most important thing in software development.

With the word “threads”, I am referring to shared-state multithreading, despite the fact that there are languages, like Erlang and Haskell, which refer to concurrent processes – those which do not implicitly share state, and require explicit coordination – as “threads”.

My experience is mainly (although not exclusively) with Python but the ideas presented here should generalize to most languages which have global shared mutable state by default, which is to say, quite a lot of them: C (including Original Recipe, Sharp, Extra Crispy, Objective, and Plus Plus), JavaScript, Java, Scheme, Ruby, and PHP, just to name a few.

With the phrase “local reasoning”, I’m referring to the ability to understand the behavior (and thereby, the correctness) of a routine by examining the routine itself rather than examining the entire system.

When you’re looking at a routine that manipulates some state, in a single-tasking, nonconcurrent system, you only have to imagine the state at the beginning of the routine, and the state at the end of the routine. To imagine the different states, you need only to read the routine and imagine executing its instructions in order from top to bottom. This means that the number of instructions you must consider is n, where n is the number of instructions in the routine. By contrast, in a system with arbitrary concurrent execution – one where multiple threads might concurrently execute this routine with the same state – you have to read the method in every possible order, making the complexity nn.

Therefore it is – literally – exponentially more difficult to reason about a routine that may be executed from an arbitrary number of threads concurrently. Instead, you need to consider every possible caller across your program, understanding what threads they might be invoked from, or what state they might share. If you’re writing a library designed to be thread-safe, then you must place some of the burden of this understanding on your caller.

The importance of local reasoning really cannot be overstated. Computer programs are, at least for the time being, constructed by human beings who are thinking thoughts. Correct computer programs are constructed by human beings who can simultaneously think thoughts about all the interactions that the portion of the system they’re developing will have with other portions.

A human being can only think about seven things at once, plus or minus two. Therefore, although we may develop software systems that contain thousands, millions, or billions of components over time, we must be able to make changes to that system while only holding in mind an average of seven things. Really bad systems will make us concentrate on nine things and we will only be able to correctly change them when we’re at our absolute best. Really good systems will require us to concentrate on only five, and we might be able to write correct code for them even when we’re tired.

Aside: “Oh Come On They’re Not That Bad”

Those of you who actually use threads to write real software are probably objecting at this point. “Nobody would actually try to write free-threading code like this,” I can hear you complain, “Of course we’d use a lock or a queue to introduce some critical sections if we’re manipulating state.”

Mutexes can help mitigate this combinatorial explosion, but they can’t eliminate it, and they come with their own cost; you need to develop strategies to ensure consistent ordering of their acquisition. Mutexes should really be used to build queues, and to avoid deadlocks those queues should be non-blocking; but eventually a system which communicates exclusively through non-blocking queues effectively becomes a set of communicating event loops, and its problems revert to those of an event-driven system – it doesn’t look like regular programming with threads any more.
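
The consistent-ordering strategy can at least be sketched briefly; the helper below is hypothetical, and assumes sorting locks by `id()` as a process-wide total order:

```python
import threading
from contextlib import ExitStack, contextmanager

@contextmanager
def ordered_locks(*locks):
    # Sorting by id() gives every caller the same global acquisition
    # order, ruling out the hold-and-wait cycle behind deadlocks.
    with ExitStack() as stack:
        for lock in sorted(locks, key=id):
            stack.enter_context(lock)
        yield

a, b = threading.Lock(), threading.Lock()

# Both call sites acquire in the same underlying order, regardless of
# the argument order they happen to use:
with ordered_locks(a, b):
    pass
with ordered_locks(b, a):
    pass
```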

But even if you build such a system, if you’re using a language like Python (or the ones detailed above) where modules, classes, and methods are all globally shared, mutable state, it’s always possible to make an error that will affect the behavior of your whole program without even realizing that you’re interacting with state at all. You have to have a level of vigilance bordering on paranoia just to make sure that your conventions around where state can be manipulated and by whom are honored, because when such an interaction causes a bug it’s nearly impossible to tell where it came from.
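
A tiny, hypothetical illustration of how little it takes: a class attribute that looks like a constant is really program-wide mutable state, so a distant component can change everyone’s behavior.

```python
class Connection:
    timeout = 30  # reads like a constant...

def impatient_component():
    # ...but any code, anywhere, can rebind it for the whole program:
    Connection.timeout = 1

conn = Connection()
impatient_component()
print(conn.timeout)  # 1 – an unrelated component changed our behavior
```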

Of course, threads are just one source of inscrutable, brain-bending bugs, and quite often you can make workable assumptions that preclude you from actually having to square the complexity of every single routine that you touch; for one thing, many computations don’t require manipulating state at all, and you can (and must) ignore lots of things that can happen on every line of code anyway. (If you think not, when was the last time you audited your code base for correct behavior in the face of memory allocation failures?) So, in a sense, it’s possible to write real systems with threads that perform more or less correctly for the same reasons it’s possible to write any software approximating correctness at all; we all need a little strength of will and faith in our holy cause sometimes.

Nevertheless, I still think it’s a bad idea to make things harder for ourselves if we can avoid it.

Solution: Don’t Use Threads

So now I’ve convinced you that if you’re programming in Python (or one of its moral equivalents with respect to concurrency and state) you shouldn’t use threads. Great. What are you going to do instead?

There’s a lot of debate over the best way to do “asynchronous” programming – that is to say, “not threads”. Four options are often presented:

  1. Straight callbacks: Twisted’s IProtocol, JavaScript’s on<foo> idiom, where you give a callback to something which will call it later and then return control to something (usually a main loop) which will execute those callbacks,
  2. “Managed” callbacks, or Futures: Twisted’s Deferred, JavaScript’s Promises/A+, E’s Promises, where you create a dedicated result-that-will-be-available-in-the-future object and return it for the caller to add callbacks to,
  3. Explicit coroutines: Twisted’s @inlineCallbacks, Tulip’s yield from coroutines, C#’s async/await, where you have a syntactic feature that explicitly suspends the current routine,
  4. and finally, implicit coroutines: Java’s “green threads”, Twisted’s Corotwine, eventlet, gevent, where any function may switch the entire stack of the current thread of control by calling a function which suspends it.

One of these things is not like the others; one of these things just doesn’t belong.

Don’t Use Those Threads Either

Options 1-3 are all ways of representing the cooperative transfer of control within a stateful system. They are a semantic improvement over threads. Callbacks, Futures, and Yield-based coroutines all allow for local reasoning about concurrent operations.
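
To make the first and third styles concrete, here is a toy sketch driving the same lookup first callback-style and then as an explicit coroutine; none of these names belong to any real framework.

```python
balances = {"alice": 100}
tasks = []      # stands in for a real main loop's pending callbacks
results = []

def fetch_balance_cb(account, on_result):
    # Option 1: hand over a callback and return to the main loop,
    # which invokes it later with the result.
    tasks.append(lambda: on_result(balances[account]))

def fetch_balance_coro(account):
    # Option 3: the yield marks the one place this routine suspends.
    balance = yield account
    return balance

# Drive the callback version:
fetch_balance_cb("alice", results.append)
for task in tasks:
    task()

# Drive the coroutine version with a trivial scheduler:
coro = fetch_balance_coro("alice")
account = next(coro)                # runs up to the yield
try:
    coro.send(balances[account])    # resume with the "I/O" result
except StopIteration as done:
    results.append(done.value)

print(results)  # [100, 100]
```

In both cases the suspension is visible in the source: the callback version ends its routine, and the coroutine version says yield.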

So why does option 4 even show up in this list?

Unfortunately, “asynchronous” systems have often been evangelized by emphasizing a somewhat dubious optimization which allows for a higher level of I/O-bound concurrency than with preemptive threads, rather than the problems with threading as a programming model that I’ve explained above. By characterizing “asynchronousness” in this way, it makes sense to lump all 4 choices together.

I’ve been guilty of this myself, especially in years past: saying that a system using Twisted is more efficient than one using an alternative approach using threads. In many cases that’s been true, but:

  1. the situation is almost always more complicated than that, when it comes to performance,
  2. “context switching” is rarely a bottleneck in real-world programs, and
  3. it’s a bit of a distraction from the much bigger advantage of event-driven programming, which is simply that it’s easier to write programs at scale, in both senses (that is, programs containing lots of code as well as programs which have many concurrent users).

A system that presents “implicit coroutines” – those which may transfer control to another concurrent task at any layer of the stack without any syntactic indication that this may happen – is simply the dubious optimization by itself.

Despite the fact that implicit coroutines masquerade under many different names, many of which don’t include the word “thread” – for example, “greenlets”, “coroutines”, “fibers”, “tasks” – green or lightweight threads are indeed threads, in that they present these same problems. In the long run, when you build a system that relies upon them, you eventually have all the pitfalls and dangers of full-blown preemptive threads. Which, as shown above, are bad.

When you look at the implementation of a potentially concurrent routine written using callbacks or yielding coroutines, you can visually see exactly where it might yield control, either to other routines, or perhaps even re-enter the same routine concurrently. If you are using callbacks – managed or otherwise – you will see a return statement, or the termination of a routine, which allows execution of the main loop to potentially continue. If you’re using explicit coroutines, you’ll see a yield (or await) statement which suspends the coroutine. Because you can see these indications of potential concurrency, they’re outside of your mind, in your text editor, and you don’t need to actively remember them as you’re working on them.

You can think of these explicit yield-points as places where your program may gracefully bend to the needs of concurrent inputs. Crumple zones, or relief valves, for your logic, if you will: a single point where you have to consider the implications of a transfer of control to other parts of your program, rather than a rigid routine which might transfer (break) at any point beyond your control.

Like crumple zones, you shouldn’t have too many of them, or they lose their effectiveness. A long routine which has an explicit yield point before every single instruction requires just as much out-of-order reasoning, and is therefore just as error-prone as one which has none, but might context switch before any instruction anyway. The advantage of having to actually insert the yield point explicitly is that at least you can see when a routine has this problem, and start to clean up and consolidate the management of its concurrency.

But this is all pretty abstract; let me give you a specific practical example, and a small theoretical demonstration.

The Buggiest Bug

Brass Cockroach - Image Credit GlamourGirlBeads

When we wrote the very first version of Twisted Reality in Python, the version we had previously written in Java was already using green threads; at the time, the JVM didn’t have any other kind of threads. The advantage to the new networking layer that we developed was not some massive leap forward in performance (the software in question was a multiplayer text adventure, which at the absolute height of its popularity might have been played by 30 people simultaneously) but rather the dramatic reduction in the number and severity of horrible, un-traceable concurrency bugs. One, in particular, involved a brass, mechanical cockroach which would crawl around on a timer, leaping out of a player’s hands if it was in their inventory, moving between rooms if not. In the multithreaded version, the cockroach would leap out of your hands but then also still stay in your hands. As the cockroach moved between rooms it would create shadow copies of itself, slowly but inexorably creating a cockroach apocalypse as tens of thousands of pointers to the cockroach, each somehow acquiring their own timer, scuttled their way into every player’s inventory dozens of times.

Given that the feeling that this particular narrative feature was supposed to inspire was eccentric whimsy and not existential terror, the non-determinism introduced by threads was a serious problem. Our hope for the event-driven re-write was simply that we’d be able to diagnose the bug by single-stepping through a debugger; instead, the bug simply disappeared. (Echoes of this persist, in that you may rarely hear a particularly grizzled Twisted old-timer refer to a particularly intractable bug as a “brass cockroach”.)

The original source of the bug was so completely intractable that the only workable solution was to re-write the entire system from scratch. Months of debugging and testing and experimenting could still reproduce it only intermittently, and several “fixes” (read: random, desperate changes to the code) never resulted in anything.

I’d rather not do that ever again.

Ca(sh|che Coherent) Money

Despite the (I hope) entertaining nature of that anecdote, it still might be somewhat hard to visualize how concurrency results in a bug like that, and the code for that example is far too sprawling to be useful as an explanation. So here's a smaller in vitro example. Take my word for it that the source of the above bug was the result of many, many intersecting examples of the problem described below.

As it happens, this is the same variety of example Guido van Rossum gives when he describes why he chose to use explicit coroutines instead of green threads for the upcoming standard library asyncio module, born out of the “tulip” project, so it's happened to more than one person in real life.

Photo Credit: Ennor

Let’s say we have this program:

def transfer(amount, payer, payee, server):
    if not payer.sufficient_funds_for_withdrawl(amount):
        raise InsufficientFunds()
    log("{payer} has sufficient funds.", payer=payer)
    payee.deposit(amount)
    log("{payee} received payment", payee=payee)
    payer.withdraw(amount)
    log("{payer} made payment", payer=payer)
    server.update_balances([payer, payee])

(I realize that the ordering of operations is a bit odd in this example, but it makes the point easier to demonstrate, so please bear with me.)

In a world without concurrency, this is of course correct. If you run transfer twice in a row, the balance of both accounts is always correct. But if we were to run transfer with the same two accounts in an arbitrary number of threads simultaneously, it is (obviously, I hope) wrong. One thread could update a payer’s balance below the funds-sufficient threshold after the check to see if they’re sufficient, but before issuing the withdrawal.
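
That interleaving can be replayed deterministically by modeling the context switch as an explicit step; the names below are illustrative, not part of the example above.

```python
def transfer_steps(amount, acct):
    assert acct["payer"] >= amount   # the funds-sufficient check
    yield                            # <- where a preemptive thread may be switched out
    acct["payer"] -= amount          # the withdrawal, trusting the stale check

acct = {"payer": 100}
t1 = transfer_steps(100, acct)
t2 = transfer_steps(100, acct)

next(t1)  # t1 checks: 100 >= 100, fine
next(t2)  # t2 checks: still 100 >= 100, also fine
for t in (t1, t2):
    try:
        next(t)  # both withdrawals now proceed
    except StopIteration:
        pass

print(acct["payer"])  # -100: each thread acted on a stale check
```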

So, let’s make it concurrent, in the PEP 3156 style. That update_balances routine looks like it probably has to do some network communication and block, so let’s consider that it is as follows:

def transfer(amount, payer, payee, server):
    if not payer.sufficient_funds_for_withdrawl(amount):
        raise InsufficientFunds()
    log("{payer} has sufficient funds.", payer=payer)
    payee.deposit(amount)
    log("{payee} received payment", payee=payee)
    payer.withdraw(amount)
    log("{payer} made payment", payer=payer)
    yield from server.update_balances([payer, payee])

So now we have a trivially concurrent, correct version of this routine, although we did have to update it a little. Regardless of what sufficient_funds_for_withdrawl, deposit and withdraw do - even if they do network I/O - we know that we aren’t waiting for any of them to complete, so they can’t cause transfer to interfere with itself. For the sake of a brief example here, we’ll have to assume update_balances is a bit magical; for this to work our reads of the payer and payee’s balance must be consistent.

But if we were to use green threads as our “asynchronous” mechanism rather than coroutines and yields, we wouldn’t need to modify the program at all! Isn’t that better? And only update_balances blocks anyway, so isn’t it just as correct?

Sure: for now.

But now let’s make another, subtler code change: our hypothetical operations team has requested that we put all of our log messages into a networked log-gathering system for analysis. A reasonable request, so we alter the implementation of log to write to the network.

Now, what will we have to do to modify the green-threaded version of this code? Nothing! This is usually the point where fans of various green-threading systems will point and jeer, since once the logging system is modified to do its network I/O, you don’t even have to touch the code for the payments system. Separation of concerns! Less pointless busy-work! Looks like the green-threaded system is winning.

Oh well. Since I’m still a fan of explicit concurrency management, let’s do the clearly unnecessary busy-work of updating the ledger code.

def transfer(amount, payer, payee, server):
    if not payer.sufficient_funds_for_withdrawl(amount):
        raise InsufficientFunds()
    yield from log("{payer} has sufficient funds.", payer=payer)
    payee.deposit(amount)
    yield from log("{payee} received payment", payee=payee)
    payer.withdraw(amount)
    yield from log("{payer} made payment", payer=payer)
    yield from server.update_balances([payer, payee])

Well okay, at least that wasn’t too hard, if somewhat tedious. Sigh. I guess we can go and update all of the ledger’s callers now, too…

…wait a second.

In order to update this routine for a non-blocking version of log, we had to type a yield keyword between the sufficient_funds_for_withdrawl check and the withdraw call, between the deposit and the withdraw call, and between the withdraw and update_balances call. If we know a little about concurrency and a little about what this program is doing, we know that every one of those yield froms is a potential problem. If those log calls start to back up and block, a payer may have their account checked for sufficient funds, then funds could be deducted while a log message is going on, leaving them with a negative balance.

If we were in the middle of updating lots of code, we might have blindly added these yield keywords without noticing that mistake. I've certainly done that in the past, too. But just the mechanical act of typing these out is an opportunity to notice that something’s wrong, both now and later. Even if we get all the way through making the changes without realizing the problem, when we notice that balances are off, we can look only (reasoning locally!) at the transfer routine and realize, when we look at it, based on the presence of the yield from keywords, that there is something wrong with the transfer routine itself, regardless of the behavior of any of the things it’s calling.

In the process of making all these obviously broken modifications, another thought might occur to us: do we really need to wait for log messages to be transmitted to the logging system before moving on with our application logic? The answer would almost always be “no”. A smart implementation of log could simply queue some outbound messages to the logging system, then discard them if too many are buffered, removing any need for its caller to honor backpressure or slow down if the logging system can’t keep up. Consider the way syslog says “and N more” instead of logging certain messages repeatedly. That feature allows it to avoid filling up logs with repeated messages, and decreases the amount of stuff that needs to be buffered if writing the logs to disk is slow.
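
Such a log implementation might be sketched as follows; this is a hypothetical stand-in rather than any real logging library, with a counter playing the role of syslog’s “and N more”.

```python
import queue

class DroppingLog:
    def __init__(self, maxsize=1000):
        self._queue = queue.Queue(maxsize=maxsize)
        self.dropped = 0

    def log(self, message):
        try:
            self._queue.put_nowait(message)   # never blocks the caller
        except queue.Full:
            self.dropped += 1                 # summarized later as "and N more"

    def drain(self):
        # A background sender would call this and ship to the network.
        while True:
            try:
                yield self._queue.get_nowait()
            except queue.Empty:
                return

buffered_log = DroppingLog(maxsize=2)
for i in range(5):
    buffered_log.log(f"message {i}")
print(list(buffered_log.drain()), f"and {buffered_log.dropped} more")
# ['message 0', 'message 1'] and 3 more
```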

All the extra work you need to do to update all the callers of log when you make it asynchronous is therefore a feature. Tedious as it may be, the asynchronousness of an individual function is, in fact, something that all of its callers must be aware of, just as they must be aware of its arguments and its return type.

In fact you are changing its return type: in Twisted, that return type would be Deferred, and in Tulip, that return type is a new flavor of generator. This new return type represents the new semantics that happen when you make a function start having concurrency implications.

Haskell does this as well, by embedding the IO monad in the return type of any function which needs to have side-effects. This is what certain people mean when they say Deferreds are a Monad.

The main difference between lightweight and heavyweight threads is that, with rigorous application of strict principles like “never share any state unnecessarily”, and “always write tests for every routine at every point where it might suspend”, lightweight threads make it at least possible to write a program that will behave deterministically and correctly, assuming you understand it in its entirety. When you find a surprising bug in production because a routine is now suspending in a place it wasn’t before, it’s possible with a lightweight threading system to write a deterministic test that will exercise that code path. With heavyweight threads, any line could be the position of a context switch at any time, so it’s just not tractable to write tests for every possible order of execution.

However, with lightweight threads, you still can’t write a test to discover when a new yield point might be causing problems, so you're still always playing catch-up.

Although it’s possible to do this, it remains very challenging. As I described above, in languages like Python, Ruby, JavaScript, and PHP, even the code itself is shared, mutable state. Classes, types, functions, and namespaces are all shared, and all mutable. Libraries like object relational mappers commonly store state on classes.

No Shortcuts

Despite the great deal of badmouthing of threads above, my main purpose in writing this was not to convince you that threads are, in fact, bad. (Hopefully, you were convinced before you started reading this.) What I hope I’ve demonstrated is that if you agree with me that threading has problematic semantics, and is difficult to reason about, then there’s no particular advantage to using microthreads, beyond potentially optimizing your multithreaded code for a very specific I/O bound workload.

There are no shortcuts to making single-tasking code concurrent. It's just a hard problem, and some of that hard problem is reflected in the difficulty of typing a bunch of new concurrency-specific code.

So don’t be fooled: a thread is a thread regardless of its color. If you want your program to be supple and resilient in the face of concurrency, when the storm of concurrency blows, allow it to change. Tell it to yield, just like the reed. Otherwise, just like the steadfast and unchanging oak tree in the storm, your steadfast and unchanging algorithms will break right in half.

by Glyph at February 24, 2014 10:05 AM

February 22, 2014

Paul's Pontifications