Smart Disorganized (incoming)

March 31, 2015

Dave Winer

Podcast: Staring into watches

A 5-minute podcast speculating on new social behavior with Apple watches on Manhattan streets. Will people stop staring into their phone and stare into a watch instead? Now their eyes will be pointing down, instead of out, unless they're going to hold their wrists up at eye level? Or perhaps people will stare into the watch and the phone at the same time. If you put your watch on your left wrist and hold your phone in your right hand, it might just work. Either way, the success or failure of Apple's watch will be seen in the streets of Manhattan in a few weeks. Listen to this hilarity-cast for the scoop.

March 31, 2015 08:08 PM

Mark Bernstein

Teams

Time begins on opening day, which is Sunday night.

For a very long time, I’ve been curious about the differences between major league teams beyond their personnel. Are there significant differences between how different teams approach the game? Are there reasons why the Cubs are so frequently bad, the Rays so often good, and the Orioles so frequently disappointing?

One of the few consistently sensible discussions of this is unfolding at Baseball Prospectus, whose team is crafting a series on Every Team’s “Moneyball” — the hypothetical edge that each team apparently pursues. For example, Atlanta drafts shortstops. Well, everyone drafts shortstops, because it’s the position that requires the greatest talent: every future major-leaguer who is right-handed starts out as the shortstop and cleanup hitter of his neighborhood team. But the Braves emphasize players who can stick at shortstop; that’s interesting. The Diamondbacks emphasize independent leagues. The Pirates emphasize Korea. Interesting.

March 31, 2015 04:07 PM

Press

Infamous · Thoughtless · Careless · Reckless

but it still must be said that his inflammatory and erroneous description of the situation is what caused all this nonsense in the first place. – Jimbo Wales, Wikipedia Chairman Emeritus

Select Commentary: The Guardian · Gawker · PandoDaily · The Mary Sue · Wil Wheaton · Der Standard · de Volkskrant · Dr. Clare Hooper · P. Z. Myers · FayerWayer · Think Progress · Stacey Mason · The Verge · Heise · De Verdieping Trouw · Prof. David Millard · Wired.de · KIRO-FM Seattle (starts at 10:00) ❧ TechNewsWorld · Washington Post · Prismatic · SocialText · Neues Deutschland · Vice · Europa (Madrid) · El Fichero · Bust · Daily Orange · Overland · ArCompany · Think Progress

Good cause: App Camp For Girls. (Donations to Wikipedia are counter-productive. Give instead to a charity that assists women in computing, or victims of online harassment, not to punish Wikipedia but to repair the damage. App Camp For Girls has already raised $1200 from former Wikipedia donors; do tell them why you’re giving.)

March 31, 2015 03:51 PM

Dave Winer

This is an April Fool-free zone

There are no joke posts here, and there won't be.

I'd love to see other blogs and news sites make that statement.

See also: This is not an April Fool joke.

March 31, 2015 03:42 PM

What's the best JavaScript editor?

On March 23, I posted a short-term roadmap for MyWord Editor, including this:

The editor is a plain pre-HTML5 <textarea>. There are lots of great projects underway to do beautiful full-featured text editing in JavaScript. I did a survey of them last week, and have reached out privately to some of the authors of these tools. I want to get great text editing in MWE. But first I wanted to get the open source release out there.

The editor is now released, people have successfully gotten their own servers running, and yesterday I released the first version of templating. I'd say the train is rolling.

What's the best text editor?

Now I'd like to swing back to the question of text editing.

There are lots of editors to choose from. They all presumably have advantages and disadvantages. I'd like to host a discussion here. What would be the best editor to use?

Discussion

The two I looked at most closely were medium.js and Dante.

The genre of editor that makes the most sense is one that's trying to mirror the Medium feature set. That's what both these editors aim to do.

But there are other drop-in editors written in JavaScript. I wanted to cast a net as wide as possible to see what would come back.

This is a decision lots of other people have had to make.

For example, what does Ghost use? I assume WordPress wrote their own? Did they? Tumblr?

They all had to make the same decision.

March 31, 2015 03:38 PM

Blue Sky on Mars

Pretty Unmistakeable

She snapped into existence.

She knew she had a purpose; that much was clear. What for, well… that was still a mystery.

She took stock of what she had to work with. The only thing she recognized, really, was the fresh coffee… pretty unmistakeable. She took it eagerly and gave it a small initial test, compelled towards the hope that the experience of consuming it would somehow spark a memory, reminding her why she was here.

It stirred something. A wave of nostalgia hit her, refusing to yet coalesce into something tangible she could steady herself on.

She continued with the coffee. It all came back to her with a rush.

Creation. I’m here to create.

It was that feeling of purpose again. That her mandate was to create something important, something for someone else. The Creators were the ones chosen to elevate the work to the one who matters most. To the one they loved the most.

The coffee was almost finished. She beamed with the pride that only those lucky enough to be endowed with purpose and drive can truly understand.

She knew exactly what she now longed to complete. She was thrilled about her little process. It would be her life’s work.


“COCKSUCKER!”, the dev said, hands taking the keyboard from the paired partner. “Nah I don’t think that’ll even compile — my bad, we need to rework this whole function to not suck as much ass. Lemme just fix this and… cool, see the CoffeeScript’s already been rebuilt and boom, that shit passes let’s go to lunch fuck JavaScript amiritelol.”


She was nowhere to be found, of course. She had been summarily killed and replaced with a suitably disposable Creator. Her screams of agony briefly echoed, but they too would quickly fade away. Her last tranquil thought before the searing pain ripped through her had been a fantasy, really: she imagined how pleased The Developer would be once they saw what she had built for them.

She smiled and considered it the most pleasant thought she had ever thought.

March 31, 2015 12:00 AM

March 30, 2015

Tim Ferriss


The Tim Ferriss Show with Amanda Palmer

“Work with the man when the man can help you make your art…”
- Amanda Palmer

“When in doubt, remember: At the end of the day, you get to do whatever the fuck you want.”
- Amanda Palmer

My guest this episode is Amanda Palmer, who first came to prominence as one half of the internationally acclaimed punk cabaret duo The Dresden Dolls.

Many of you have no doubt seen her surprise hit TED presentation, “The Art of Asking,” which has been viewed more than 6 million times. But her story goes much deeper, and, in this conversation, we delve into her routines, habits, creative process, relationships, business models, and more.

Her new book is aptly titled The Art of Asking: How I Learned to Stop Worrying and Let People Help.  I finished 50% and fundamentally upgraded my life in an afternoon of asking for help.  It’s one hell of a read.

Amanda is also widely known as “The Social Media Queen of Rock-N-Roll” for her intimate engagement with fans via her blog, Tumblr, and Twitter (1,000,000+ followers), and has been at the vanguard of using both “direct-to-fan” and “pay what you want” (patronage) business models to build and run her business. In May of 2012, she made international news when she raised nearly $1.2 million pre-selling her new album, Theatre is Evil, which went on to debut in the Billboard Top 10 when it was released in late 2012.

We get into it all, including war stories, cursing, and meditation techniques. It’s a fun ride. Enjoy!


Want to hear another podcast with a world-class musician? — Listen to my conversations with Justin Boreta of the Glitch Mob or Mike Shinoda of Linkin Park. In the former, we discuss his meditation practice, morning routines, and creative process (stream below or right-click here to download):


This podcast is brought to you by 99Designs, the world’s largest marketplace of graphic designers. Did you know I used 99Designs to rapid prototype the cover for The 4-Hour Body? Here are some of the impressive results.  Click this link and get a free $99 upgrade.  Give it a test run…

QUESTION(S) OF THE DAY: Have you ever overcome the fear of asking for help? Please share stories or examples in the comments!

Scroll below for links and show notes…

Show Notes

  • The origin story behind the alias Amanda Fucking Palmer [4:40]
  • On the creative process behind The Art of Asking [7:40]
  • Simplification and honing in art and life [19:10]
  • Lessons learned as The 8-foot Bride [25:10]
  • What separates a good living statue from a great living statue [30:40]
  • Advice for effective use of eye contact [33:25]
  • Amanda Palmer’s meditation techniques [42:25]
  • Most gifted books [49:50]
  • Who is the first person you think of when you think of “successful”? [55:50]
  • Common misconceptions about Amanda Palmer [1:02:10]
  • Why the Amanda Palmer fan base is so dedicated [1:09:40]
  • Lessons learned from the rebellion against Roadrunner Records [1:17:25]
  • If Amanda had to choose one online tool, which would it be and why? [1:25:40]
  • The dynamics of having two creatives in one household [1:26:55]
  • Rapid-fire questions: Drinks in Boston and advice for her 20-year-old self [1:34:10]
  • Thoughts on flying solo or working with “the man” [1:37:25]

by Ian Robinson at March 30, 2015 08:01 PM

Mark Bernstein

Wiki Weather

Wiki weather has been strangely unsettled in recent days. Last week, a heated argument over hidden comments in an information box at Sir Laurence Olivier — you couldn’t make this stuff up! — led to a tempest, an Arbcom filing, many angry words, and the forced resignation of administrator Dreadstar.

On his way out the door, Dreadstar lifted my own topic ban. He had every right to do this, since it was his mistake in the first place. He also lifted a block which depended on the ban, which quite possibly exceeded his authority. I have expressed the opinion that this block was also an error, but it might not have been his error to fix. You will not hear me complain.

Some commentators conjectured that, having found himself on the unaccustomed end of the +5 Mop of Blocking, Dreadstar sought at last to clean things up and restore everything to its proper place before he left. Others suppose that, in fury at his bad treatment by Wikipedia, he removed sanctions against the dread Mark Bernstein in the hope that hilarity and trouble would ensue. It may be the case that we cannot know the true state of affairs, as we do not choose to make windows into men’s souls.

One result was a formal Arbcom motion that cites as evidence against Dreadstar my own complaint at being called a “motherfucker” by a Wikipedia official in response to my perfectly sensible query. That’s cute because it turns out that Wikipedia has a catch-22 that prohibits admins from sending out abusive emails, but also prohibits anyone from reporting that they have done so, except privately to Arbcom — and this just happened to drop right into Arbcom’s lap and voila! there the handy evidence happened to be, because I didn’t know any better.

Arbcom has not expressed their gratitude.


So having at least a furlough, I thought it would be churlish not to do something useful. So I trekked over for a look at the biography of Martin Fowler, author of the influential book on Refactoring. This is far, far from GamerGate, and the proper disposition of object methods does not appear to be a gender-related controversy. What could possibly go wrong?

Diderot’s Darling Dingbats! Edit war! Some anonymous account has appeared – what a coincidence! – eager to fight over whether the influential status of Refactoring is contentious. Also, is Refactoring important to Agile practices, or only to Test Driven Development? So now we’re pulling out references and piling up citations for stuff that everyone knows, because some fellow wants to waste a lot of our time, or perhaps wants to settle some score or other. Or perhaps it’s just one of my banned and blocked GamerGate pals using an anonymous account to burn people’s time and effort.

I can’t wait to hear what Sea Lions Of Wikipedia makes of this.

March 30, 2015 02:49 PM

Dave Winer

This is not an April Fool joke

We're coming up on that awful day on the web when anything you read might be a joke. The jokes are never funny, usually they say something bad about someone the author doesn't like. Haha just a joke.

It's pretty horrible for a reader. All this week, every article with a seemingly sensational headline, will have to be checked to see if it might be a joke. Here's the first one I was bit by today.

BGR: Meerkat is dying – and it’s taking U.S. tech journalism with it.

Is it real? I think it is. There don't seem to be any obvious lies in the piece. But the conclusion is way over the top. Nothing is dying, that's pure hype. And if US tech journalism is dying, it's been happening for a long time, Meerkat could hardly be the cause.

I don't think it's funny, I don't think news orgs should play this game. It's as if a bank decided one day, as a joke, to take all the money from your account and give it to someone else. Haha it's a joke.

Imagine going to GMail on April 1, and finding someone else's email there instead of yours.

The press only has one product, facts. They twist things, every day, to make the news seem more interesting or important than it is, but one day of the year, this Wednesday, they outright lie. Not in every piece, so you have to always be on the alert. It's as if an airline deliberately crashed a random plane into a mountain as a joke. The goal of a news organization is to inform. On April 1, they crash instead.

If you're a news organization, on March 30, preparing your annual joke, how about doing everyone a favor and skipping it this year.

March 30, 2015 02:43 PM

The purpose of the Internet

Facebook thinks the purpose of the Internet is to be The Matrix. A sort of lifeboat or ark for people whose lives must consume no more than a watt or two of electricity and some protein slurry.

I propose that the purpose of the Internet is to create a place for people of intellect to work together to create a greater consciousness for the species, so we can make the changes we need in order to survive, with some purpose beyond mere existence.

So I think the purpose of the Internet is to save our species from self-destruction.

It's the only tool we have whose purpose is not yet fully decided.

And survival is the only problem we have that we must solve.


March 30, 2015 01:54 PM

Giles Bowkett

What If Uber's Just A Terrible Business?

Sometimes an industry which is very tightly regulated, or which has a lot of middlemen, has these obstructions for a reason.

Working for any taxi service holds enormous opportunities for kidnappers or rapists, and makes you an easy target for armed robbery. In some cities, being a cab driver is very dangerous. In other cities, it's dangerous to take a cab anywhere - don't try it in Argentina - and in many cities it's dangerous for both the driver and the passenger. A taxi service with built-in surveillance could be even worse in the hands of a total maniac. And just to state the obvious, both Uber drivers and regular taxi drivers have logical incentives for driving unsafely.

In this context, I have no problem with governments wanting to regulate Uber literally to death, at least outside of California. California is a special case, in my opinion. Cabs are completely unacceptable garbage throughout California, and the startup scene definitely skews Californian, so the typical startup hacker probably thinks all cabs suck, but cabs are gold in London, Chicago, and New York, and probably quite a few other places too.

In Los Angeles, cabs are licensed by city. A cab driver can only operate in one "city," and can face disastrous legal consequences if they pick up a passenger outside of their "city." But the term's misleading, because the "city" of Los Angeles is really a massive archipelago of small, loosely affiliated towns. It's extremely common that a taxi cab will be unable to pick up any passengers once it drops somebody off. I lived in San Francisco before I lived in Los Angeles, and I had always assumed Bay Area taxis were the worst in the world, but Los Angeles cabs make San Francisco cabs look competent.

(In the same way that San Francisco's MUNI system makes LA's Metro look incredible, even though both suck balls compared to the systems in New York, London, and Chicago. I'm told the system in Hong Kong puts even London's to shame.)

By contrast, in London, you have to pass an incredibly demanding driving and navigation test before you can become a cab driver. Researchers have demonstrated that passing this test causes a substantial transformation in the size of the cab driver's brain, in the regions responsible for maps and navigation.

Uber is obviously a disruptive startup, but disruption doesn't always fix things. The Internet crippled the music industry, but introduced stacks of new middlemen in the process. Ask Zoe Keating how that "democratization" turned out.

In music, the corporate middlemen are this pernicious infestation, which just reappears after you thought you'd wiped it out. Artists have to sacrifice a lot to develop their art to a serious level, and a lot of music performance takes place in situations where people are celebrating (i.e., drunk or high, late at night). So somebody has to provide security, as well as business sense. There's a lot of opportunities for middlemen to get their middle on, and disrupting a market, under those conditions, just means shuffling in a new deck of middlemen. It doesn't change the game.

In the same way that the music industry is a magnet for middlemen, the taxi industry is a magnet for crime. A lot of people champion Uber because it bypasses complex regulations, but my guess is that disrupting a market like that means you just reshuffle the deck of regulations. If Uber kills the entire taxi industry, then governments will have to scrap their taxi regulations and re-build a similarly gigantic regulatory infrastructure around Uber instead. You still have to guard against kidnapping, rape, armed robbery, unfair labor practices, and unsafe driving. None of those risks have actually changed at all. Plus, this new regulatory infrastructure will have to deal with the new surveillance risks that come along with Uber's databases, mobile apps, and geotracking.

Uber's best-case scenario is that the society at large will pay for the consequences of their meager "innovation." But if you look at that cost realistically, Uber is not introducing a tremendous amount of new efficiency to the taxi market at all. They're just restructuring an industry so that it offloads, to taxpayers, the costs and complexities of a massive, industry-wide technology update.

Only time will tell for sure, but I think Uber is a really good lesson for entrepreneurs in how a market can look awesome but actually suck. From a long-term perspective, I don't see how they can hope for any better end-game than bribing corrupt politicians. While that is certainly a time-honored means of getting ahead, and it very frequently succeeds, it's not exactly a technological innovation.

by Giles Bowkett (noreply@blogger.com) at March 30, 2015 12:11 PM

Fog Creek

dev.life – Interview with Leah Culver


In dev.life, we chat with developers about their passion for programming: how they got into it, what they like to work on and how.

Today’s guest is Leah Culver, Developer Advocate at Dropbox. Leah is an iOS and Python developer, who previously co-founded Pownce, Convore, and Grove. She also co-authored the OAuth and OEmbed open API specifications. Leah writes about open source, APIs, and Django on her blog.

Leah Culver
Location: San Francisco, US
Current Role: Developer Advocate at Dropbox

How did you get into software development?

My family got our first computer, an Apple II, when I was in second grade and my first personal computer was an iMac that I got for my 18th birthday. It was so expensive that it was both my birthday and Christmas gift from my parents!

I started making websites when I was in high school with very simple HTML and free hosting sites like Angelfire and GeoCities. Later, in college, I took a class on JavaScript, fell in love with programming, and majored in Computer Science. It’s very funny to me that JavaScript is cool now.


Tell us a little about your current role

My current job is pretty wild and fun. I’m doing different things every day. I write API documentation, blog posts for our developer blog, and give talks about the Dropbox APIs. I also help organize meetups and sponsor hackathons. As for coding, I make sample apps for the Dropbox APIs and in my free time I work on Dropbox’s internal company directory.

Last week was our team’s Hack Week at Dropbox. During Hack Week, Dropbox employees can hack on anything they like. My team hacked together a new feature for our Dropbox for Business administrator console, which was something I had never worked on before. It was a challenge to work my way through Dropbox’s enormous codebase to find what I needed. I ended up emailing our Dropbox for Business team who helped point me in the right direction. Everyone at Dropbox is really helpful and friendly! The hack turned out really well too. Phew!

When are you at your happiest whilst coding?

I like to plan ahead. I’m happiest when I have a solid plan and I’m working towards completion.

What is your dev environment?

At work, I have a MacBook Air and a nice large Apple monitor. I keep a separate MacBook Air at home, and it's really nice not to have to carry a computer back and forth.

I use Xcode for all my iOS development and Atom for everything else. I don't use any special plugins though. I'm pretty boring. I like a simple text editor and a command line.

I can’t live without 1Password and Dropbox for storing all my passwords and files. Like I mentioned earlier, I don’t carry around my computer so I rely on cloud services to keep everything in sync for me. As for desktop apps, I regularly use Adium for chat and the Tower git client because I appreciate visual diffs. On my iPhone, I’m obsessed with trying new apps. I regularly use Mailbox, Sunrise, and Clear to keep my life organized. I track my physical activity with Moves and Nike+ and do photo stuff on Instagram and Carousel.

I drink a lot of diet soda and eat a lot of junk food while coding. Lately, I’ve been working from home on some personal projects and have been watching music videos on Vevo on my Apple TV.

What are your favorite books/resources about development?

I regularly read NSHipster and check out Hacker News. I’m subscribed to the email digest for Product Hunt, GitHub Explore, and Nuzzel to keep an eye out for cool new tech.

What technologies are you currently trying out?

I will be the first in line to buy an Apple Watch. I love small things. I haven’t bought the latest iPhone yet (too big!) but can’t wait for the watch.

When not coding, what do you like to do?

My hobby is running (well, really, jogging) which is nice because it gets me outdoors and away from my desk.

What advice would you give to a younger version of yourself starting out in development?

“Just copy and paste the error message into Google.”

 

Thanks to Leah for taking the time to speak with us. Have someone you’d like to be a guest? Let us know @FogCreek.

 

Previous dev.life Interviews

Jared Parsons
Salvatore Sanfilippo
Hakim El Hattab
Phil Sturgeon

by Gareth Wilson at March 30, 2015 11:15 AM

John Udell

Wanted: Easy database app dev tools for the Web

A friend of mine needs a simple database application to support his team. It's fairly straightforward: a handful of tables, forms, and queries. The team is globally distributed, so of course this needs to be a database-backed Web application. He's most comfortable with Python, so he's been coding it up in Django. You probably know somebody who's doing the same thing in Python with a different Web application framework or others using Ruby on Rails, ASP.Net, or a JavaScript-centric framework like Angular.js. There are a million ways to skin the cat.


by Jon Udell at March 30, 2015 10:00 AM

March 29, 2015

Dave Winer

As usual Seth is right

In this piece Seth Godin says that your negative internal voice is a permanent fixture. Nothing you can do can get it to leave you alone. But there is an answer. Surround that voice with lots of love, it's the perfect antidote.

I learned how to do this myself a number of years ago.

I would drop something, and in the instant after, as it's falling, my dark inner voice would judge me. "Idiot!" it would say. Before I learned to challenge the voice, something I never dared do as a child, it would take over. The idea would fester and bloom and become other kinds of negativity.

Instead, as soon as I regain my composure, I call up another voice, my inner adult loving voice. "I love David very much," the voice would say, firmly, almost fiercely, "and he is not an idiot, he's a very smart, good, nice person, and I want you to stop saying that about him."

It works, I'm happy to report. The dark voice is a coward. It only picks on little kids. Confronted with a powerful adult it slinks off to hide until the next moment of weakness.

I had a teacher who showed us how to do this. There were exercises that included punishment for the dark voice. I would lock mine in a bathroom in the basement of the house I grew up in, and make him pour a bowl of spaghetti on his own head. In my imagination, as I locked the door I'd tell him to stay there and think about it a while.

March 29, 2015 07:39 PM

12-minute podcast

A short podcast on the future of everything, The Matrix, Facebook, virtual reality, empathy, getting real for once in our species.

March 29, 2015 05:04 PM

Data mining on Twitter favorites

These days people do a lot more Favoriting than RTing.

Seems to me there should be some system built on Favoriting in Twitter.

A new tab where I see all the most favorited tweets from people I follow?

March 29, 2015 04:24 PM

The Internet of Internets

Backlash85: "How would you feel about an alternative to the current internet?"

I love this idea but I'm not sure what it means.

Maybe, like a new namespace?

Start over with a parallel .com and .net?

I really like the idea.

TCP/IP over IP.

Internet of Internets!

We didn't like the way this one turned out.

Start a new one!

New rules.

March 29, 2015 04:02 PM

Great thing about software

In software you can "drop it in and see what happens" and if it explodes into a billion pieces, you know you have to go back and re-think it. I keep having to remind myself that software is different. There's zero cost to an explosion, if you have a backup copy of everything of course.

March 29, 2015 03:17 PM

Lambda the Ultimate

The Next Stage of Staging

The Next Stage of Staging, by Jun Inoue, Oleg Kiselyov, Yukiyoshi Kameyama:

This position paper argues for type-level metaprogramming, wherein types and type declarations are generated in addition to program terms. Term-level metaprogramming, which allows manipulating expressions only, has been extensively studied in the form of staging, which ensures static type safety with a clean semantics with hygiene (lexical scoping). However, the corresponding development is absent for type manipulation. We propose extensions to staging to cover ML-style module generation and show the possibilities they open up for type specialization and overhead-free parametrization of data types equipped with operations. We outline the challenges our proposed extensions pose for semantics and type safety, hence offering a starting point for a long-term program in the next stage of staging research. The key observation is that type declarations do not obey scoping rules as variables do, and that in metaprogramming, types are naturally prone to escaping the lexical environment in which they were declared. This sets next-stage staging apart from dependent types, whose benefits and implementation mechanisms overlap with our proposal, but which does not deal with type-declaration generation. Furthermore, it leads to an interesting connection between staging and the logic of definitions, adding to the study’s theoretical significance.

A position paper describing the next logical progression of staging to metaprogramming over types. Now with the true first-class modules of 1ML, perhaps there's a clearer way forward.

March 29, 2015 01:34 PM

Dave Winer

Empathy

Thanks to Dean Hachamovitch for the pointer.

March 29, 2015 01:08 PM

March 28, 2015

Dave Winer

Who drives you?

Silicon Valley proposes to drive our cars for us. I guess that's okay.

Silicon Valley theorizes about putting our minds in software containers and storing them in computer memory, to be woken up when there's something for us to do, or think about, or perhaps see.

But what would there be to see or do or think about in a world where everyone else is in a computer's memory? Have you seen many beautiful megabytes recently? Honey come look at the glorious pixel I just found. Of course you'll only be able to "see" it from the other side of the glass, looking out.

Maybe we'll be able to hear each others' thoughts?

There's a watch coming. But it will be a short-lived product, because soon we won't have wrists to put it on.

So in a world where we're woken up to see or think, or do something, but there's nothing to see or do, or think about, I guess there won't be any reason to wake up.

When it's all said and done, we might just end our species because we ran out of things to think about or do or see. Something to think about. Or do something about.

PS: I think this whole upload-your-brain thing is a trick. Once they get you up there, they'll just turn the computer off and that's that.

March 28, 2015 03:19 PM

Paul's Pontifications

Google Maps on Android demands I let Google track me

I recently upgraded to Android 5.1 on my Nexus 10. One app I often use is Google Maps, which has a "show my location" button. When I clicked on it, I got a dialog box with just two options: I either agree to let Google track me, or I cancel the request. There is no "just show my location" option.

As a matter of principle, I don't want Google to be tracking me. I'm aware that Google can offer me all sorts of useful services if I just let it know every little detail of my life, but I prefer to do without them. But now it seems that zooming in on my GPS-derived location has been added to the list of features I can't have. There is no technical reason for this; it didn't used to be the case. But Google has decided that as the price for looking at the map of where I am, I now have to tell them where I am all the time.

I'm aware that of course my cellphone company knows roughly where I am and who I talk to, and my ISP knows which websites I visit and can see my email (although unlike GMail I don't think they derive any information about me from the contents), and of course Google knows what I search for. But I can at least keep that information compartmentalised in different companies. I suspect that the power of personal data increases non-linearly with the volume and scope, so having one company know where I am and another company read my email means less loss of privacy than putting both location and email in the same pot.

 Hey, Google, stop being evil!

by noreply@blogger.com (Paul Johnson) at March 28, 2015 02:59 PM

Dave Winer

Silo-free is not enough

In a comment under my last piece, Drew Kime asks a question that needs asking.

"What is it about WordPress that you see as siloed?"

The answer might surprise you. Nothing. WordPress is silo-free.

  1. It has an open API. A couple of them.

  2. It's a good API. I know, because I designed the first one.

  3. WordPress is open source.

  4. Users can download their data.

  5. It supports RSS.

"So, if WordPress is silo-free, there must be something about MyWord that makes it worth doing, or you wouldn't be doing it," I can imagine Mr Kime asking, as a follow-up.

Silo-free is not enough

This question also came up in Phil Windley's latest MyWord post, where he introduced the editor with: "I can see you yawning. You're thinking 'Another blogging tool? Spare me!'" He went on to explain how MWE is radically silo-free.

But there's another reason I'm doing the MyWord blogging platform, which I explained in a comment.

MyWord Editor is going to be competitive in ease-of-use and power with the other blogging systems. The reason to use it won't be the unique architecture, for most people. It'll be that it's the best blogging system. This is something I know about, and I'm not happy with the way blogging tools have evolved.

The pull-quote: "I'm not happy with the way blogging tools have evolved."

Imagine if you took Medium, added features so it was a complete blogging system, made it radically silo-free, and then added more features that amount to some of the evolution blogging would have made during its frozen period, the last ten years or so, and you'll have a pretty good idea of what I want to do with MWE.

Blogging is frozen

There haven't been new features in blogging in a long time. Where's the excitement? It looks to me like there's been no effort made to factor the user interface, to simplify and group functionality so the first-time user isn't confronted with the full feature set, left on his or her own to figure out where to go to create a new post or edit an existing one. Blogging platforms can be both easier and more powerful, I know because I've made blogging platforms that were.

Basically I got tired of waiting for today's blogging world to catch up to 2002. I figured out what was needed to win, and then set about doing it.

I have no ambition to start a company. I like to make software. I'm happy to keep going as we have been going. I think JavaScript is a wonderful platform, with apps running in the browser, and in Node.js on the server. It's a more fluid technology world today than it was in 2002 the last time I shipped a blogging platform.

I would also happily team up with companies who think this is a great opportunity. That blogging has stagnated too long, and that there must be ways to reinvigorate the market. I don't like taking shots at WordPress, but honestly I think it's stuck. I'm happy to talk with entrepreneurs who have ideas on how to create money-making businesses from an exciting user-oriented, user-empowering, radically silo-free open source blogging platform! Blogging needs a kick in the butt. I propose to give it one. With love. :kiss:

Anyway, to Drew, thanks for asking the question. I doubt if you were expecting this much of an answer, but the question needed asking, and I wanted to answer it.

PS: Here's an idea of what a MyWord post looks like.

March 28, 2015 12:58 PM

March 27, 2015

Dave Winer

"Radically silo-free"

Over on Facebook and on Twitter I posted a thought, that software could be "radically silo-free."

David Eyes asked what it means. I referred him to my blog, and said "scroll to the bottom and then read up." That's because this idea is so fresh that I hadn't yet written a post explaining it. I thought I probably should.

First mention

First, I said I was going to hold up the release of MyWord Editor because I wanted it to be silo-free from the start. Then I spent a week doing that. While that was happening, I made a list of things that would make software silo-free, and I did all of them. I wanted to consciously, mindfully, create something that perfectly illustrated freedom from silos, or came as close as I possibly could to the ideal. In that post I defined the term.

"Silo-free" means you can host your blog, and its editor, on your domain. I may offer a service that runs the software, but there will be no monopoly, and it will be entirely open source, before anything ships publicly.

Second mention

In the post announcing the open source release of MyWord Editor, I included a section listing the ways it was silo-free, fleshing out the idea from the earlier post. And from that list comes my definition.

  1. Has an open API. Easily cloned on both sides (that's what open means in this context).

  2. It's a good API, it does more than the app needs. So you're not limited by the functionality of MyWord Editor. You can make something from this API that is very different, new, innovative, ground-breaking.

  3. You get the source, under a liberal license. No secrets. You don't have to reinvent anything I've done.

  4. Users can download all their data, with a simple command. No permission needed, no complex hoops to jump through. Choose a command, there's your data. Copy, paste, you've just moved.

  5. Supports RSS 2.0. That means your ideas can plug into other systems, not just mine.

There may be other ways to be silo-free. Share your ideas.

Why it's "radical"

In that post I explained why the software was "radical".

These days blogging tools try to lock you into their business model, and lock other developers out. I have the freedom to do what I want, so I decided to take the exact opposite approach. I don't want to lock people in and make them dependent on me. Instead, I want to learn from thinkers and writers and developers. I want to engage with other minds. Making money, at this stage of my career, is not so interesting to me. I'd much rather make ideas, and new working relationships, and friends.

I guess you could say I believe there are other reasons to make software, other than making money. Some people like to drive racecars when they get rich. What I want is to drive my own Internet, and for you to drive yours too.

March 27, 2015 06:21 PM

Fog Creek

Mini and Tagged Logging, Open Source Go Packages – Tech Talk

 

In this Tech Talk, Stephen, a developer here at Fog Creek, talks about two Go packages, Mini and Logging, that we recently open-sourced. He describes their functionality and provides a few examples of their use.

Find them both on GitHub: Mini and Logging.

 

About Fog Creek Tech Talks

At Fog Creek, we have weekly Tech Talks from our own staff and invited guests. These are short, informal presentations on something of interest to those involved in software development. We try to share these with you whenever we can.

 

Content and Timings

  • Introduction (0:00)
  • Mini (0:39)
  • Logging (3:30)

 

Transcript

Introduction

All I wanted to talk about today was a couple of libraries that we open-sourced. These are two Go libraries that we wrote. The first package that we released is a little library that reads INI files, and the second one is a logging package. For the INI files, there really wasn't a good alternative when we started working on the two projects. The logging I wrote for two reasons: one, the Go logging is pretty limited; the second is that I wanted to provide this feature called tagged logging.

Mini – INI file reader package

Let me start with this package called Mini, which basically is a small library to read INI files. For people who don't know, an INI file is a really simple format. You can have comments, you can have key-value pairs, you can have sections. In the example that's on the slides now, there's a comment at the top with a semicolon. You can also do comments with hashtags. The bracketed [owner] marks a section called owner. name=John Doe is a key-value pair. As you can see on line eight, you can quote the values. What you're not seeing here: you can also do arrays of values. You can have 10 things, and the syntax for that is that you just put empty brackets after the name. So name followed by empty brackets would mean that there's an array of names. That format is documented online; I put a link to the Wikipedia article.
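
The slide itself isn't reproduced in this transcript, but the file he's describing appears to be the classic example from the Wikipedia article on INI files, which looks like this (a reconstruction, so his line numbers won't match exactly):

    ; last modified 1 April 2001 by John Doe
    [owner]
    name=John Doe
    organization=Acme Widgets Inc.

    [database]
    ; use IP address in case network name resolution is not working
    server=192.0.2.62
    port=143
    file="payroll.dat"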

The way it works in the library, you can load this configuration in from an INI file or from what's called a reader, which is just like a stream if you're a Java person. So you can create the reader any way you want. You get back either an error reading or an object that represents the configuration. I'm skipping the error handling code, so if you look at this, this code won't compile because I skipped the error handling. Then you can just go ask. You can ask for, give me a string from a section, and in the case of the file we're looking at, all the data is in sections, so I have to get it from a section. If I go back for a second: if you had put data on line one that was a key-value pair, it'd be global, and you wouldn't have to refer to it with a section.

The library also supports defaults, so if you look at this string-from-section call, you can put in a default. That's pretty much the whole API, which is pretty simple. There's one other thing I'm going to talk about in a second. I also wanted to add, the way people use this: Go has a really simple way to get packages, but it doesn't have a really good way to manage versions of packages. We've posted this up on GitHub, and people can use it just by typing go get and then the URL there, the path to the package. It's really easy to use the package, but if you want to update it, if you want to deal with managing these or having multiple versions, it's a little bit more of a pain.

The other thing we support is, if you have a structure, you can actually read the data from a section into the structure. That uses reflection to fill in the values in the structure to match whatever the values are in the data file, and that supports arrays, and ints and floats, and booleans and strings as well. That's a pretty simple little package. The logging package is more interesting.
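
Putting the pieces of that description together, usage would look roughly like the sketch below. The import path and the method names (LoadConfiguration, StringFromSection, DataFromSection) are assumptions reconstructed from the talk, not verified signatures; check the repository for the real API.

    package main

    import (
        "fmt"
        "log"

        "github.com/FogCreek/mini" // import path is an assumption
    )

    // Owner mirrors the [owner] section; reflection fills in the fields.
    type Owner struct {
        Name         string
        Organization string
    }

    func main() {
        // Load the INI file. The talk elides error handling; it's included
        // here so the sketch is complete.
        cfg, err := mini.LoadConfiguration("config.ini")
        if err != nil {
            log.Fatal(err)
        }

        // Ask for a value from a named section, with a default for when
        // the key is absent.
        name := cfg.StringFromSection("owner", "name", "unknown")
        fmt.Println("owner:", name)

        // Read a whole section into a struct in one call (hypothetical
        // helper name).
        var o Owner
        cfg.DataFromSection(&o, "owner")
        fmt.Printf("%+v\n", o)
    }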

Logging

The logging package is actually a pretty complete logging package. It has named loggers, it has the concept of a default logger. One of the things that I wanted to do was make it so that you could log really easily. In 99% of the cases, you just want to log to the same place. You don’t really need all of this concept of named logging.

There is this need to have different levels of logging, because you maybe don't want to spit out every message all the time; it can get excessive. We have log levels. There are five log levels. Four of them are pretty standard: ERROR, WARN, INFO, and DEBUG. One of them is less standard, which is VERBOSE, and I'll talk about that a little bit. You can have different formats for your log, you can have appenders, and it hooks in with the Go log package. It has a couple of things that are unique to this. One is, logging is all going to happen in the background. We're using goroutines to do logging, the concurrency feature. When you do a log message, all the processing of the log message happens in a separate goroutine, which could be in a separate thread or not. It's not happening within your own line of running code on the processor.
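
The package's internals aren't shown in the talk, but the shape of that design can be sketched in a few self-contained lines, assuming nothing about the real implementation: a buffered channel feeds one background goroutine, which is also where an append error would be captured and handed back on a second channel.

    package main

    import (
        "fmt"
        "time"
    )

    // message is one log entry handed off to the background goroutine.
    type message struct {
        level string
        text  string
    }

    // startLogger consumes messages in its own goroutine, so the caller
    // never blocks on formatting or I/O; append errors are reported on a
    // separate channel instead of being returned to the caller.
    func startLogger() (chan<- message, <-chan error) {
        msgs := make(chan message, 64)
        errs := make(chan error, 8)
        go func() {
            for m := range msgs {
                if _, err := fmt.Printf("[%s] %s\n", m.level, m.text); err != nil {
                    select {
                    case errs <- err:
                    default: // drop the error if nobody is listening
                    }
                }
            }
        }()
        return msgs, errs
    }

    func main() {
        logc, _ := startLogger() // the error channel can be read or ignored
        logc <- message{"INFO", "hello from the main goroutine"}
        time.Sleep(50 * time.Millisecond) // crude: let the logger drain
    }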

Which means that, if an error happens appending to a file, we can't give it right back to you. What we do is, we actually give that back to you through a channel, which you can ignore or read or whatever. The second thing that's interesting is, because we have log levels, you might have the situation where you say, "Okay, I'm logging everything at info, because the debug level is very verbose." I don't want to see all the log messages all the time. But if something bad happens, I may lose the ones that were important, because they are the ones that would have printed when something bad happened, but they didn't print because I didn't have the log level at the right place. What I implemented is this idea of a buffer: we actually save the last n log messages.

If you feel like you have the memory, you can set that buffer to a very large number. We default it to a couple of thousand, but you could set it very high if you wanted. If you catch the error soon enough and change the log level, which we can do in some of the services, it will replay the messages that are in the buffer, which means that you could go back and see what happened. That could get a little crazy, because if you go back and say, "Well, now I want to see all the debug messages," well, there could have been a million of those. The last feature that we have is this tagged logging. What tagged logging is, is this idea that every time you call log, you can pass in tags that somehow decorate the log message. The library lets you associate levels with tags.

As well as associating a level with the whole logging process itself, you can say something like, "if any message is logged with the tag HTTP, I want it to be allowed at this level," like say debug, but if it's logged with the tag RabbitMQ, either I just want to use the default or I want to log it at info, or something like that. Now you can filter not just by the logger itself, and maybe the named logger, but you can actually filter across, kind of perpendicular or orthogonal, based on what the message is related to, not just where the message is in the code. What's even cooler about this is, you can use tags like account ID. If a request comes in, say to the search service that we were doing, it automatically sets up a tag for the account ID. We could potentially turn on debug logging for that account, but not for all other accounts.

Which means, if a certain account is having a problem, we could debug that account without debugging everybody else. Moreover, if you combine that with the buffer, you can go back, if an account had a problem, and look at the errors or the log messages that happened for that account, without having to see everything that happened for everybody. I think the tagged logging is a really cool feature. Here's a simple example. We have this idea, when you import the package, it's going to be called logging; you could name it something else if you want. You can set a default log level. When you do that, it only logs things at that level and above. You can set a tag level, and when you do that, the tag level can override the default level.

Then it just has normal methods to log. We have formatted methods and non-formatted methods. We're hooked into the Go logging, so if you use the standard Go package, you can say, "I want all the standard Go things to log with these tags at this level." Then we have this buffer logging I mentioned, where you can set how big you want the buffer to be; it defaults to zero, and nothing gets buffered. Then, if things are logging and you later reset the log level to something lower, all the messages in the buffer are checked and printed out. Both the Mini package and the logging package are out on GitHub. We've already got some comments, we've already fixed a couple of bugs and added some new tests.
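
Since the talk describes the tag-level behavior without showing the package's API, here is a small self-contained sketch of the idea itself, not the real interface: a default level, per-tag overrides, and a message that prints when its level clears the threshold of any of its tags.

    package main

    import "fmt"

    // Levels, lowest to highest severity.
    const (
        VERBOSE = iota
        DEBUG
        INFO
        WARN
        ERROR
    )

    var levelNames = []string{"VERBOSE", "DEBUG", "INFO", "WARN", "ERROR"}

    // Logger filters by a default level, which per-tag levels can override.
    type Logger struct {
        defaultLevel int
        tagLevels    map[string]int
    }

    func New(level int) *Logger {
        return &Logger{defaultLevel: level, tagLevels: map[string]int{}}
    }

    // SetTagLevel lets one tag (a subsystem, an account ID) log at a lower
    // threshold than everything else.
    func (l *Logger) SetTagLevel(tag string, level int) { l.tagLevels[tag] = level }

    // Log prints the message if its level clears the threshold of any of
    // its tags, falling back to the default when a tag has no override.
    func (l *Logger) Log(level int, msg string, tags ...string) {
        threshold := l.defaultLevel
        for _, t := range tags {
            if tl, ok := l.tagLevels[t]; ok && tl < threshold {
                threshold = tl
            }
        }
        if level >= threshold {
            fmt.Printf("[%s] %v %s\n", levelNames[level], tags, msg)
        }
    }

    func main() {
        logger := New(INFO)                     // default: INFO and above
        logger.SetTagLevel("account-42", DEBUG) // but debug one account

        logger.Log(DEBUG, "cache miss", "account-7")   // suppressed
        logger.Log(DEBUG, "cache miss", "account-42")  // printed
        logger.Log(ERROR, "write failed", "account-7") // printed
    }

The account-ID trick he describes falls out of this directly: tag every message in a request with the account ID, and lowering that single tag's level turns on debug logging for just that account.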

by Gareth Wilson at March 27, 2015 11:45 AM

March 26, 2015

Dave Winer

The best frameworks are apps

The best software frameworks are apps that do things users care about.

Back in the 80s it was dBASE and then FoxBase. 1-2-3 had a macro language, it was weak, but it was widely used because 1-2-3 was so popular with users.

Today it's WordPress.

And Slack is doing interesting things with their APIs.

Twitter too, but that got kind of muddied-up.

The best one of all of course is JavaScript, a very bizarre language in a totally underpowered environment that reaches into every nook and cranny of the modern world. It's an awful environment, you'd never design one that worked that way, but the draw of all those users makes up for its sins.

Flickr had a wonderful API, still does, but Stewart left the house before it could really blossom as a community thing. See Slack, above.

Chatting with Brent Schlender the other day, I commented that Steve Jobs' politics and mine are exactly opposite. Jobs was an elitist, all his products were as Doc said in 1997, works of art, to be appreciated for their aesthetics. I am a populist and a plumber. Interesting that this dimension of software is largely unexplored. I hope our species survives long enough to study it.

BTW, when ESR saw XML-RPC he said it was just like Unix. Nicest thing anyone could ever say. When I learned Unix in the mid-late 70s, and studied the source code, I aspired to someday write code like that. So well factored it reads like its own documentation.

Today, I'm mainly concerned with getting some outside-the-silos flow going with people I like to read. If we get (back) there, I will consider it a victory.

March 26, 2015 07:47 PM

Mark Bernstein

Software Aesthetics: Cuteness

Exploring fringes of software aesthetics, I’ve been reading up on things that are not quite beautiful. I came across this in a book review by Adam Kirsch in next month’s Atlantic:

Niceness without goodness is cuteness.

This makes a certain sense; it explains, for example, why a toddler can easily be cute but is seldom beautiful. “Goodness” here is not a moral judgment, or not just a moral judgment: a horse with a hat might be cute, but a beautiful horse is a horse doing what horses do – running gracefully through a meadow, say.

How would this work with software?

We look at a clever Perl one-liner, decipher its meaning, and exclaim “Nice!” But one-liners seldom do much good: even when they do something useful, it’s probably better to use a few lines to explain what you intend. Perl one-liners are cute.

Gratuitous user interface polish does little or no actual good; it’s an expense, and it seldom produces much benefit. Often, this year’s marvel may be actively pernicious in a minute or two: remember Cordovan leather backgrounds? Gratuitous polish is cute.

Under the hood, it’s entirely possible to use language features in strange and esoteric ways. You can make C++ feel like a functional language. You can make C++ feel like Smalltalk. You can write little interpreters in C++ and do the real work in your own variant of LISP. Sometimes, this is elegant; often, it’s merely playing cute.

Games and fictions sometimes drop the mask (or the fourth wall) to attempt an arch and knowing address to the player. We’re in the middle of a complex and challenging city simulation, and suddenly notice that the factories all have silly names and make silly products, or have in-joke references to industry insiders. Archness in games and fiction is cute.

Almost all codewerk is cute.

Brent Simmons hit this nail on the head when he wrote about the problems of splitting classes that are too large and do too much into smaller, more focused objects. If you do this intelligently, you get better code. If you get carried away, you get a basket of bunny classes. Bunny classes are cute.

I realize that I’ve merely pushed most of the interesting questions into the matter of goodness. Small steps.

March 26, 2015 03:24 PM

Wikipedia And The Sea Lions

A fresh and funny study at Sea Lions Of Wikipedia of the wisdom of extending Gamergate sanctions to all gender-related disputes and controversies, plus anyone involved in gender-related disputes and controversies.

So you see, Sea Lion fans, if it’s a Bad Thing that happens largely to women — like Campus Rape and domestic violence — it’s automatically controversial!

There’s no chance that this will work; it can’t be supported rationally and it can’t be administered sanely. As far as I can make out, the plan is to ban trouble-makers (like me) left and right until everything is calm and civil, a plan excused because Gamergate is not terribly important in the great scheme of things.

The problem for Wikipedia is that gender-related issues and controversies are important. Wikipedia stumbles badly on many of them, in part because so much editing is performed by factions, cliques and trolls, and in part because so many Wikipedia editors just aren’t interested in gender beyond looking at pictures of pretty women who have misplaced their clothes.

March 26, 2015 03:12 PM

Giles Bowkett

Scrum Fails, By Scrum's Own Standards

I've raised some criticisms of Scrum and Agile recently, and gotten a lot of feedback from Scrum advocates who disagree.

My main critique: Scrum's full of practices which very readily and rapidly decay into dysfunctional variants, making it the software development process equivalent of a house which falls apart while people are inside it.

Most Scrum advocates have not addressed this critique, but among those who have, the theme has been to recite a catchphrase: "Inspect And Adapt." The idea is it's incumbent upon any Scrum team to keep an eye out for Scrum decay, and prevent it.

From scrumalliance.org:

Scrum can be described as a framework of feedback loops, allowing the team to constantly inspect and adapt so the product delivers maximum value.

From an Agile glossary site:

“Inspect and Adapt” is a slogan used by the Scrum community to capture the idea of discovering, over the course of a project, emergent software requirements and ways to improve the overall performance of the team. It neatly captures both the concept of empirical knowledge acquisition and feedback-loop-driven learning.

My biggest critic in this Internet brouhaha has been Ron Jeffries. Here's a sampling of his retorts:


Mr. Jeffries is a Certified Scrum Trainer and teaches Certified Scrum Developer courses.

In a blog post, Mr. Jeffries acknowledged that I was right to criticize the Scrum term "velocity." He then added:

For my part in [promoting the term "velocity," and its use as a metric], I apologize. Meanwhile you’ll need to deal with the topic as best you can, because it’s not going away.

He reiterates this elsewhere:

Mr Bowkett objects to some of the words. As I said before, yes, well, so do I. This is another ship that has sailed.

The theme here is not "Inspect And Adapt." The theme is "you're right, but we're not going to do anything about it."

This isn't just Mr. Jeffries being bad at handling disagreement. It's also the case with Scrum generally. I'm not the first or only person to say that sprints should be called iterations, or that "backlog" is a senselessly negative name for a to-do list, or that measuring velocity nearly always goes wrong. Scrum also has a concept of iteration estimates which it calls "sprint commitments" for some insane reason, and this terrible naming leads to frequent misunderstandings and misimplementations.

These are really my lightest and most obvious criticisms of Scrum, but the important thing here is that people who love Scrum have been seeing these problems themselves over the twenty years of Scrum's existence, and nobody has fixed these problems yet. In each case, all they had to do was change a name. That's a whole lot of inspecting and adapting which has not happened. The lowest-hanging fruit that I can identify has never been picked.

by Giles Bowkett (noreply@blogger.com) at March 26, 2015 08:57 AM

March 25, 2015

Mark Bernstein

Unbecoming

Pullman’s wonderful trilogy of His Dark Materials is the story of Lyra, a heroine who is a superlative liar. This stark, realistic novel is the story of a heroine who steals. She begins at an early age; her family is awful and so, without our noticing it, she steals herself a new family. She’s very good: people actually are touched by her appreciation of their stuff, so touched they sometimes thank her for caring enough to take things. She winds up in Paris, broke and unhappy, working as an expert restorer and dreading the day when her husband gets out of prison and comes to ask her for the life she stole.

March 25, 2015 06:01 PM

Dave Winer

Journalism must compete

My friend Jay Rosen asks for comments on Facebook's request that content providers give them access to the full text of their stories. A NYT report earlier this week said that Buzzfeed, the NY Times and National Geographic were among the first publications that had agreed to do this.

I've commented on this many times over the years. The news industry could have seen it coming, prepared, and already had a distribution system in place to close off the opportunity for tech. They didn't. That's still what they have to do. And it doesn't seem like it's too late, yet.

My advice

  1. Do the deal with Facebook. They have access to 1.4 billion people, that's huge. There's never been a news distribution system even remotely like it. How can you not try to use this system? It's as if you were a world-class skier and the Olympics asked you to compete. You of course would thank them for the honor, and go.

  2. At the same time, be part of an independent news system, one that's not captured by any company, that does what Facebook hopes to do. It's too soon to throw in the towel. The technology already exists to do this, easily. (Even the lowly Knicks show up to compete every night, in theory at least, and sometimes they embarrass the current world champs.)

  3. There's good reason to think the independent system will be much better than the Facebook-captured one, because it can offer things that no captive system can: independence and some measure of objectivity. Don't miss that Facebook has become a newsworthy entity. Expect this to develop over the years, as their audience grows to cover every person on the planet. No one can fully trust them. And you should trust the people, your readers, to know that. Offer something interesting and independent. It may never reach the size of Facebook, but it can be a sustainable and growing service.

  4. Partner with Twitter. Encourage them to support full content as Facebook is doing. That means relaxing the 140-char limit.

  5. Run your own river on what used to be your home page. The smart ones will point not just to stories on your site, but everywhere there's news. Include all the sources your people read. You can't compete with Facebook and Twitter with a system that only contains your stories. Stop thinking so linearly. People return to places that send them away. (A sketch of the river idea follows this list.)

  6. Expand. Have a goal to have twice as many sources reporting on your site every year. Accept that the roles for your current editorial people will change as they grow to lead teams, to teach large numbers of sources how to go direct to readers, with integrity. (Jay: Teach this new role in J-school.)

  7. You have to become more like Facebook, but please only the good high-integrity parts. No snickering. There are a lot of good people who work at Facebook, people who really care. Some of them used to work for you. Listen and learn from them. News is changing. Be the change.
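To make the river in item 5 concrete, here's a minimal sketch in JavaScript. It assumes the feeds have already been fetched and parsed into simple objects (the item shape here is hypothetical); the point is that a river is just a merged, reverse-chronological stream drawn from many sources:

// Merge items from many feeds into one newest-first stream.
// `feeds` is assumed to be an array of
// { title, items: [{ title, link, pubDate }] } objects.
function river(feeds) {
  var items = [];
  feeds.forEach(function (feed) {
    feed.items.forEach(function (item) {
      items.push({
        source: feed.title, // credit the origin, even when it's off-site
        title: item.title,
        link: item.link,
        when: new Date(item.pubDate)
      });
    });
  });
  items.sort(function (a, b) { return b.when - a.when; }); // newest first
  return items;
}

The merge itself is trivial; the editorial work is choosing which sources belong in the list.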

March 25, 2015 02:43 PM

March 23, 2015

Dave Winer

MyWord Editor is open source

Last week I said we'd wait to open up MyWord Editor for use by everyone until it was fully silo-free. Today the wait is over. We're ready to begin a journey that hopefully will add new life to the open blogging world.

A shot in the arm for the open web. A way for JavaScript developers to collaborate on a new fun project. A way to escape from the silos that threaten to turn us into commercial robots, consumers and promoters, when we aspire to be thinkers and doers.

https://github.com/scripting/myWordEditor

It's radical software

These days blogging tools try to lock you into their business model, and lock other developers out. I have the freedom to do what I want, so I decided to take the exact opposite approach. I don't want to lock people in and make them dependent on me. Instead, I want to learn from thinkers and writers and developers. I want to engage with other minds. Making money, at this stage of my career, is not so interesting to me. I'd much rather make ideas, and new working relationships, and friends.

Here's the plan

  1. I know MyWord Editor is not as good as it could be, as good as it will be, once we get the ball rolling. The editor is a plain pre-HTML5 <textarea>. There are lots of great projects underway to do beautiful full-featured text editing in JavaScript. I did a survey of them last week, and have reached out privately to some of the authors of these tools. I want to get great text editing in MWE. But first I wanted to get the open source release out there.

  2. It's pretty easy to get your own MWE instance up and running. I've included instructions on how to set it up in the readme. There's a mail list where people can help.

  3. I am operating a server myself, but please think of it as a demo. I do not want to be in the hosting business. Anything you post there could disappear at any time. The best way to use MWE as a blogging tool is to set up your own server, or pool your resources with other people to set up a server. Especially with free services like Heroku, it's very inexpensive to operate a server; it's also fun and enabling, and you're helping the web when you do it. Remember: silos are bad, even ones operated by people you like!

  4. I have tons of features I want to add. I have a huge set of debugged concepts from previous blogging systems I've done, dating back over 20 years. I'd like to add them all to MyWord. But first people have to use it. It's no fun to add features to a product no one uses.

  5. Remember, if the past is a guide, the tech press will not write about this. So if you want people to know, you'll have to tell them. Please spread the word. Let's make something great happen, all of us, working together, to build the web we want.

  6. If you believe you can fly, you can!

Silo-free

Here are all the ways MyWord Editor is silo-free:

  1. There's an open API that connects the in-browser app to the server. So you can replace the app. Or the server. Or both.

  2. Because there's an open API, you can build anything you want at either end. You're not limited by my vision of what's possible. Let a thousand flowers bloom.

  3. The app is provided in source, MIT license. So there are no secrets. And you can use my source as the starting point for your own editor.

  4. The server is provided in source, MIT license. No secrets, etc.

  5. The app has a command that downloads all your content in JSON, so you can move your data from one server to another, at any time. If any instance removes this command, alarms should ring out all over the land. It's your content, ladies and gentlemen, not theirs. (A sketch of what this looks like follows this list.)

  6. Of course every MyWord user has a great full-featured RSS 2.0 feed. We love RSS and it feeds us and we feed it, it's doing great, and anyone who disses it is a mean rotten silo-lover.
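As a sketch of the export command in item 5: the URL and /export route below are hypothetical (the readme documents the real command), but this is the shape of the move: pull your posts down as JSON, then push them to any other server.

// Pull all your posts out of an MWE server as JSON (Node).
// The hostname and /export route are made up for illustration.
var http = require("http");

http.get("http://myblog.example.com/export", function (response) {
  var body = "";
  response.setEncoding("utf8");
  response.on("data", function (chunk) { body += chunk; });
  response.on("end", function () {
    var posts = JSON.parse(body); // assumed: an array of post objects
    console.log("Exported " + posts.length + " posts, portable anywhere.");
  });
});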

Thank you Twitter

Twitter is doing a good deed, by allowing us to use their service for identity. They have an excellent API, and their servers are reliable. And I think they're fair about what they allow and don't allow.

We're not in any way trying to usurp their business. And if there's more good stuff out there on the web, that's more stuff for people to point to from their Twitter feeds. I use Twitter, so do a lot of other people.

March 23, 2015 03:32 PM

Fog Creek

dev.life – Interview with Phil Sturgeon


In dev.life, we chat with developers about their passion for programming: how they got into it, what they like to work on and how.

Today’s guest is Phil Sturgeon, Senior Software Engineer at Ride. A former PHP-FIG member, he founded PyroCMS and was a core contributor to CodeIgniter. He’s also the author of Build APIs You Won’t Hate and regularly blogs about software development.

Location: New York City, US
Current Role: Senior Software Engineer at Ride

How did you get into software development?

There was an online magazine at my secondary school, and they wanted students to contribute mini-sites for other students to read. There was a really, really bad games review website built with Microsoft Publisher on there, which was essentially like browsing around a PowerPoint presentation. It was awful. Like any 11-year-old child I was obsessed with video games, so I decided to learn how FrontPage worked so I could make a better one.

Eventually, another friend mentioned Dreamweaver and PHP, which stopped me crafting thousands of pages by hand or using iframes for content. This was the start of 15 years of learning, programming and messing about with the Internet. My parents had an IBM, with a 650 meg HDD and I think it was 8 megs of RAM. By no means as restrictive as some of the previous generations’ computers, with their Z86s and whatnot, but it was hardly a huge amount of fun.

So I started with PHP, JavaScript and MySQL like many, although I did try ASP for a while after my mum bought me the wrong book for my birthday. I was self-taught but then went to college at 17 to study computing formally. They taught us C++ and Prolog, which was great, but a lot of it was over-normalised SQL Server, Perl and HTML font tags.


Tell us a little about your current role

My career used to be heavily open-source based. I’d work on multiple projects which had a business model on top or sell consulting on the side of them. Over the last few years, I have been passing my responsibilities to other developers as I try to step away from that to focus on my full-time job.

Open-source used to take up a good half of my day. Answering questions, reviewing pull requests, seeking further information from bug reports, debugging, coding, documenting, etc. It was interesting work and it did a lot for my career, but with a full-time job, that work goes into evenings and lunch breaks and when it’s as much as I was doing, it can be a real time suck.

I used to work on CodeIgniter, FuelPHP and PyroCMS. They all have amazing new developers and owners now. I used to be on the PHP-FIG too, but I gave my seat to the new PyroCMS project lead.

This has meant I can now focus on getting my work done for Ride.com, which is a whole lot of HTTP API and service-based development. It is a lot of fun, and the distributed international team is an amazing bunch of people.

I am working full-time now as a Rails developer, for the first time since I spent a few months with it in 2010. I’ve used Ruby here and there for plenty of things and the syntax is insanely easy, but jumping through the hoops of Rails and learning its rather magical conventions can really be a time sap.

With lots of remote developers, getting things done when I run into a Rails roadblock can often be a productivity and mood killer. Luckily we hired an on-site developer with extensive Rails experience, so this got me going again.

This should serve as a reminder to try and avoid investing your time and efforts too heavily into a single language, framework or tool. With the majority of the last 15 years being spent in the PHP world, I have to try extra hard when working with tools like Rails or Django, even though smaller frameworks in the same languages seem to present no problems.

When are you at your happiest whilst coding?

When that test suite finally goes green after a day of spitting E’s in your face!

What is your dev environment?

It varies depending on the project. If something is incredibly complex and going on to a well-provisioned server, I’ll use Vagrant and Chef to make the dev/prod parity as tight as possible.

If, on the other hand, I am shoving things on Heroku I’ll just run a local server with $ php -S localhost:8000, $ rails s or whatever unicorn-type clone that language fancies using.

I use a MacBook Pro, with Atom, AtomLinter, editorconfig, rubocop and ruby-test. If Git didn’t exist and I had to go back to handling complex merges with SVN I would probably become a kayak instructor.

I actually started with a standing desk a month or so ago. I have a Varidesk, with a nice mat to stop my feet going dead. To keep life interesting I also have a balance board, and I’ve not face-planted into the desk or any co-workers just yet.

What are your favorite books about development?

The best programming book I have ever read is a soft skills book, but it’s something a lot more people should read. Team Geek is the name, and it made me think about a lot of stuff.

I’m also a huge fan of these two bloggers, Nikita Popov and Anthony Ferrara. They’re both PHP contributors who have had huge impacts on the language, and have some very intelligent things to say about PHP and programming in general.

What technologies are you currently trying out?

I’ve been building a few services in Go for the last few months and I’m really enjoying it. The simplicity and insanely strict type safety have led to a lot fewer bugs during development, and the built-in web server and async functionality have made the whole thing lightning fast.


When not coding, what do you like to do?

I’ve spent a lot of time during the last year booted out of the country I live in, but as soon as I got back I broke three ribs snowboarding. Clearly I should stick to skiing next winter, but when I’m all in one piece and I’m not in the office and there’s not a foot of snow on the ground, I’ll be out on my bike doing laps of Central Park or exploring the New Jersey mountains.

What advice would you give to a younger version of yourself starting out in development?

Diversify your skill set from the get go. Don’t try to be a jack of all trades, but try to avoid becoming type-cast or a one-hit wonder.

Learning multiple languages is easier now than it ever has been, with Codecademy and Treehouse making it obnoxiously easy to do. The more you learn, the more you start to understand programming concepts in general, instead of having one language’s idiomatic approach baked into your head forever.

 

Thanks to Phil for taking the time to speak with us. Have someone you’d like to be a guest? Let us know @FogCreek.

 

Previous dev.life Interviews

Brian Bondy
Jared Parsons
Salvatore Sanfilippo
Hakim El Hattab

by Gareth Wilson at March 23, 2015 11:37 AM

John Udell

Save FriendFeed! Why we need niche social networks

When Facebook announced its impending shutdown of FriendFeed, the chatter was predictable: It was the long-expected end of yet another failed social network. That doesn't begin to explain what FriendFeed's real value was, why that mattered, and how it might be recreated.

FriendFeed combined two major functions: group messaging and feed aggregation. Were it only a platform for group discussion it would still have been useful. Even today, that basic need isn't easy to satisfy in a lightweight and open way. But it was feed aggregation that made FriendFeed into something more: a user innovation toolkit, albeit one that was never well understood or fully exploited.


by Jon Udell at March 23, 2015 10:00 AM

March 22, 2015

Decyphering Glyph

Headcanon

My Castle headcanon has always been that, when they finally catch up with Mal (oh, and they definitely do catch up with him; the idea that no faction within the Alliance would seek revenge for what he’s done is laughable) they decide that they can, in fact, “make people better”, and he is no exception. After the service he has done in exposing the corruption and cover-ups behind Miranda, they can’t just dispose of him, so they want to rehabilitate him and make him a productive, contributing member of Alliance society.

They can’t simply re-format his brain directly, of course. It wouldn’t be compatible with his personality, and his underlying connectome would simply reject the overlaid neural matrix; it would degrade over time, and he would have to return for treatments far too regularly for it to be practical.

The most fitting neural re-programming they can give him, of course, would be to have him gradually acclimate to becoming a lawman. So “Richard Castle” begins as an anti-authoritarian man-child and acquiesces, bit by bit, to the necessity of becoming an agent of the enforcement of order.

My favorite thing about the current season is that, while it is already obvious that my interpretation is correct, this season has given Mal a glimmer of hope. Clearly the reprogramming isn’t working, and aspects of his real life are coming through.

They really can’t take the sky from him.

by Glyph at March 22, 2015 08:16 AM

March 21, 2015

Tim Ferriss


I’m not high in this picture, despite my appearance.

DISCLAIMER: DO NOT USE ANY DRUGS OR SUBSTANCES WITHOUT CONSULTING A MEDICAL PROFESSIONAL. THIS IS FOR INFORMATIONAL PURPOSES ONLY.

JAMES FADIMAN, Ph.D., did his undergraduate work at Harvard and his graduate work at Stanford, doing research with the Harvard Group, the West Coast Research Group in Menlo Park, and Ken Kesey. He is the author of The Psychedelic Explorer’s Guide.

Called “America’s wisest and most respected authority on psychedelics and their use,” Jim Fadiman has been involved with psychedelic research since the 1960s. In this episode, we discuss the immediate and long-term effects of psychedelics when used for spiritual purposes (high dose), therapeutic purposes (moderate dose), and problem-solving purposes (low dose). Fadiman outlines best practices for safe “entheogenic” voyages learned through his more than 40 years of experience: from the benefits of having a sensitive guide during a session (and how to be one) to the importance of the setting and pre-session intention.

We also discuss potential “training” using lucid dreaming techniques, and that’s just the tip of the iceberg.

Cautioning that psychedelics are not for everyone, Jim dispels the myths and misperceptions. He explains how — in his opinion — psychedelics, used properly, can lead not only to healing but also to scientific breakthroughs and spiritual epiphanies.


Plus, a bonus you might have missed — Sam Harris, PhD, on meditation, neuroscience, and psychedelics (stream below or right-click here to download):


This episode is brought to you by 99Designs, the world’s largest marketplace of graphic designers. Did you know I used 99Designs to rapid prototype the cover for The 4-Hour Body? Here are some of the impressive results.

QUESTION(S) OF THE DAY: If you couldn’t use drugs and wanted to experience some of the benefits discussed in this episode, what tools might you use? Please share and explore answers in the comments here.

Do you enjoy this podcast? If so, could you please leave a short review here? I read them, and they keep me going. Thanks for listening!

SHOW NOTES

These show notes were kindly provided by readers Spencer and Greg. Thanks, guys! There are two different versions, both pasted below. Be sure to also see the comments, which have great additional resources and links…

The Psychedelic Explorer’s Guide
2:45 what is micro-dosing
3:20 Albert Hofmann: https://en.wikipedia.org/wiki/Albert_Hofmann
4:45 LSD dose sizing
7:00 psychedelics are “anti-addictive”
8:25 duration of some psychedelics and micro-dosing
12:00 James Fadiman’s background
12:00 Richard Alpert AKA Ram Dass: https://en.wikipedia.org/wiki/Ram_Dass
18:20 Fadiman’s thesis at Stanford: behavioral change after psychedelic experiences
23:20 aspects of psychedelics that can contribute to overcoming addictions
23:35 ibogaine: https://en.wikipedia.org/wiki/Ibogaine
27:00 applications/similarities of different psychedelics
30:00 Alexander Shulgin: https://en.wikipedia.org/wiki/Alexander_Shulgin
and his books: PiHKAL: https://en.wikipedia.org/wiki/PiHKAL
TiHKAL: https://en.wikipedia.org/wiki/TiHKAL
33:00 psychedelics and “integrating” the experience into life
35:30 The Psychedelic Explorers Guide: http://www.amazon.com/Psychedelic-Explorers-Guide-Therapeutic-Journeys/dp/1594774021/ref=sr_1_1?ie=UTF8&qid=1308850343&sr=8-1
37:45 guidelines for “safe and successful psychedelic experience”
41:25 qualities of a “guide” or “sitter”
44:00 revisiting “integrating” psychedelic experience into life
46:45 Kennett Roshi: https://en.wikipedia.org/wiki/Houn_Jiyu-Kennett
48:20 service people and psychedelic impact
52:00 Bill Wilson: https://en.wikipedia.org/wiki/Bill_W.
52:00 Bill Wilson, AA, and psychedelics
55:40 problem solving and psychedelics
1:03:00 pattern recognition and psychedelics
1:07:50 lucid dreaming and dreaming in color
1:08:50 David Brown
1:09:45 stuttering and psychedelics
1:12:20 choice of one psychedelic versus another
1:13:50 MDMA and PTSD
1:15:45 “Reefer Madness”: https://en.wikipedia.org/wiki/Reefer_Madness
1:18:00 depression and micro-dosing
1:19:00 ketamine
1:23:00 advancing research
1:23:50 MAPS: Multidisciplinary Association for Psychedelic Studies: http://www.maps.org/
1:27:10 “The Trip Treatment” by Michael Pollan: http://www.newyorker.com/magazine/2015/02/09/trip-treatment
1:30:30 Burning Man: http://burningman.org/
1:31:00 Kary Mullis: https://en.wikipedia.org/wiki/Kary_Mullis
1:34:00 Roland Griffiths: http://www.hopkinsmedicine.org/profiles/results/directory/profile/1311852/roland-griffiths

###

James Fadiman:
Main Site: http://jamesfadiman.com/
Twitter: https://twitter.com/Jfadiman
Facebook: https://www.facebook.com/pages/The-Psychedelic-Explorers-Guide/184770801541184
Book: http://www.amazon.com/gp/product/1594774021/ref=as_li_tl?ie=UTF8&camp=1789&creative=390957&creativeASIN=1594774021&linkCode=as2&tag=offsitoftimfe-20&linkId=7VIQDTWH6S5VU6VC
Publisher: Inner Traditions http://www.innertraditions.com/
Additional Books from publisher
Psychedelic Drugs
Wikipedia: http://en.wikipedia.org/wiki/Psychedelic_drug
Levels of Dosage when considering LSD (micrograms)
400 – Transcendental experience, a guide is needed
200 – Used for psychotherapy and self exploration
100 – Can be used for problem solving situations (situations explained in podcast)
50 – “museum” or “concert” level
10/15 – Micro dose
Micro dosing – low enough dose that is “sub-perceptible” – you don’t notice the direct effects – “the rocks don’t glitter”. Could be a replacement for existing cognitive enhancers such as Adderall or Ritalin.
Organizations currently doing research on Psychedelics.
MAPS – http://www.maps.org/
Heffter – http://www.heffter.org/
Articles About Psychedelics:
New Yorker, Michael Pollan – http://www.newyorker.com/magazine/2015/02/09/trip-treatment
Dr. Roland Griffiths – http://janetkornblum.com/wp-content/uploads/2014/05/Stevenson_Mag_SprSmr2011_Griffiths-1.pdf

Alexander Theodore Shulgin aka Sasha Shulgin “godfather of Psychedelics”
Wikipedia- http://en.wikipedia.org/wiki/Alexander_Shulgin
Pihkal – http://www.amazon.com/Pihkal-A-Chemical-Love-Story/dp/0963009605
Tihkal – http://www.amazon.com/Tihkal-The-Continuation-Alexander-Shulgin/dp/0963009699/ref=pd_sim_b_2?ie=UTF8&refRID=0XDHXY2AMTR7W21PJMY0
Bill Wilson – founder of AA, experienced LSD: http://www.theguardian.com/science/2012/aug/23/lsd-help-alcoholics-theory

by Tim Ferriss at March 21, 2015 07:28 PM

March 20, 2015

Mark Bernstein

Flying Trapeze

This has to be a first: over at Wikipedia, some guy wants them to throw the book at me because … wait for it … I use too many links! (He’s also very angry because I asked Arbcom whether Campus Rape falls under Gamergate discretionary sanctions, when he thinks it’s obvious that it does. Okay.)

Now, lots of referees over the years have muttered to themselves that Bernstein uses an awful pile of footnotes and references a terrible lot of the research literature. Patterns of Hypertext has 76 references, Criticism has 93 – a lot for a 10-page paper. Still: if you need the references (or want a reading list), these can be useful. If you don’t, they do little enough harm. (Designing A New Media Economy has but 32 references, but it’s also got 18 footnotes, many of them fairly extensive.)

Let’s face it: links are perfect for this. If you just want the gist or already know the area, skip the link. If you want more information, follow it. If you skip the link, fine — just don’t blame me for withholding information.

(I think the guy who’s complaining doesn’t understand that Google never indexes Wikipedia’s back-of-the-house, and imagines that I’m getting tons of traffic and page rank from Wikipedia. In fact, I’ve probably sent Arbcom a lot more traffic than they’ve sent me!)

Speaking of crowds, a hilarious new site, SeaLionsOfWikipedia.com, skewers the current Gamergate brawl. Here’s the latest installment: Sea Lion VOLLEYBRAWL, Part Three! Of Mops and Sticks. It’s very inside baseball, but clever jokes like the “+5 Mop of Banning” really help after an immersion in Wikipedia’s often-insufferable self-importance — especially when I’ve been responding with bombast and not alliterative verse. I don’t know who writes this site, but they’re funny and they really do understand Wikipedia’s back alleys.


Update! That was fast: if I’m reading this correctly, an actual Arbcom clerk has seriously proposed to topic-ban me from my own weblog and to place my Wikipedia user page under Gamergate Discretionary Sanctions. Because ethics? Cake on a rake!

(Meanwhile, if I’d only linked to Foucault, Haraway, and Butler, I might have saved Arbcom from the 4-credit course on Feminism in the Postmodern Era.)

March 20, 2015 03:10 PM

Dave Winer

Beautiful JavaScript editors

I spent some time over the last couple of weeks surveying the amazing collection of Medium-like editors written in JavaScript. All open source, some very good. It's just amazing to see the collective energy and ambition of the JavaScript community. I've been programming a long time, on a lot of platforms, and have never seen anything like this.

I thought about integrating one of these toolkits with MyWord Editor, currently in development. I put together testbeds for the most promising ones: ZenPen and its derivatives, Dante and medium.js. They're all on the right track, and each has advantages, but imho what they were all missing was a way to enable a motivated user to run a Medium-like blogging service with their editor as the centerpiece. A next-gen blogging system, built with JavaScript, with elegance and beauty at the center. Key point: this is achievable, within easy striking distance, in the spring of 2015.

I decided that rather than pick one beautiful simple editor, I'd sprint to the finish line with the standard pre-HTML5 <textarea> editor, release the whole thing as open source, and let the community decide what to do. There appear to be some front-runners, projects that are active and ambitious, going somewhere. I want to make it easy for people to work with me, rather than be in the position to choose a "winner" at this time, based on incomplete info.

With all the incredible open energy in the JavaScript world, I don't doubt we can move the leading edge forward, if we have the will to work together. I'm putting out my best invitation. I have a great back-end, perfectly suited to host a blogging environment (I have some experience with this), and a rudimentary editor. Let's see where we can go from here.

I expect to ship my editor next week.

March 20, 2015 02:48 PM

Fog Creek

Reactive Templating Demo with Hamlet – Tech Talk

 

In this Tech Talk, Daniel, a Developer here at Fog Creek, gives a demo of Hamlet. It’s a simple but powerful open-source reactive templating library that he co-created.

 

About Fog Creek Tech Talks

At Fog Creek, we have weekly Tech Talks from our own staff and invited guests. These are short, informal presentations on something of interest to those involved in software development. We try to share these with you whenever we can.

 

Content and Timings

  • Introduction (0:00)
  • Demo (0:58)
  • Multiple Bindings (1:32)
  • Inline Events (3:04)
  • Dependent Functions (4:12)
  • Checkbox and Disabling Inputs (5:33)
  • To-do List and HTML Elements (6:41)

 

Transcript

Introduction

I’m going to talk about a tool I built for client-side templating. If anyone’s familiar with the client-side templating landscape, there have been a lot of tools coming out over the past five years, and even more every year and every day, as JavaScript relearns things that computers have done. We want reactive interfaces. We want to make the web more powerful, more like a desktop application. So now we have to relearn and recreate all the tools that we’ve had for 30 years, but we don’t have them on the web because it’s new. Hopefully we can learn from that and make some pretty good tools that do a good job. This is my take: looking at the landscape from a couple of years ago and building a tool that met my needs very well.

Demo

So, let’s see. I guess you can see this little website here. It’s got this little guy. And I have this demo. Just trying to make it big so you can see. It kind of bumped the CSS a little. The main thing in this tool… you’re familiar with HTML templating libraries, I think Mustache is the classic one, where you threw a bunch of curly braces in and you basically just interpolate string values into your HTML.

Multiple Bindings

That’s okay I guess, but in real life you usually have specific attributes that are bound to specific values. You’re not going to cross over a tag barrier with some arbitrary strings. You’d rather have a little more structure. Given that, you can get a very nice domain-specific language. It looks exactly like this. These are HTML element labels, or HTML element node types. Then these are the attributes, and I’ve got a shortcut @value syntax. So if I drag the slider, it adjusts all of them simultaneously. I think this is what a lot of libraries like Angular, and some of the other bigger ones, are going for with their reactive data binding; Knockout especially.

This model is a CoffeeScript object. It can just be a JavaScript object. You don’t have to use CoffeeScript. The one key attribute is this observable function, which you give it any value and it provides a way for the UI layer to have bi-directional binding. So everywhere in this template where value is listed, it’s bound to this observable object. That way the UI knows how to automatically wire it up for you. It’s the cleanest I’ve seen. So here’s that same example.
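Here’s a minimal sketch, in plain JavaScript, of the observable pattern being described. It’s an illustration of the concept, not Hamlet’s actual source:

// A minimal observable: call with no arguments to read the value,
// call with a value to write it; listeners fire on every write.
function Observable(value) {
  var listeners = [];
  function self(newValue) {
    if (arguments.length === 0) return value;   // read
    value = newValue;                           // write
    listeners.forEach(function (fn) { fn(value); });
    return value;
  }
  self.observe = function (fn) { listeners.push(fn); };
  return self;
}

// Usage: read with width(), write with width(240); observers fire on writes.
var width = Observable(120);
width.observe(function (w) { console.log("width is now", w); });
width(240); // logs: width is now 240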

Inline Events

You can also bind events on buttons or any other element. You can just click it and it pops up. The event is just an arbitrary JavaScript function. This is the CoffeeScript syntax… basically, you’ve got a button, you want the click event to be this function. This is actually going back to the style of HTML of 1996 where you just throw inline handlers on everything. These don’t have to be implemented as inline handlers, but it’s just visually… the original way they did it made a lot of sense. They’re keeping the HTML simple.

Today, if you do have a very complicated app, you wouldn’t necessarily want to inline everything. But since this is a template and not actually HTML, it’s fine: it doesn’t actually create an inline handler right here. It binds it using JavaScript, but it has the same readability value, so you know what the click action on this button is. You can just read it right here.

Dependent Functions

So here, you’ve got dependent functions. This is starting to get into some of the interesting stuff that this library does. It automatically tracks your dependencies for you, even through regular functions. So you have a first name, which is observable, and a last name, which is observable. You can type in whatever you want, and it automatically updates the composite name in the template. The composite name is just a function derived from these two observable values. This “at” syntax (@) is CoffeeScript for “this.”. Usually in JavaScript you have to be a little careful about using this, because depending on the binding or the context in which your function is executed, this could be different. But the runtime view layer makes sure that within your model, this is always your model, so you can basically ignore that here.

And so we see in the binding, the value is just bound to the observable, so it’s bi-directional so that when you update it, it updates the input value, and that filters back into this simple function and updates the composite value that is bound here as well.
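The automatic dependency tracking works along these lines: while a derived function runs, any observable it reads registers that function as a listener. Again, an illustrative sketch, not Hamlet’s implementation:

// Observables that record who is reading them, plus a computed()
// that re-runs whenever one of its dependencies changes.
var tracking = null; // the computation currently running, if any

function Observable(value) {
  var listeners = [];
  function self(newValue) {
    if (arguments.length === 0) {
      // A read during a computation registers it as a dependency.
      if (tracking && listeners.indexOf(tracking) === -1) {
        listeners.push(tracking);
      }
      return value;
    }
    value = newValue;
    listeners.slice().forEach(function (fn) { fn(); });
    return value;
  }
  return self;
}

function computed(fn, onChange) {
  function run() {
    var previous = tracking;
    tracking = run; // observables read inside fn() will see us
    var result = fn();
    tracking = previous;
    if (onChange) onChange(result);
    return result;
  }
  return run;
}

// The composite-name demo: first + last update the name automatically.
var first = Observable("Ada");
var last = Observable("Lovelace");
var fullName = computed(
  function () { return first() + " " + last(); },
  function (n) { console.log("name:", n); }
);
fullName();     // logs: name: Ada Lovelace
first("Grace"); // logs: name: Grace Lovelace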

Checkbox and Disabling Inputs

And it also just has simple checkboxes. Basically, all the HTML elements work fine.

You can toggle a value from ‘true’ to ‘false’. So, ‘disabled’ here is a way to disable a button in HTML. And by having this disabled.toggle, so it’s an observable of a boolean value, we can in real time swap it back and forth. And then, when the button’s active, you can click it, when it’s disabled, you can’t click it.

I see a lot of other libraries have really complicated ways to do stuff. But a lot of them get farther and farther from the basic HTML itself, from what is actually happening on the web page. You could just set the attribute value using a jQuery selector, but this is actually slightly cleaner even than that. And then with jQuery, if you want to do bi-directional binding, you have to set up a bunch of observers or listeners. It gets complicated fast.

To-do List and HTML Elements

This is the classic ‘to-do list’ app. You can see the full app right here: it’s just a template and a model. Other to-do list apps are a hundred lines. The Backbone one is probably a hundred; some of the newer ones, I think, are shorter.

The add function just pushes the item that you create into an observable array. The observable array proxies all the standard array methods, and it triggers the change event when you call any mutating method on the array. So you can almost pretend that items is just a normal JavaScript array, as it shows up here. You can iterate over it using each. Then it just displays it. And because it tries to keep the exact same API as the built-in arrays, it works very seamlessly and allows you to make a very simple application using all the tools you already know.

It’s like, “oh, I have an array and I want to listen to change events, I’ll just make it observable,” and you’re basically done. You throw that into your template and it works the way you would expect. It also can run your HTML elements directly. This equal sign…
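That observable array can be sketched the same way: proxy the standard mutator methods so every mutation fires a change event. Illustrative only, as before:

// Wrap an array so every mutating method notifies listeners.
function ObservableArray(items) {
  var listeners = [];
  ["push", "pop", "shift", "unshift", "splice", "sort", "reverse"]
    .forEach(function (method) {
      var original = items[method];
      items[method] = function () {
        var result = original.apply(items, arguments);
        listeners.forEach(function (fn) { fn(items); });
        return result;
      };
    });
  items.observe = function (fn) { listeners.push(fn); };
  return items;
}

var todos = ObservableArray([]);
todos.observe(function (list) { console.log(list.length + " item(s)"); });
todos.push("write template"); // logs: 1 item(s)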

So it’s called Hamlet because it’s derived from Haml and Ruby. I think seven years ago that was popular for a little bit, and it’s been influential on a lot of other tools like Jade, or SASS and LESS, these HTML and CSS meta-languages that compile down to HTML and CSS. They take away a lot of redundancy and a lot of the error-prone nature of having to match your tags, close them and open them; it’s easy to make mistakes. I guess an editor or app can do that for you, but here you can just edit in a text area and build a to-do list. So I think it comes out slightly ahead in some respects.

That’s it for my talk. If you do have any front-end code and want to get into interactive user interfaces, I think it’s at least an interesting prospect. I hope that you consider Hamlet the next time you need to have a reactive website. Thanks.

by Gareth Wilson at March 20, 2015 11:39 AM

Ian Bicking

A Product Journal: The Evolutionary Prototype

I’m blogging about the development of a new product in Mozilla; look here for my other posts in this series.

I came upon a new (for me) term recently: evolutionary prototyping. This is in contrast to the rapid or throwaway prototype.

Another term for the rapid prototype: the “close-ended prototype.” The prototype with a sunset, unlike the evolutionary prototype which is expected to become the final product, even if every individual piece of work will only end up as disposable scaffolding for the final product.

The main goal when using Evolutionary Prototyping is to build a very robust prototype in a structured manner and constantly refine it.

The first version of the product, written primarily late at night, was definitely a throwaway prototype. All imperative jQuery UI and lots of copy-and-paste code. It served its purpose. I was able to extend that code reasonably well – and I played with many ideas during that initial stage – but it was unreasonable to ask anyone else to touch it, and even I hated the code when I had stepped away from it for a couple weeks. So most of the code is being rewritten for the next phase.

To minimize risk, the developer does not implement poorly understood features. The partial system is sent to customer sites. As users work with the system, they detect opportunities for new features and give requests for these features to developers. Developers then take these enhancement requests along with their own and use sound configuration-management practices to change the software-requirements specification, update the design, recode and retest.

Thinking about this, it’s a lot like the Minimal Viable Product approach. Of which I am skeptical. And maybe I’m skeptical because I see MVP as reductive, encouraging the aggressive stripping down of a product, and in the process encouraging design based on conventional wisdom instead of critical engagement. When people push me in that direction I get cagey and defensive (not a great response on my part, just acknowledging it). The framing of the evolutionary prototype feels more humble to me. I don’t want to focus on the question “how can we most quickly get this into users’ hands?” but instead “what do we know we should build, so we can collect a fuller list of questions we want to answer?”

by Ian Bicking at March 20, 2015 05:00 AM

Dave Winer

MyWord Editor: silo-free from start

I came back from my trip early this week, ready to ship MyWord Editor. But I decided to make it silo-free from the start. So I had a little more work to do before release.

"Silo-free" means you can host your blog, and its editor, on your domain. I may offer a service that runs the software, but there will be no monopoly, and it will be entirely open source, before anything ships publicly.

I could have gone another way, and put up my own hosting service, as I have done with my software snacks, Little Pork Chop, Happy Friends, Radio3, Little Card Editor etc. None of those are open source, however they all run on the nodeStorage platform, which is.

I want MyWord Editor to make a very strong anti-silo statement. And even though it might make more sense from a development standpoint to do it in stages, it would muddy the message to have it be a silo when it's released. So we're going a little more slowly, to make a big strong, impossible-to-miss statement. Silos are not user-friendly. We don't want them. Not even for one version.

We'll start testing the software over the weekend, and hope to have a first public release early next week.

PS: Here's an example of a document published by MWE.

March 20, 2015 03:21 AM

March 19, 2015

Mark Bernstein

Flying Circus of Zombies

March 17 begins at the Gamergate Talk page, as many do, with the arrival of a fresh new editor. Galestar joined Wikipedia in 2009 and last edited 14 months ago. Today, he’s here to fix a problem: the Gamergate page says "misogynistic," that’s an opinion, and Galestar announces "I will be removing these adjectives as per WP:RSOPINION".

Galestar doesn't have much editing experience and he might be excused for being rusty after 14 months away, but no: he’s got policy at his fingertips in virtually his first post. Discussion follows – 2,500 words of discussion in this section alone.

This topic – the use of “misogynistic” – is not new. According to my informal survey of the million-word archives, it was discussed before on: Feb 24, Feb 11, Jan 27, Jan 25, Jan 22, Jan 9-11, Dec 22, Nov 24, Nov 13, Nov 2, Oct 27, Oct 12, Sep 19, Sep 16, Sep 11, and Sep 6. As here, the discussion is often launched by a new or zombie account; often, this account knows a ton about WikiLawyering but pretends to be unaware of all the prior discussions.

This is not the only recurrent topic. It’s just one of a half-dozen arguments which can never be settled because zombie accounts return to restart them every two weeks. This isn’t their favorite topic: those involve the sex lives of Gamergate victims. But it’s today’s topic, so away we go.

This is against Wikipedia policy, obviously. But that doesn’t matter, because Wikipedia policy effectively prohibits any complaint about this kind of collusive editing. Anyone who complains – even through an indirect allusion to the existence of the phenomenon – is promptly punished. These accounts are not fresh new editors; they’re personae cultivated by Gamergate for a purpose, built from the compost of abandoned and zombie user accounts. But everyone else must pretend that brand-new editors arrive every two weeks, armed with a fresh knowledge of WikiLaw and jargon, determined to change the consensus. Wikipedia: the encyclopedia any manila folder on the closet shelf can edit™.

March 19, 2015 05:47 PM

March 18, 2015

Tim Ferriss


Discussing life and investing with Mark Hart and Raoul Pal.

[DISCLAIMER: I’m not a doctor, nor do I play one on the Internet. Speak with a medical professional before doing anything medical-related, m’kay?]

There is something here for everyone.

This post details two jam-packed discussions  — one with world-renowned macro investors and investment strategists (Mark Hart and Raoul Pal), and another with a top performance doc you’ve referenced hundreds of times (Peter Attia, MD).

In both, we address dozens of topics, including:

- How do you choose an optimal investment style?
- What’s the most useful definition of “ROI” for lifestyle purposes?
- What are the 5 lesser-known physical tests you should consider?
- How does hormone therapy fit into the bigger performance and longevity picture (or not)?
- Productivity and exercise/diet tips from all participants.

Below, you’ll also find the most comprehensive show notes and links I’ve done to date. They’re DEEP.  If you like them, please let me know in the comments, as these take a TON of time to transcribe and summarize.

EPISODE 63 — I am interviewed by Mark Hart and Raoul Pal for Real Vision Television, which was created to combat the dumbed-down approach to finance in traditional media. Mark predicted and bet on the subprime mortgage crisis, the European sovereign default crisis, and more. As Forbes put it, related to Mark, “Sometimes, combing through a mountain of manager letters felt like reading the newspaper years in advance.” We talk about nearly everything in this roundtable.

EPISODE 65 — Peter Attia, MD, answers your most popular 10-15 questions (e.g. top blood tests, hormone therapy, increasing VO2 max, long-term ketosis, etc.), as voted on by thousands of you. Peter is President of NuSI and a tremendous endurance athlete in his own right.


This podcast is brought to you by Mizzen + Main. Mizzen + Main makes the only “dress” shirts I now travel with — fancy enough for important dinners but made from athletic, sweat-wicking material. No more ironing, no more steaming, no more hassle. Click here for the exact shirts I wear most often. Order one of their dress shirts this week and get a Henley shirt (around $60 retail) for free.  Just add the two you like here to the cart, then use code “TIM” at checkout.

This episode is also brought to you by 99Designs, the world’s largest marketplace of graphic designers. Did you know I used 99Designs to rapid prototype the cover for The 4-Hour Body? Here are some of the impressive results.

QUESTION(S) OF THE DAY: What is the best investment advice you’ve ever read or heard? Please share and explore answers in the comments here.

Do you enjoy this podcast? If so, could you please leave a short review here? I read them, and they keep me going. Thanks!

And here are the copious show notes and links for both episodes…

Part 1 – Investing and Life Optimization – Episode #63 (Links and Time Stamps)

People Mentioned

Companies Mentioned

Books Mentioned

Selected Links from the Episode

Time Stamps

  • Raoul sets the stage for the conversation [3:33]
  • Tim discusses his background [4:13]
  • Mark discusses his background [5:30]
  • How Tim approaches productivity improvements [8:15]
  • How Mark implemented Tim’s advice [11:15]
  • Establishing a baseline for self-tracking [13:20]
  • Hacking 10,000 hours to mastery [17:05]
  • How to hack breakfast [21:25]
  • How to hack insomnia [22:35]
  • Hacking cheat meals [23:25]
  • Genetics testing [25:10]
  • Thoughts on time management [26:10]
  • Cold (ice bath) and heat (sauna) therapy [31:03]
  • Lucid dreaming [34:35]
  • How to find out what you are good at [39:30]
  • On Journaling [41:55]
  • Feeding your subconscious mind [45:10]
  • Tim’s calling [47:50]
  • On constantly improving [52:23]
  • On enjoying the journey [56:00]
  • Psychological dynamics of making or losing money [57:34]

Key Takeaway Show Notes

How to Approach Productivity Improvements

There are a number of ways to try and improve the performance of a company, group of people, or single person.

If you look at it like rally car racing, you have a racetrack that is designed to kill you.

  • It is not designed to be as safe as possible. The path is somewhat known, but the terrain is unknown (it could be raining, sleeting, etc.)

People tend to have this separation of mind and body, but at the end of the day you have certain levels of neurotransmitters that are produced and depleted at a certain rate, and that is the rate-limiting step in your mental performance.

  • If you want to have better levels of working memory and sustained attention, etc – you can optimize those by optimizing the car (i.e. the body in this case).

There are also process things like what are the daily habits and ways you approach turning your effort on or off for productivity and recovery throughout the day that you can tweak.

  • This would be the example of driving the car.

On Self-Tracking

You want to establish a self-tracking baseline.

  • You don’t want to make health decisions on once-annual blood tests because if you took that test the very next day the values would be different.

What you are interested in (in terms of blood values) is not just a snapshot in time, but rather you want to understand the trends.

Journaling is also a good way to establish a baseline in terms of a daily and weekly routine to identify what led you to states of flow or what 20% of activities / people are producing 80% of your negative emotions / bad decision-making.

On Hacking

There are many ways to circumvent the 10,000-hour rule for almost any skill.

  • Study the anomalies rather than discarding them as outliers.

One easy hack is to have 30 grams of protein within 30 minutes of waking up (lentils, spinach, and two whole eggs for example).

  • By doing this, it is not uncommon to lose 20 pounds in the first month if you have 20% body fat (if you are a male).

If you have trouble sleeping, it is often due to low blood sugar.

  • You could have a tablespoon of unsweetened almond butter before you go to sleep, and you will see a lot of chronically fatigued people fixed immediately.

If you have to have a cheat meal you could have a tablespoon of vinegar before the meal, which will help lower the glycemic index (your glucose response to that meal).

On Time Management

Time is one of several currencies.

  • A currency is something you trade for something else.

Time is non-renewable, whereas capital is renewable.

In the hierarchy of prioritization (past a certain point of Maslow’s needs), time should take priority.

If you don’t have time it is an indication of not having sufficiently clear priorities.

On Cold (Ice Baths)

Cold exposure can improve immune function, serve as anti-depressant therapy, and affect hormones like adiponectin (which leads to an increased rate of fat loss in many cases).

On Lucid Dreaming

You can further reinforce or develop your skills while you’re sleeping during lucid dreaming.

  • Lucid dreaming not only improves performance, but also helps you develop present state awareness.

On Journaling

Journaling has tremendous value, especially if you don’t view yourself as a writer.

Writing allows you to freeze your thoughts in a form that you can analyze.

You should write down your fears and worries, and explore them. This will clarify what they are.

  • Sometimes they will end up unfounded, and you can remove them as an influence.
  • Other times it will clarify how those risks can be mitigated.

Part of the value is taking these muddy, distracting thoughts and imprisoning them on paper so you can get on with your day.

On Constantly Improving

Seeking constant improvement and dissatisfaction do not have to go hand-in-hand.

  • If you aren’t getting stronger, you are getting weaker.

The way you reach equilibrium, or the sensation of balance, is by having appreciation and a set of activities and practices for that.

On Enjoying the Journey

At the end of the day you have to focus on the process because, due to good or bad luck, you can get a bad result after a very good process or a great result after a very bad process.

  • You can also help avoid depression that can come from bad outcomes by enjoying the process.

 


 

Part 2 with Dr. Peter Attia – Episode #65 (Links and Time Stamps)

Medical Terminology –

People Mentioned

Companies Mentioned

Books Mentioned

Time Stamps

  • What are the top 5 biological tests everyone should get? [4:53]
  • Should you eat carbs following weight training to promote anabolism within the muscle? [12:00]
  • What are the top 10 supplement recommendations? [15:11]
  • Should the ketogenic diet be a short-term intervention or a long-term lifestyle? [20:48]
  • Blood testing at home [28:45]
  • Should you not drink alcohol? [32:40]
  • The results of Peter’s insulin suppression test [38:45]
  • How do you figure out if a ketogenic diet works for you? [47:30]
  • What type of cardio is best for you? [50:54]
  • When can we expect results from the energy balance consortium? [58:05]
  • Testosterone replacement for men [1:00:00]
  • How Peter maintains his productivity [1:06:22]

Key Takeaway Show Notes

What Are the Top Five Biological Tests?

This answer depends on an individual-by-individual basis and the risks each person faces (cardiovascular disease, cancer, etc.)

Through the lens of preventing death these five tests are the most important:

  1. APOE Genotype – helps us understand what diseases you may be more (or less) at risk for.
  2. LDL Particle Number via NMR (technology that can count the number of lipoproteins in the blood) – counts all of the LDL particles, which are the dominant particles that traffic cholesterol in the body both to and from the heart and to and from the liver. We know the higher the number of these particles the greater at risk you are for cardiovascular disease.
  3. LP(a) via NMR – This is the most atherogenic particle in the body. If this is elevated (independent of the LDL particle number) it is an enormous predictor of risk and something to act on indirectly (diet and drugs don’t seem to work as effectively in mitigating this).
  4. OGTT (Oral Glucose Tolerance Test) – This is a time 0-hour, time 1-hour, and time 2-hour test that looks at insulin and glucose. The 1-hour mark is where you may see the early warning signs with elevated glucose levels (anything over 40-50 on insulin), which can represent hyperinsulinemia (a harbinger for metabolic problems).
  5. IGF-1 (Insulin Growth Factor 1 Level) – This is a pretty strong driver of cancer. Diet can help keep IGF-1 levels low.

Should You Eat Carbs Following Weight Training to Promote Anabolism Within the Muscle?

It depends what you are optimizing for.

If your primary objective is to increase your muscle size, then yes, there is a benefit to consuming carbohydrates and / or whey protein.

However, if you are someone like Peter, who couldn’t care less about the size of his muscles, then the answer is no, you should not do that.

  • Peter doesn’t even consume whey protein post workouts anymore because he is optimizing for longevity and using caloric restriction as one method for that.

What Are the Top Supplement Recommendations?

There are few things everyone should take across the board unanimously.

  • It is highly individualized based on your needs and goals.

Peter takes:

  • Vitamin D
  • Baby Aspirin
  • Methylfolate
  • B12
  • B6
  • EPA
  • DHA
  • Berberine
  • Probiotic (which he cycles on and off of)

He takes all of these because he is managing to certain targeted levels for all of these markers that he can’t get to without supplementing.

Peter does not take:

  • Multivitamin
  • Vitamin A
  • Vitamin K
  • Vitamin C
  • Vitamin E

Should the Ketogenic Diet be a Short-term Intervention or a Long-term Lifestyle?

Peter is not sure, but questions the evidence of any society (for example, the Inuit culture) that has been claimed to have lived entirely on a ketogenic diet in perpetuity.

  • However, this doesn’t mean that ketogenic diets cannot or should not be sustained long-term.

Nobody has done a long-term study of people on ketogenic diets.

  • The data we do have is generally conflicting.

There is a lot of documentation on ketogenic diets being safe and effective, at least over the short-term (less than 1-year) for type 2 diabetes and obesity.

Peter spent 2.5 years in ketosis, but hasn’t been in it consistently for a little over a year.

  • He does still get in ketosis once per week as a result of fasting, and he feels he is at his best on a ketogenic diet.
  • The main reason for Peter to move away from it today was a craving he has had for more fruits and vegetables, which makes it hard to stay on a ketogenic diet.

Going forward he would use a diet that cycles in and out of ketosis, but it is less about him believing there is long-term harm in ketosis and more about him scratching other itches in experiencing a broader array of foods.

It is pretty clear when a ketogenic diet doesn’t work.

  • When C-reactive protein, uric acid, homocysteine, and LDL particle numbers go up, it is clear that the diet is not working for that person.

On Blood Testing at Home

What is interesting is what a company like Theranos is doing, which is creating a black box that allows you to use less than a thimble of blood and use that for a very broad array of testing.

  • The goal may be to have these in places like a CVS where you can go in and put a finger prick of blood on a strip and get a wide array of testing.
  • Legal hurdles could be a challenge here.

Should You Not Drink Alcohol?

Peter has never seen convincing evidence that the addition of alcohol creates a health benefit.

For some people, ethanol, up to reasonable doses, does no harm.

  • Other people are prone to having an inflammatory response from drinking even a small amount of wine or beer.

Peter recommends doing an elimination-reintroduction test.

  • Knock alcohol out of your system for 1-month while making no other change, and then slowly reintroduce it.

What Type of Cardio is Best For You?

The type of cardio activity that puts an undue stress on the heart, in terms of cardiac output, is not ideal.

The heart has to expand (open much wider) to accommodate the extra blood volume.

  • If that expansion sustains for a long period of time it can result in deformation of the electrical system of the heart (particularly the right side of the heart as it is less muscular than the left).
  • This can result in electrical system failures of the heart.

At very low levels of physical activity the outcomes are not good (people don’t live that long).

Medium levels of physical activity (30-45 minutes a session / 4 sessions a week / modest output) had the best outcomes: people lived the longest.

Really high levels of physical activity (greater duration / greater intensity) resulted in the curve falling down again.

Testosterone Replacement for Men

This is a complicated topic because we live in a society where somehow we have let morality get in the way of science.

Testosterone replacement is a viable option in men whose levels are deficient and whose symptoms justify its use.

  • The problem is that we hold a belief, not substantiated by rigorous science, that overstates the detriment of its use.

The data is not clear that hormone replacement in men results in an increased risk of heart disease.

  • People are more willing to accept that testosterone replacement in men actually reduces the risk of prostate cancer.

The problem with all hormone replacement is that the numbers alone aren't enough, which means you have to treat patients based on symptoms.

How Peter Maintains His Productivity

Peter is a big fan of creating to-do lists, and he carries physical cards with him for daily, weekly, and long-term professional tasks. He also carries a personal monthly to-do list.

  • Writing these things down takes the stress out of it.
  • Most of the anxiety is worrying you will *forget* something, not feeling overwhelmed about *doing* things.

 

by jacobseb at March 18, 2015 09:21 PM

Mark Bernstein

Pride and Prejudice

Consolation in the wake of Wikipedia/Gamergate chaos and affliction. (Is Pride and Prejudice a gender-related controversy? Oh, dear…)

March 18, 2015 04:43 PM

Fog Creek

10X Programmer and other Myths in Software Engineering – Interview with Laurent Bossavit



 

We’ve interviewed Laurent Bossavit, a consultant and Director at Institut Agile in Paris, France. We discuss his book ’The Leprechauns of Software Engineering’, which debunks myths common in software engineering. He explains how folklore turns into fact and what to do about it. More specifically, we hear about the findings of his research into the primary sources of theories like the 10X Programmer, the Exponential Defect Cost Curve, and the Software Crisis.

 

Content and Timings

  • Introduction (0:00)
  • About Laurent (0:22)
  • The 10X Programmer (1:52)
  • Exponential Defect Cost Curve (5:57)
  • Software Crisis (8:15)
  • Reaction to His Findings (11:05)
  • Why Myths Matter (14:44)

 

Transcript

Introduction

Derrick:
Laurent Bossavit is a consultant and Director at Institut Agile in Paris, France. An active member of the agile community, he co-authored the first French book on Extreme Programming. He is also the author of “The Leprechauns of Software Engineering”. Laurent, thank you so much for joining us today. Can you share a little bit about yourself?

About Laurent

Laurent:
I am a freelance consultant working in Paris, I have this great privilege. My background is as a developer. I try to learn a little from anything that I do, so after a bit over 20 years of that I’ve amassed a fair amount of insight, I think, I hope.

Derrick:
Your book, “The Leprechauns of Software Engineering”, questions many claims that are entrenched as facts and widely accepted in the software engineering profession, what made you want to write this book?

Laurent:
I didn’t wake up one morning and think to myself, ‘I’m going to write a debunkers’ book on software engineering’; it actually was the other way around. I was looking for empirical evidence, anything that could serve as proof for agile practices. And while I did this I was also looking at evidence for other things which in some cases were related to agile practices, for instance the economics of defects, and just stuff that I was curious about, like the 10X programmer thing. So, basically, because I was really immersed in the literature and I’ve always been kind of curious about things in general, I went looking for old articles, for primary sources, and all of a sudden I found myself writing a book.

The 10X Programmer

Derrick:
So, let’s dig into a few of the examples of engineering folklore that you’ve examined, and tell us what you found. The first one you’ve already mentioned is the 10X programmer. This is the notion that there is a 10-fold difference in productivity and quality of work between different programmers with the same amount of experience. Is this fact or fiction?

Laurent:
It’s actually one that I would love to be true, if I could somehow become, or find myself to be, a 10X programmer. Maybe I would have an argument for selling myself at ten times the price of cheaper programmers. When I looked into what was advanced as evidence for those claims, what I found was not really what I had expected, not what you would think would be the case for something people say is supported by tens of scientific studies and research into software engineering. In fact, when I actually investigated all the citations that people give in support of that claim, I found that in many cases the research was done on very small groups that were not extremely representative. The research was old: this whole body of evidence was gathered in the seventies, in languages like Fortran or COBOL, and in some cases on non-interactive programming, systems where the program was submitted and you got the results of the compile the next day. The original study, the one cited as the first, was actually one of those; it was designed initially not to investigate productivity differences but to investigate the difference between online and offline programming conditions.

So how much of that is still relevant today is debatable. How much we understand about the concept of productivity itself is also debatable. And many of the papers and books that were pointed to were not properly scientific papers. They were opinion pieces, or books like Peopleware, which I have a lot of respect for, but it’s not exactly academic. The other thing was that some of these papers did not actually bring any original evidence in support of the notion that some programmers are 10X better than others. They were saying, “it is well known and supported by ‘this and that’ paper”, and when I looked at the original paper they were referencing, it in turn, rather than presenting its own evidence, said things like “everybody knows since the seventies that some programmers are ten times more productive than others”. Very often, after chasing the references through old papers, you ended up back at the original paper. So a lot of the evidence was also double counted. My conclusion, and this was the original leprechaun, was that the claim was not actually supported. I’m not coming out and saying that it’s false, because what would that mean? Some people have taken me to task for saying that all programmers are the same, and that’s obviously stupid, so I cannot have been saying that. What I’ve been saying is that the data is not actually there, so we do not have any strong proof of the actual claim.

Exponential Defect Cost Curve

Derrick:
There is another folklore item called the “exponential defect cost curve”. This is the claim that if a bug costs one dollar to fix during the requirements stage, it will cost ten times as much to fix in code, one hundred times in testing, and one thousand times in production. Right or wrong?

Laurent:
That one is even more clear cut. When you look at the data, you try to find what exactly was measured, because those are actual dollars and cents, right? So at some point a ledger or some kind of accounting document should originate the claim. I went looking for the books that people pointed me to, and typically found that rather than saying “we did the measurements on this or that project”, the books or articles said, ‘this is something everybody knows’, and the references were to this or that other article or book. So I kept digging, always following the pointers back to what I thought was the primary source. And in many cases I was really astonished to find that at some point along the chain someone had just made evidence up. I could not find any solid proof that someone had measured something and come up with those fantastic costs. You sometimes come across claims like fourteen hundred and three dollars on average per bug, but what does that even mean? Is that nineteen-nineties dollars? These claims have been repeated using exactly the same numbers for, I think, at least three decades now. You can find some empirical data in Barry Boehm’s books, and he’s often cited as the originator of the claim. But it’s much less convincing when you look at the original data than when you look at the derived citations.

The Software Crisis

Derrick:
There is a third piece of folklore, called “The Software Crisis”, common in mainstream media reporting of large IT projects. These are studies that highlight high failure rates in software projects, suggesting that all such projects are doomed to fail. Are they wrong?

Laurent:
This is a softer claim, right? There are no hard figures, although some people try. One of the ways one sees the software crisis exemplified is by someone claiming that software bugs cost the U.S. economy so many billions, hundreds of billions of dollars per year. As for the more subjective aspect of the notion, what’s historically interesting is that the very notion of the software crisis was introduced to justify the creation of a group for researching software engineering. The initial act was the convening of the conference on software engineering; that’s when the term was actually coined, back in 1968, and one of the tropes, if you will, used to justify interest in the discipline was the existence of the software crisis. But we’ve now been living with this for over forty years, and things are not going so badly, right? When you show people a dancing bear, one wonders not whether the bear dances well, but that it dances at all. And to me technology is like that. It makes amazing things possible; it doesn’t always do them very well, but it’s amazing that it does them at all. So I think the crisis is very much overexploited, very overblown, but where I really get onto firmer ground is when people try to attach numbers to that. Typically those are things like a study that supposedly found that bugs were costing the U.S. sixty billion dollars per year, and when you actually take a scalpel to the study, when you read it very closely and try to understand what methodology they followed and exactly how they went about their calculations, what you find out is that they basically picked up the phone and interviewed a very small sample of developers and asked them for their opinion, which is not credible at all.

Reaction to His Findings

Derrick:
What is a typical reaction to your findings debunking these long held claims?

Laurent:
Well, somewhat cynically, it varies between “Why does that matter?” and a kind of violent denial. Oddly enough I haven’t quite figured out what makes people so attached to one viewpoint or the other. There’s a small but substantial faction of people who tell me ‘oh, that’s an eye opener’ and would like to know more, but some people respond with protectiveness when they see, for instance, the 10X claim being attacked. I’m not quite sure I understand that.

Derrick:
So how do myths like these come about?

Laurent:
In some cases you can actually witness the birth of a leprechaun. It’s kind of exciting. Some of them come about from misunderstandings. I found out in one case, for instance, that an industry speaker gave a talk at a conference and apparently was misunderstood. People repeated what they thought they had heard, and one thing led to another. After some iterations of this telephone game, a few people, including some people that I know personally, were claiming in the nineties that the waterfall methodology was causing 75% failure rates in defence projects. It was all a misunderstanding: when I went back and looked at the original sources, the speaker was actually referring to a paper from the seventies which was about a sample of nine projects, not an industry-wide study. So I think that was an honest mistake; it just snowballed. In some cases people are just making things up. That’s one way to convince people: just make something up. And one problem is that it takes a lot more energy to debunk a claim than it takes to make things up. So if enough people play that little game, some of that stuff is going to sneak past. I think the software profession amplifies the problem by offering fertile ground: we tend to be very fashion driven, so we enthusiastically jump onto bandwagons, and that makes it easy for some people to invite others to jump, to propagate. So I think we should be more critical. There has been a movement towards evidence-based software engineering, which I think is in some ways misguided, but it is good news, to my way of thinking, that anyone is starting to think: maybe we shouldn’t be so gullible.

Why Myths Matter

Derrick:
Even if the claims are in fact wrong, why does it matter?

Laurent:
To me, the key activity in what we do is not typing code, it’s not design, it’s not even listening to customers, although that comes close. The key activity that we perform is learning. We are very much a learning-centered profession, so to speak, because the very act of programming, looked at from a certain perspective, is about capturing knowledge of the world, about businesses, about… other stuff that exists out there in the real world or is already virtual, and encoding that knowledge, capturing it in code, in executable form. So learning is one of the primary things that we do already, and I don’t think we can be good at that if we are bad at learning in the more usual sense. But those claims, which are easy to remember and easy to trot out, act as curiosity stoppers, basically. They prevent us from learning further and trying to get at the reality: what actually goes on in a software development project, what determines whether a software project is a success or a failure. I think we should actually find answers to these questions. It is possible to know more than we do right now, and I am excited every time I learn something that makes more sense to me, that helps me have a better grip on developing software.

Derrick:
Are there any tell-tale signs that we can look out for to help stop ourselves from accepting such myths?

Laurent:
Numbers, statistics, citations, strangely. I know that citations are a staple of academic and scientific writing, but when you find someone inside the software engineering profession whipping out citations at the drop of a hat, you should take that as a warning sign. There is more, but we would have to devote an hour or so to that.

Derrick:
Thank you so much for joining us today, it has been great.

Laurent:
Thanks for having me, bye!

by Gareth Wilson at March 18, 2015 11:17 AM

March 16, 2015

Mark Bernstein

Rot

Good news: Over The Precipice has already been tweeted or retweeted to about 200,000 people.

Bad news: The armies of Mordor found out, and they're eager to get me topic-banned for mentioning it on my Wikipedia talk page. Looks like they’ve got plenty of sympathetic administrators willing to rid them of this troublesome hypertext researcher.

Worse News: The Gamergaters oppose my topic ban appeal with awful echoes:

While MarkBernstein, writing to us from the top of the Reichstag…–Rhoark
All we get now is Reichstag climbing over admin actions… – ColdAcid

Reichstag?

March 16, 2015 06:36 PM

Dave Winer

When will 140-char wall come down?

Yesterday I ran a piece where I said Twitter's 140-char limit is probably in its final days. It's gotten a lot of attention.

Next question: When will the change be announced? Imho, it's not hard to read those tea-leaves either.

Facebook is having a developer conference next Wednesday and Thursday. It's likely that they will talk about their plans for news distribution at this conference.

David Carr ran an article in the NY Times last October that previewed the pitch we'll likely hear. They want publishers to post full content to Facebook. In return they are willing to share ad revenue with the publishers.

The reason this is so important? In a mobile environment, clicking on a link to read a story exacts a very big price. We all know that mobile is where Facebook's growth is coming from. News reading on mobile can become much more fluid. That's what Facebook wants to do.

So, with Facebook getting full content from at least some publishers, and the announcements likely coming next week, does Twitter really want to explain the magic of 140, and ancient limits of SMS, or would they rather have an equivalent proposition for publishers? Smartphones aren't just the future, they're here, now. It seems like this is the moment to say 140 was nice, it got us to where we are, but now because of mobile, we have to get more text from publishers, if only to remain competitive with Facebook in the one area Twitter still has an advantage (for now) -- news, especially mobile news.

Still reading tea-leaves. I have no direct info, but if Twitter doesn't make the change now, they will be too late. My bet: It'll happen before Facebook's announcements next week.

March 16, 2015 06:01 PM

Reinventing Business

Do What You Love

An old but very insightful article by Paul Graham describing one process for discovering your best life path. Also see Start With Why and What Should I Do With My Life?

by noreply@blogger.com (Bruce Eckel) at March 16, 2015 04:45 PM

Hiring: Steps in the Right Direction

This is a nice post that describes how incompetent our hiring approaches are. In particular, interviews are a terrible way to find employees, because they basically test how well a candidate interviews and not much else. The article describes some better approaches to the problem. One of my favorite observations is that if a candidate doesn't understand a technology, buy them a couple of books and give them a couple of weeks to get up to speed.

I envision a completely different way to discover whether someone is a fit, that all but the most primitive companies will eventually use. The open-spaces conferences I hold have turned out to be fertile hiring ground, for example. You get to see a candidate in various creative situations, with the added bonus that they don't even know they're being interviewed.

by noreply@blogger.com (Bruce Eckel) at March 16, 2015 04:39 PM

Giles Bowkett

And Now For A Breakdown

Here's a live drum and bass set from Mistabishi on a Korg Electribe MX.



Here's another one:



Here's a more electro-house-ish set Mistabishi recorded live at an illegal rave in London last year:



You can actually download all the patterns and patches in this from the Korg web site.

Korg also had Mistabishi demo the new Electribe – basically the next generation of the EMX – in a couple YouTube videos:



(In this one, he enters the video at around 6:45.)



The new Electribe seems pretty cool, but these videos motivated me to buy an EMX instead. The new one's untested, while the EMX is a certified classic, with a rabid fan base, tons of free downloadable sounds, tons of videos on YouTube showing you how to do stuff, forum posts all over the Internet, and rave reviews (no pun intended).

Also, the new version literally has a grey-on-grey color scheme, while the EMX looks like this:



They run about $350 on eBay, but if you find an auction ending at 8am on a Sunday, you might pick one up for just $270 (I did).

It also helps that I'm a huge fan of the original Electribe, the Korg ER-1, one of the most creative drum machines ever made. I've owned two (or three, if you count the iElectribe app for iOS, based on the second iteration of the ER-1).

by Giles Bowkett (noreply@blogger.com) at March 16, 2015 04:34 PM

Ladies And Gentlemen, Nastya Maslova

A very talented young multi-instrumentalist, singer, and beatboxer from Russia, with many videos on YouTube (some under her name's Russian spelling, Настя Маслова).



by Giles Bowkett (noreply@blogger.com) at March 16, 2015 04:26 PM

Fog Creek

3 Steps to Clean Up Your Inbox with FogBugz

If you consider yourself a disciplined person who strives for inbox zero and Getting Things Done® (or want to be one), one of the first steps, paraphrased, is simplifying what you have to do. That’s where FogBugz comes in. FogBugz can help you simplify what you have to do when it comes to your inbox with Mailboxes, Notifications, and Filters.

Let’s start with a reasonable guess that you’re talking with your customers, or clients, over email. This happens all day, every day. The great thing about your email provider or email client is that it probably has super awesome search to help you find things when you need them. But your inbox is always full of emails, and things to do at different phases of your process(es), and you’re always losing track of that item you starred that you swore you were going to do a week ago. Have you considered what happens to your email workflow when your business grows and you hire 2 or 3 more colleagues? How do they see what’s going on and react independently? It would be great if everyone could see the same email threads without forwarding emails to each other or cc’ing everyone a million times over. And, you could start tying all these emails together with the bug reports you’re getting for your software, and your planned feature releases.


 

Here are a few steps to get your inbox cleared up, and your newly hired colleagues on the same page:

1. Let FogBugz Turn Incoming Emails into Cases

 

The first step is to get your group email account connected to FogBugz – we call that a Mailbox. Instead of using a distribution list like hello@fogcreek.com, make that an actual email account and enable IMAP/POP3 access.

Screenshot of creating a Mailbox in FogBugz

Since your customers or clients already have this email address, you don’t need to give them a new way of contacting you. Whenever you receive an email to hello@fogcreek.com, FogBugz turns that boring old email into a case!

You can even configure a custom automatic response for your mailbox. When anyone emails you (at hello@fogcreek.com), they know you’ve received their email, and that you’ll get back to them as soon as you can.

Instead of working primarily out of your inbox, you will start working primarily out of FogBugz. Your customer or client communication will be side-by-side with features and bug reports bringing everyone on your team together. Instead of your colleague yelling or putting in chat “Hey Bob, did you email Julie yet?!”, your colleague can do a quick search like below in FogBugz so that you’re not interrupted:

correspondent:"julie.voigt@mistyriversoftware.com" edited:today

Note: Copy and paste the search into your FogBugz search box and hit enter. Ensure you change the email address to one relevant to you.

Running that search, you find the case yourself, and you know that Julie’s bug report and feature request are being taken care of. And you haven’t interrupted Bob.

Long story short, you’re using FogBugz to email back and forth with the people you need to email with. By doing so, you automatically get full history, audit history even, and your colleagues are all on the same page.

Pro Tip: Configure your Mailbox to automatically set a due date so you don’t have to. For example, you can set it to anywhere from 1 hour to several days.

2. Configure Your In-App Notifications

 

By default, FogBugz will send you an email to alert you about new Cases (created from emails) or updates to existing Cases. Receiving email notifications about emails isn’t, in itself, a solution that moves you toward a clear Inbox. Replace email notifications with FogBugz On Demand in-app Notifications to minimize your email clutter. This allows you to get all your notifications in the web application, and remove them from your inbox.


Quickly configure this in your FogBugz user options by choosing the “Never” option for email frequency – it’s ok, it’s not as scary as it sounds.

If you like the idea of never receiving email updates, but you want to try something in between – we have got just the thing for you! Set your email frequency to “Periodic” and you’ll get a digest of all of your notifications.

Now that you’ve got a proper mailbox in FogBugz and you’re using the in-app Notifications, what else can you do? In-app Notifications are great for interrupt-driven work, but what about when you’re trying to plan what to do next, or simply just do what’s next on the “list”?

3. Create and Share Custom Saved Searches

 

In short, the third step is to use Shared Filters in FogBugz. These will show you what you or someone else is doing, and what you have to do next. Tidy up multiple Shared Filters with the new Grouped Filters feature.

First, start with everything that you might have missed yesterday – it’s ok, you don’t have to admit that you didn’t get to that one case yesterday (we won’t tell anyone, pinky swear). FogBugz can help you with that overlooked case today! Run a search for anything due up until today, like so:

due:"..today" orderby:due status:open assignedto:me

Note: status:open includes active and resolved cases, choose status:active if you don’t want resolved cases to show up. This may be more helpful if you’re using the postpone cases feature.

Save that filter. Then, do your colleagues a favor and share it with them so they have it too. It’ll save them precious minutes of their valuable time!

Again, those of you who are familiar with FogBugz may realize that the search above is essentially the default “My Cases” filter. Yes, in a way it is; the default “My Cases” filter will show you everything assigned to you, but it won’t order it by due date or specifically include only the cases due today – this one does.

This filter is a great start, but you should probably create another shared filter that shows all your specific Support (and/or Sales) cases due today:

due:"..today" orderby:due status:open assignedto:me project:inbox area:support

Pro Tip: Swap out ‘area:support’ with ‘(area:sales OR area:support)’ to get cases from both areas.

Maybe you’re managing the sales and support teams, or maybe you’re just curious if sales efforts have any yet-unmentioned positive or negative effects on the support team. Create a third filter for that and share it:

due:"..today" orderby:due status:open project:inbox area:support

I could go on and on about filters all day, but I’m going to stop here. The point is that you have these three filters, which represent work you’re going to do or are doing, and now you can organize them into a group, or groups, that makes sense to you. Create a “Watching” group for anything you’ll read to catch up on. Create a “Sales” group for, well, your sales cases, and so on.
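For instance (just recombining the searches above; swap in whatever project and area names you actually use), the filter behind that “Sales” group might be:

due:"..today" orderby:due status:open assignedto:me project:inbox area:sales

Drop assignedto:me if the group should show the whole team’s sales cases, as in the manager’s filter above.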

Screenshot of creating a new filter group and adding a filter to it

Adding your Mailbox to FogBugz, using in-app Notifications, and creating and sharing custom filters are three steps to get you well on your way to a cleaner inbox, and automatic collaboration with your newfound colleagues. It’s time to team up!

We offer a free 30-day trial of FogBugz – try it for free with a colleague at try.fogbugz.com. Each trial includes Kiln with Code Reviews, our source control product.

by Derrick Miller at March 16, 2015 02:47 PM

John Udell

What happened to customer service? Try Twitter

Last week my son was scheduled to take a train from Brattleboro, Vt., to Philadelphia, Pa. On the morning of his trip, he texted me this message:

I got a computerized message from Amtrak. They said there's an issue with my train. I don't know if it's late or cancelled or what's going on.

Had he checked Amtrak.com? Yes -- the story there: "Status unavailable due to service disruption."


I tuned into @amtrak on Twitter. There was no information about this incident, but the flow of recent messages about other travel delays suggested it might be useful to tweet this inquiry:


by Jon Udell at March 16, 2015 10:00 AM

March 15, 2015

Dave Winer

A Medium-size hole in Twitter

Some people call it "vendor sports"; I call it tea-leaf reading. I've been doing it for decades. I like to watch, and guess, based on incomplete information, what the various forces in the tech industry are doing.

A lot of times you can discern what big companies are doing, the same way astrophysicists discern the existence of black holes, by the effects they have on smaller more visible entities.

In this case, I think perhaps Twitter is getting ready to bust through the 140-character limit. Why? Because Medium appears to have given up on merging with them. How can I tell that? Well, because they went straight after Twitter, by introducing a timeline, and short Tweet-like messages, without the 140-character limit. This was Medium blinking. I was really surprised they did this. Up to that point I thought Medium was a juggernaut. That they were cleaning up. Apparently not. Apparently they needed to tap into greater growth potential. And if anyone is likely not to be overly respectful of Twitter's internal engine, it's the founder. He knows how the sausage factory really works, where the bodies are buried, etc. The stuff that we don't know.

I always thought Medium's manifest destiny was to be acquired by or to acquire Twitter (in the same way NeXT acquired Apple in 1997). The founder returns, bringing with him a feature that users are likely to adore. I said it this way: "There's a Medium-size hole in Twitter." I thought that was funny. Funny because it was so true.

I imagine, via tea-leaf-reading, that Dick told Ev, "Sorry dude, we looked at make vs buy, you want too much money, so we decided to build our own Medium clone inside Twitter." Much the way they said that Bit.ly wanted too much for their URL-shortener five years ago.

One way or the other, imho, the 140-char wall is not long for this world.

PS: I thought it would be interesting to tweet a link to this piece in Medium. I had to do it by hand of course, would be nice to hook it up to Radio3 via API. :thumbsup:

PPS: When will the 140-char wall come down? See this follow-up post.

March 15, 2015 06:57 PM

Mark Bernstein

Over The Precipice

Wikipedia’s Arbitration Committee, ever a font of hilarity, has formally responded to my Request For Clarification and unanimously decided that “Campus Rape” does indeed fall under Gamergate discretionary sanctions. (Link updated) So, apparently, does the biography of Lena Dunham, and so do sections of NFL pages that deal with abusive players and sexual assaults by players.

From here on out, every biography of a living woman who is disliked by the American Right will be subject to the Gamergate discretionary sanctions. Male biographies would only be involved if the men have notable gender controversies: a doctor who performs abortions, say, or who defends the rights of women. Any woman can be involved in a gender-related controversy at any time: just get people to talk about her appearance, or her sex life, or her abortion, and Bob’s your uncle.

Every biography of a lesbian, gay, transgender, or gender-queer person is apparently subject to Gamergate discretionary sanctions, since it’s obvious that these are potentially controversial and gender is involved somehow. Of course, people in heterosexual relationships are not covered as there’s no potential for controversy there, right?

This mess stems, ultimately, from Wikipedia’s anti-intellectual tradition. The Encyclopedia Anyone Can Edit™ can easily become the encyclopedia where everyone is an expert with no need to seek advice or to look stuff up. Hetero-normative? Androcentric? Oblivious?

Research needs to look more closely at Wikipedia’s capture by the right. I’ve been focusing here on sexism and Wikipedia, but Wikipedia’s roots lie in Libertarian techno-utopianism and I suspect it’s increasingly dominated by an alliance of libertarians and tea partiers. It’s a tricky question: Wikipedia is also plagued by right-wing zealots from Israel, Ukraine, and the Balkans. Of course, these aren’t separate in the way they used to be: Netanyahu’s big donor is also the big money behind Newt Gingrich.


Last week, a pair of Gamergaters attempted to out a software developer on a Wikipedia talk page. Their canard was published for eight hours before someone finally obliterated it. The perpetrators have not been sanctioned.

Meanwhile, I'm being read out of meeting for having the temerity – can you imagine? – to allude, during a discussion of an article about collusive editing of Wikipedia’s Gamergate page, to (get this) collusive editing of Wikipedia’s Gamergate page. I wrote:

“It is fascinating that the particular group of editors who recently were so eager to cite Gamergate wikis, weblogs, and Breitbart are reluctant to inform newcomers to this article of this important new essay. Why would that be?”

No doubt this was a very wicked thing to say, though I’m not sure how. I repeat it and the Wikipedians all gasp, grow pale and flutter their handkerchiefs. It might violate WP:MOMHESLOOKINGATMEFUNNY, except that's not an actual Wikipedia policy.

There are worse things: One of those worse things is outing -- the real thing, not the Wikipedia thing. It can ruin careers and cost lives. This is not a mere content dispute or a fight about infoboxes.

You can’t make this stuff up.


At yesterday’s St. Patrick’s Day breakfast, I had a brief chat with my Representative, Katherine Clark (D-MA), who recently spoke out about Gamergate and internet harassment (Mother Jones ❧ TechCrunch ❧ Jezebel ❧ The Hill). She makes an excellent point: it’s not just games, and accomplished women get this all the time. Even the Congresswoman was surprised at the Twitter-fed vitriol heaped on her for asserting that the government ought to protect the rights of women to be software developers if that's what they want to do.

The NY Police tried to edit Wikipedia to remove criticism of police shootings. (I’m inclined to suspect that someone tied to the Aaron Swartz prosecution did the same thing last year, incidentally.) This was only caught because the police were lazy, incompetent, and edited from work; if they'd done the work at home or at the NY Public Library, they'd have been praised.

Last week, Katherine Clark was at Selma. We are all at Selma again.

March 15, 2015 03:58 PM

March 13, 2015

Mark Bernstein

Shanley

Recently, Shanley Kane wrote a series of Twitter posts which explain what's going on at Wikipedia and why it matters. For ease of reading, I've reformatted them into conventional paragraphs and lightly edited, and I’ve moved this from my Wikipedia user page because the Gamergate crew, the armies of Mordor, complained of it.

They have been testing and perfecting a terrorism formula to ruin targets' lives, attempt to get them murdered, or drive them to kill themselves.

The first step is doxxing and death threats, which immediately destabilize the target's most fundamental sense of safety and security. This often forces the target to leave their home & immediately stop any career/family/social/community work as they work to re-establish safety. It creates the isolation needed for longer-term campaigns (100s of threatening, abusive, harassing messages per hour) to have max. impact. Under these conditions, the target's mental health will rapidly decline, and suicidal ideation, self-harm, anxiety and panic set in. This causes lasting trauma that will not only temporarily silence but forever change the target's sense of safety and support in speaking.

They then start digging in detail through your past to "find"/"invent" things to justify harassing you and get more ammo. This has the nice bonus effect of "proving" to your "community" that you deserve harassment and aren't worth supporting = more isolation. In white-male dominated, misogynist environments, it is incredibly easy to signal to a target's community that they don't deserve support. Thus, abandonment by their community is easily secured as the target suffers from the campaign to dig up their past.

That campaign signals to any stalkers, past partners, etc. that now is the time to GET THEM. This often means cyber sexual assault & DV, as well as outing of gender identity, sexuality, etc., which threaten any career and family support system that may exist. If *anyone* is defending you at this point, the same techniques are applied to supporters to make sure that stops. The message is clear: this target *will* be left completely isolated as we torture, terrorize and abuse them with impunity... or else.

This is a formula, it is known and documented, it has been tested and refined, and it is becoming more effective and scalable. So please, wake the fuck up and realize what we are dealing with here because for the 100th time: this is just the beginning. There are three ultimate goals: incite someone to murder the target, manipulate the cops to do it, or drive the target to suicide. Period.

March 13, 2015 03:06 PM

March 12, 2015

Fog Creek

Protecting the Maker’s Schedule From Chat

OK, we’ve all done it. You spotted a little issue, or couldn’t remember something. So you opened up Chat and quickly pinged a colleague about it. It wasn’t that important to you, but it was right there in the front of your mind, so you asked anyway. Harmless, right?

Well, maybe not. At least not for creatives. Developers, designers and anyone else who creates or builds things work in a different way to most people. They work on a ‘Maker’s Schedule‘, as Paul Graham puts it, based around units of time of half a day at least. Unlike managers, sales people, and other team members who have fractured days, working in short, hourly blocks around meetings, makers work best in blocks of uninterrupted time. For a maker, even a single meeting or other interruption during this creative time can prove disruptive. What may seem like a minor interruption, a single question pinged on chat, distracts them and takes them out of their flow.


Our solution is to file a case. Shocker – the creators of an issue tracker think cases are great. But along with private offices for developers and a healthy absence of meetings, it’s something that has been a critical part of our culture for a long time. And honestly, we just think it’s the solution that’s most respectful of a colleague’s time. By simply creating a case with the relevant details, the issue is documented and assigned to the maker in a manner that minimizes disruption. This allows them to prioritize and action the issue at a time that fits their workflow.

As a recent Creeker, I must admit this took some getting used to. I had cases for the smallest of tasks and even just questions. Cases didn’t seem as gentle as chat, but somehow harsher, maybe even passive-aggressive. I think it’s because by creating a case, it made it a Thing® rather than just a quick question. Cases are permanent, whereas chat feels temporary. But, in fact, if you raise a random question on chat then it’s doing the receiver a disservice. It suggests that what you want to discuss is more important than whatever they’re working on.

Chat can also be limbo for information. Whoever you pinged on chat may have read it, but this doesn’t mean they were really paying attention. The onus is on the recipient to handle the information received. So often, you just get back a lacklustre response and the problem gets forgotten about, lost amongst the Cat Gifs and integration notifications.

What’s more, a problem raised in a 1-to-1 chat prevents others from working on it too. So it can increase the communication cost if it has to be passed on to someone else. Chat also seems to demand an immediate response. Sure, you can set your status to ‘busy’, but we don’t always remember to do that, and how often is it noticed or respected anyway?

This isn’t to say don’t use chat. We use it all the time across the company. It’s more to do with using the right form of communication for the task or question. Do you really need an immediate response, and from that one person? The simplest thing for you to do at that moment isn’t without its consequences for others. So, don’t ping me. File a case.

by Gareth Wilson at March 12, 2015 12:27 PM

Blue Sky on Mars

Even More Emoji Abuse 🚧🚨

In 2012, I wrote a post titled Abusing Emoji in iOS and Your Mac 👑💩.

People seemed to like it, since I saw a lot more wifi networks and Macs on the network whose names consisted of emoji. If that's not progress for our society, I don't know what is.

Home directory emoji

After all my talks over the years, I somehow had never given a lightning talk until last night at SydPHP's meetup in Sydney. I knew I needed to go big and bring something huge to the table for my first time. Instead, I settled on giving a shitty one about emoji.

Language and Region

The first half of the talk comes from the aforementioned post; the second half is culled from my jetlag-addled brain. That half includes emoji at your local florist, trolling your British friends who never use emoji, and my new favorite, the Languages hack that lets you install emoji across all of OS X.

Slides

Video

I know it got recorded; I'm going to see if I can sneak the video in here at some point in the future.

If you're reading this four years later, then it's a safe bet that I couldn't track the video down, but don't mention it to anyone so we can trick the next person who comes along and reads this.

March 12, 2015 12:00 AM

March 11, 2015

Dave Winer

What is lock-out?

As users, we're all familiar with the idea of lock-in.

A service starts. They offer something attractive. We use it. And later find out that we can't move without leaving behind everything we created there.

That's lock-in.

Lock-out applies to developers.

A service starts, offers something attractive to users. But it has no API. It can't share its data with any other application. It can't receive data either. If a user wants to create something, they have to use their editor. And if they like the editor, they can't use it to create something that lives elsewhere. The creation and serving are bound together in a closed system. No other software can enter.

A system with APIs is part of a network of software. One without APIs stands alone. If you get an idea for a feature that would make a world of difference, you can't implement it unless they offer you a job, and once inside, let you do it. And don't cancel the project before it's done. And it isn't taken over by someone else in an organizational shift. There's a reason big companies don't create new stuff: they are subject to rules that individual creators aren't. They aren't free to try new ideas out.

I guess all industries have lock-out. The moneyed people own everything, and in order to create, you have to fit in. And most really creative people don't.

Then you have systems that are not locked-out, like Twitter and Facebook, but are subject to revision at any moment. This is imho better than not having APIs at all. At least the world can get a glimpse at the idea before it's shut down. In a fully locked-out system, new ideas are stillborn.

The good news is that costs on the net today are so low that there really is no technological or economic reason for lock-out. And so many developers want to make open source contributions. It's really just a matter of organizing the work to create open alternatives to the locked-up systems created by the tech industry. We need user-oriented frameworks for free systems. This is the one area where open rebellion is not only legal, it's encouraged. At least if you listen to and believe the proponents of free markets. We're going to find out.

I wrote a story on my liveblog on Monday about my personal experiences with lock-out.

Also see an offer I made to Medium when they started, in 2012. They could lead a new wave of silo-free writing for the web.

March 11, 2015 04:03 PM

March 10, 2015

Mark Bernstein

Prune

This is impressive writing in the guise of yet another restaurant recipe book. Hamilton has written an intelligent and sympathetic response to Kitchen Confidential, a delightful portrait of a chef masquerading as a cookbook. This looks like a collection of recipes, but the recipes are written (and the book designed) not as if they’re adapted for the home cook, but as if they’re odd sheets of instructions to be handed to new line cooks. There are lots of canny and charming words of warning and advice – including several mentions of shortcuts that we wouldn’t take if we were “a real restaurant.”

There’s an entire chapter on garbage: how to use up food that even professional kitchens would throw away. (Example: sardine heads and bones: season, deep fry, and send ’em out to guests who are chefs, line cooks, or other professionals who’ll understand. These are not to be wasted on mere VIPs.)

Cookbooks are usually meant to be instructive; here, we’re not always offering the instructions we’d expect. In prepping the paté for a bar snack sandwich, the recipe advises that for a half batch one should make a cardboard and foil partition so you can use half the paté pan, and if you don’t know how, you should “find me and we’ll do it together.” Yes, chef. In prepping a dish based on lamb-filled wontons, the recipe calls for grabbing any intern or trailer in the house that night, because the prep is such a bitch. You don’t get this stuff from Joy of Cooking.

Recipes are scaled for service — but that often works out conveniently to 8, which is to say a dinner party, and we all know division. There’s some fun reverse snobbery at work here too: the “duck liver garbure” is made with foie gras (and, we’re warned, is not really a garbure so don’t call it that if we get a job someday in a real restaurant).

March 10, 2015 05:39 PM

Fog Creek

Working Effectively with Unit Tests – Interview with Jay Fields



 

In this interview with Jay Fields, Senior Software Engineer at DRW Trading, we discuss his approach to writing maintainable Unit Tests, described in his book ’Working Effectively with Unit Tests’. We cover how to write tests that are maintainable and can be used by all team members, when to use TDD, the limits of DRY within tests and how to approach adding tests to untested codebases.

For further reading, check out his blog where he writes about software development.

 

Content and Timings

  • Introduction (0:00)
  • About Jay (0:30)
  • Writing Unit Tests in a Team (2:27)
  • DRY as an Anti-Pattern (3:37)
  • When to use TDD (5:12)
  • Don’t Strive for 100% Test Coverage (6:25)
  • Adding Tests to an Untested Codebase (7:50)
  • Common Mistakes with Unit Tests (10:10)

 

Transcript

Introduction

Derrick:
Jay Fields is the author of Working Effectively with Unit Tests, the author of Refactoring: Ruby Edition, and a software engineer at DRW Trading. He has a passion for discovering and maturing innovative solutions. He has worked as both a full-time employee and a consultant for many years. The two environments are very different; however, a constant in Jay’s career has been how to deliver more with less. Jay, thank you so much for joining us today. We really appreciate it. Can you share a bit about yourself?

About Jay

Jay:
Thanks for having me. My career has not really been focused in a specific area. Every job I’ve ever taken has been in a domain I don’t know at all, and a programming language, which I don’t really know very well. Starting with joining ThoughtWorks and being a consultant was a new thing for me. I was supposed to join and work on C# and ended up in the Ruby world very quickly. I did that for about five years, and then went over to DRW Trading to do finance, again something I’ve never done, and to do Java, something I had no experience with. That worked okay, and then I quickly found myself working with Clojure. It’s been interesting always learning new things.

Derrick:
Picking up on the book Working Effectively with Unit Tests, most developers now see unit testing as a necessity in software projects. What made you want to write a book about it?

Jay:
I think it is a necessity in pretty much every project these days, but the problem is really a lack of literature beyond the intro books. You have the intro books that are great, and I guess we have the xUnit Patterns book, which is nice enough as a reference. It’s not very opinionated, and that’s fine; we need books like that also. But if I were to say, I prefer this style of testing, and it’s very similar to Michael Feathers’ approach to testing, there’s no literature out there that really shows that. Or, I prefer Martin Fowler’s style of testing; there’s no literature out there for that. I really don’t know of any books that say, “Let’s build upon the simple idea of unit testing, and let’s show how we can tie things together.” You can see some of that in conference discussions, but you really don’t see extensive writing about it. You see it in blog posts, and that’s actually how my book started. It was a bunch of blog posts that I had written over 10 years that didn’t really come together. If I were to say to someone, “Oh yeah, just trawl my blog for 10-year-old posts”, they’re not really going to learn a lot. I thought that if I could put something together that reads nicely, that’s concise, people could see what it looks like to pull all the ideas together.

Writing Unit Tests in a Team

Derrick:
In the book you say, “Any fool can write a test that helps them today. Good programmers write tests that help the entire team in the future.” How do you go about writing such tests?

Jay:
It’s really tough. I don’t see a lot written about this either, and I think it’s a shame. I think you first have to start out asking yourself, “Why?” You go read an article on unit testing, and you go, “Wow! That’s amazing! This will give me confidence to write software.” You go about doing it, and it’s great, because it does give you confidence about the software you’re writing. But the first step a lot of developers don’t take is thinking, “Okay, this is great for writing. It’s great for knowing that what I’ve just written works, but if I come back to this test in a month, am I going to even understand what I’m looking at?” If you ask yourself that, I think you start to write different tests. Then once you evolve past that, you really need to ask yourself, “If someone comes to this test for the first time, goes to a line of code in this test, and reads it, how long is it going to take for them to be productive with that test?” Are they going to look at that test and say, “I have no idea what’s going on here”? Or is it going to be obvious that you’re calling a piece of the domain that hopefully everyone on the team knows, and, if they don’t, that it follows a pattern they can pick up pretty easily? I think it’s a lot about establishing patterns within your tests that are focused on team value and on maintenance of existing tests, instead of focused on getting you to your immediate goal.
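To make that concrete, here is a minimal JUnit 4 sketch in that spirit. It is an editorial illustration rather than an example from the book, and the tiny domain class is hypothetical, inlined so the file stands alone:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical domain code, inlined so the example compiles on its own.
class Invoice {
    private final double taxRate;
    private int subtotal = 0;
    Invoice(double taxRate) { this.taxRate = taxRate; }
    void addLineItem(int price) { subtotal += price; }
    double total() { return subtotal * (1 + taxRate); }
}

public class InvoiceTest {
    @Test
    public void totalIncludesLineItemsAndTax() {
        // Everything a reader needs is on screen: no setup method,
        // no shared fixture hiding elsewhere in the file.
        Invoice invoice = new Invoice(0.10);
        invoice.addLineItem(100);
        assertEquals(110.0, invoice.total(), 0.001);
    }
}

The whole arrange-act-assert story fits in one method, which is what lets the newest person on the team read a failing test and act on it.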

DRY as an Anti-Pattern

Derrick:
You also mentioned that applying DRY, or don’t repeat yourself, for a subset of tests is an anti-pattern. Why is this?

Jay:
It’s not necessarily an anti-pattern; I think it’s a tradeoff, and I think people don’t recognize that nearly enough. You’re a programmer on the team. You didn’t write the test. The test is now failing. You go to that test, and you look at it, and you go, “I don’t know what’s going on here. This is not helpful at all. I see some field that’s magically being initialized. I don’t know why it’s being initialized.” At least if you’re an experienced programmer, you hopefully know to go look for a setup method. But imagine you have some junior guy, just graduated, a fantastic programmer. He’s just not really familiar with xUnit frameworks. Maybe he doesn’t know that he needs to look for a setup method, so he’s basically stuck. He can’t even help you at that point without asking someone else for help. DRY’s great if you can apply it on a local scale within a test. It’s fantastic if you can apply it on a global scale, across the whole suite, so that everybody’s familiar with whatever you’re doing; that helps the team too. But if you’re saying that this group of tests within this file behaves differently than those tests up there, you’re starting to confuse people. You’re taking away some maintainability, and that’s fine, maybe you work by yourself so no one’s confused because you did it, but recognize that tradeoff.
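And here is a minimal sketch of the failure mode Jay describes (again a hypothetical JUnit 4 illustration, not code from the interview): the assertion depends on a field initialized somewhere else entirely.

import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical domain code, inlined so the example compiles on its own.
class Customer {
    final String tier;
    Customer(String tier) { this.tier = tier; }
}

class Discount {
    double apply(Customer customer, double price) {
        return "gold".equals(customer.tier) ? price * 0.9 : price;
    }
}

public class DiscountTest {
    private Customer customer; // "magically" initialized, from a new reader's point of view

    @Before
    public void setUp() {
        // Anyone landing on the failing test below has to know to look here first.
        customer = new Customer("gold");
    }

    @Test
    public void goldCustomersGetTenPercentOff() {
        assertEquals(90.0, new Discount().apply(customer, 100.0), 0.001);
    }
}

Inlining new Customer("gold") into the test method costs one repeated line and buys back exactly the readability Jay is talking about.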

When to use TDD

Derrick:
You’re a proponent of the selective use of Test-Driven Development. What type of scenarios are helped by TDD, and when should a developer not apply such techniques?

Jay:
It’s a personal thing. I can definitely give you my answer, but I would say everybody needs to try TDD, just try it all the time. Try to do it 100% of the time, and I think you’ll very likely find that it’s extremely helpful for some scenarios and not for others. Maybe that’ll differ by person, so everyone should give it a try. For me personally, I’ve found that when I know what I want to do, if I have a pretty mature idea of what needs to happen, then TDD is fantastic, because I can write out what I expect, then make the code work, and have confidence that it works okay. The opposite scenario, where I find it less helpful, is when I’m not quite sure what I want, so writing out what I want is going to be hard and probably wrong. I find myself in what I think is a wasted cycle of writing the wrong thing, making the code do the wrong thing, realizing it’s wrong, writing the new thing that I expect, which is probably also wrong, making the code do that, and repeating that over and over, asking myself, “Why do I keep writing these tests that are not helpful? I should just brainstorm, or play around with the code a little bit, and then see what the test could look like once I have a good idea of the direction it is going.”

Don’t Strive for 100% Test Coverage

Derrick:
You say you’re suspicious of software projects approaching 100% test coverage. Why is this, and why is 100% coverage not necessarily a goal we should all strive for?

Jay:
Earlier on I thought 100% was a good idea, because I think a lot of people did. I remember when Relevance used to write in their contracts that they would do 100%, and I thought, “Man, that’s really great for them and their clients.” Then you start to realize you need to test things like, say you’re writing in C# and you have an automatically generated setter: do you really want to test that? I think all of us trust that C# is not going to break that functionality in the core language, but if you put that field in there, then you have to test it if you want 100% coverage. There will be cases where you would actually want to do that. Let’s say you’re using some library, and you don’t upgrade that library very often, and even though you trust it, maybe you’re writing for NASA or for a hospital system, something where if it goes wrong, it’s catastrophic. Then you probably want to write those tests. But if you’re building some web 2.0 startup, not even sure if the company is going to be around in a month, and you’re writing a Rails app, do you really want to test the way Rails internals work? Because you have to, if you want 100% coverage. If you start testing the way Rails internals work, you may never get the product out there. You’ll have a great test suite for when your company runs out of money.
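As a hypothetical Java analogue of the auto-generated setter case (not from the interview), this is the kind of test that chasing 100% coverage pushes you to write:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// A trivial bean: nothing here can realistically break on its own.
class Person {
    private String name;
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

public class PersonTest {
    // This test can only fail if the language itself does, yet 100%
    // coverage demands it once the field exists.
    @Test
    public void setterStoresName() {
        Person person = new Person();
        person.setName("Ada");
        assertEquals("Ada", person.getName());
    }
}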

Adding Tests to an Untested Codebase

Derrick:
So that’s what’s wrong with too many tests. What about a code base with no tests? How can you approach getting test coverage on an untested code base?

Jay:
I think the focus for me is really return on investment. Do you really need to test everything equally when the business value is not the same? Let’s say, for instance, you’re an insurance company. You want to sell insurance policies. So you need a customer’s address, and you probably need a social security number to look them up, some type of unique key, and after that, you just want to charge them. Maybe you need their billing details, but you don’t really care if you got their name wrong, you don’t really care if you got their age wrong. There are so many things that aren’t really important to you. As long as you can keep sending them bills, keep getting paid, and find the customer when you need to, the rest of the stuff is not as important. It’s nice, whenever you send them the bill, to make sure the name is correct, but it’s not necessary for your software to continue working.

When I’m writing tests, I focus first on the things that are mission critical. If the software can’t succeed without a function or a method working correctly, then you probably need some tests around that. After that you start to make tradeoffs, basically looking at it and figuring out, “Well, I want to get the name right. If I get the name wrong, what’s the cost?” Getting the name wrong, I’m not sure there’s much of a cost other than maybe an annoyed customer who calls up and says, “Can you fix my name?” So maybe you have some call center support. I’m guessing that the call center is going to be cheaper than the developer time. So do we want to write a test with a regex for someone’s name, and now we need UTF support, and now we need to support integers because someone put an integer in their name? You get into this scenario where you are maintaining the code and the tests, and maintaining the tests is what’s stopping you from getting a call center call. It’s probably not a good tradeoff. I just look at the return on investment of the tests. Every bit of code needs to be maintained, so if I have too many tests, then I need to delete some, because I’m spending too much time maintaining tests that aren’t helping me. If I have not enough tests, then it’s very simple: just start to write some more. I tend to do that whenever a bug comes in for something that’s critical. Hopefully you caught it before then, but occasionally they get into production, and you write tests around that. I always think to myself, whenever the bug comes in, what was the real impact here?
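
As a sketch of that “write the test when the bug actually arrives” habit (the billing bug and function here are invented for illustration):

def monthly_premium(annual_premium):
    # The fix: this used to use integer division, which truncated cents.
    return round(annual_premium / 12, 2)

def test_regression_monthly_premium_keeps_cents():
    # Reproduces the reported production bug: $1000.00/year billed as $83.00.
    assert monthly_premium(1000.00) == 83.33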

Common Mistakes with Unit Tests

Derrick:
What are some of the common mistakes you see people making when writing unit tests?

Jay:
I think the biggest one is just not considering the rest of the team, to be honest. It’s really easy to do TDD. You write a test, and then you write the associated code, and you just stop there. Or you do that, and then you apply your standard software development patterns, so you say, “How can I DRY this up? How can I apply all the other rules that have been drilled into me for production code?” What’s really important is understanding the maintenance side: understanding that what helps you write the code doesn’t necessarily help you maintain the code. What I often find myself doing, actually, is writing a test until I can develop the code, so I know that everything works as I expect, then deleting that test and writing a different test that I know will help me maintain it. I think the largest mistake people make is they don’t think about the most junior member of the team. Think about the very talented junior member on your team who joined not that long ago. Are they going to be able to look at this test and figure out where to go from there? If the answer’s no, then that might not be the best test you could write for the code.
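
One way to picture the difference, as a hypothetical example rather than anything from the book: the first test below is the terse scaffold that drove the code; the second says the same thing in a way a new teammate can follow.

def tax(subtotal, state):
    rates = {"CA": 0.0725}  # illustrative rate table, not real tax advice
    return round(subtotal * rates[state], 2)

# The scaffold test that helped write the code: quick, but opaque later.
def test_calc():
    assert tax(100, "CA") == 7.25

# The maintenance test: same assertion, but it explains itself.
def test_california_sales_tax_is_7_25_percent_of_subtotal():
    subtotal_dollars = 100
    expected_tax = 7.25
    assert tax(subtotal_dollars, "CA") == expected_tax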

Derrick:
Beyond your book, can you recommend some resources for developers interested in learning more about writing effective tests?

Jay:
I think there are great books that help you get started. The Art of Unit Testing is one; there’s also the xUnit Test Patterns book, which is really good. The problem is, I really don’t think there’s much after that. At least I haven’t found much. Kevlin Henney has done some great presentations about test-driven development. I personally really like Martin Fowler’s writing, Michael Feathers’ writing, and Brian Marick’s, but I don’t know of any books. I really think there’s room for some new books, and hopefully people will write some more, because unit testing is only going to become more important. It’s not going away; it’s not like people think this is a bad idea. Everybody thinks this is a great idea. They just want to know how to do it better.

Derrick:
Jay, thank you so much for joining us today. It was a pleasure.

Jay:
Yeah, it was great! Thank you very much for your time.

by Gareth Wilson at March 10, 2015 11:43 AM

Ian Bicking

A Product Journal: As A Working Manager

One of the bigger changes going from engineer to manager was to redefine what I meant by the question: how are we going to do this? As an engineer I would deconstruct that question to ask what is the software we need to build, and the technical barriers we need to remove, to achieve our goals. As a manager I would deconstruct that question to ask what is the process by which we achieve our goals.

When I wear my manager hat and ask “how are we going to do this?” I get a little frustrated when I get the answer “I don’t know.” But that’s unfair – there are always problems to which we do not know the answer. What makes me frustrated is when the answer comes too quickly, when someone says “I don’t know” because they are missing something they feel they need in order to come up with an answer. I don’t know because we have to write more code before we know if the idea is feasible. I don’t know because the decision is someone else’s, and so on.

You know! If the decision is someone else’s, then the answer to the question is: we are going to do this by asking that other person what they want and how they are going to make that decision. If we don’t know if the idea is feasible, then the answer to the question is: we are going to do this by exploring the feasibility of this technique, and doing another iteration of planning once we know more. “I don’t know because…” is fine because it is an answer of sorts, it lets the team make an answer in the form of a process. “I don’t know.” – ended with a period – is even okay for a moment, if you treat it as meaning “I don’t know so we are going to do this by learning.” It’s the “I don’t know, let’s move on” that I don’t like.

But I’m being a little unfair. It’s my job as a manager to answer at the process level. While I try very hard not to pigeonhole people, maybe I should also work harder at accepting when people establish bounds to their role. When you are trying to produce it can make sense to stay focused, to resist going meta. When you are working in a team, you should rely on the diverse skills of your teammates to let go of certain parts of the project. It can be okay to go heads-down. (Sometimes; and sometimes everyone on the team must lift their heads and respond to circumstance.)

This is a long-winded way of saying that I appreciate more of the difference in perspective of an engineer and a manager. It’s hard to hold both perspectives at once, and harder still to act on both.

In my new project I am returning to development, and entering into the role of working manager, an odd way to say that I am performing the same tasks that I am also managing. I cut myself off from programming when I started management so that I would not let myself be distracted from a new role and the considerable learning I had to do. Returning to programming, I can tell I was right to do so.

Moving between these two mindsets, and two very different ways of working, is challenging. In both I want to be proactive, but as a manager towards people, and as an engineer towards code. With people I’m investing my time in small chunks, trying to keep a good velocity of communication, watching for dropped balls, and the payoffs are largely indirect and deferred. With code it takes time to navigate my next task, I want to focus, I’m constantly trying to narrow that focus. And the payoff is direct and immediate: working code. This narrowed focus is a way to push forward much more reliably, so long as I know which way is forward.

But I’m a working manager. Is now the right time to investigate that odd log message I’m seeing, or to think about who I should talk to about product opportunities? There’s no metric to compare the priority of two tasks that are so far apart.

If I am going to find time to do development, I am a bit worried that I have only two options:

  1. Keep doing programming after hours
  2. Start dropping some balls as a manager

I’ve been doing a little of both. To mitigate the effect of dropping balls I’ve tried my best to be transparent about it: “I am not doing my best work on X, because I’m trying to do my best work on Y.” But I won’t really know if this has worked until later; turnaround on relationship feedback takes a while.

An aside: I’ve been learning a bit about Objectives and Key Results, a kind of quarterly performance analysis structure, and I particularly appreciate how it asks people to attempt to achieve 70% of their identified goals, not 100%. If you commit to 100% then you’ve committed yourself to a plan you made at the beginning of the quarter. You’ve erased your agency to prioritize.

Anyway, onward and upward, and wish me luck in letting the right balls drop.

by Ian Bicking at March 10, 2015 05:00 AM

March 09, 2015

Fog Creek

dev.life – Interview with Hakim El Hattab


In dev.life, we chat with developers about their passion for programming: how they got into it, what they like to work on and how.

Today’s guest is Hakim El Hattab, a front-end developer and Co-founder of Slides, an online presentation tool. He previously worked at Qwiki and Squarespace, and he publishes experiments exploring interaction and visual effects on his site.

Location: Stockholm, Sweden
Current Role: Co-founder of Slides

How did you get into software development?

I think I was twelve years old when a computer first entered our home. It was a Macintosh Classic and I lost many hours to some of the great games it came pre-installed with. Five years later I started building HTML sites with a friend. By “building” I am referring to the process of slicing a Photoshop design into bitmaps, using Dreamweaver to piece those bitmaps together in a huge table and finally uploading the multi-megabyte spectacle to a free host.

After that, I went on to study video post-production and animation while teaching myself Flash and ActionScript on the side. Eventually, the things I had taught myself led to a job opportunity as a Web Developer and I decided to drop out of school.


Tell us a little about your current role

Last year I departed from my job as Lead Interface Engineer at Squarespace in NYC and moved back home to Sweden to go full-time on Slides. Slides is a service for creating and sharing presentations that I co-founded together with Owen Bossola. It’s based on reveal.js, an open-source HTML presentation framework that I’ve been working on for a few years now.

My typical day starts with catching up on support tickets and email, followed by product work, which includes a healthy mix of code and design. I work from home and try to make use of the flexibility that comes with that by taking a longer lunch break and heading outside with my wife and daughter. In the afternoon when Owen wakes up in NYC, we catch up on any current topics via chat.

We offer the ability to export Slides presentations to PDF, but this functionality has been plagued with issues since its release. We were originally using PhantomJS on a Linux server, but the output suffered from poor font rendering, missing web fonts and broken links.

After testing a number of options, we recently decided to switch to generating PDFs using a slightly modified version of wkpdf on an OS X server. This provides excellent font rendering and is powered by a more recent version of WebKit.
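
As a rough illustration of what that export step can look like, here is a small Python wrapper around a wkpdf-style command. The --source and --output flags reflect wkpdf’s documented interface as I understand it, and the URL and paths are placeholders, not the actual Slides code.

import subprocess

def export_pdf(presentation_url, output_path):
    # Shell out to wkpdf on the OS X box; check=True raises if it fails.
    subprocess.run(
        ["wkpdf", "--source", presentation_url, "--output", output_path],
        check=True,
    )

export_pdf("http://localhost:8000/deck/export", "deck.pdf")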

When are you at your happiest whilst coding?

I enjoy coding the most when I’m going fast. Nothing compares to the feeling of a fresh start. A new project and a blank document. Unencumbered by legacy.


What is your dev environment?

My setup is pretty lightweight, I use a 13” MacBook Air and do all my coding in Sublime Text. An editor plugin that I really like is Git Gutter. It indicates which lines of code have changed locally in a very unobtrusive way.

One app I’ve been really happy with lately is f.lux. It adjusts the color temperature of your display depending on the time of day to make it easier on the eyes. It’s particularly useful when you live up north, where it’s dark for most of the day.

Having survived one too many back-breaking chairs I recently decided to get myself a Herman Miller Embody. Highly recommended.

What are your favorite books about development?

I had a great laugh reading Peter Welch’s “Programming Sucks”.

What technologies are you currently trying out?

Service Workers are looking very promising. I’d love to experiment and see if that’s one way we could support offline editing in Slides.

When not coding, what do you like to do?

I recently became a father, so lately I’ve been spending a lot of time with my daughter. Besides that, I love photography and try to find time for some video games when the rest of the family is asleep.

What advice would you give to a younger version of yourself starting out in development?

Finishing and releasing projects is more important than learning every language and framework.

 

Thanks to Hakim for taking the time to speak with us. Have someone you’d like to be a guest? Let us know @FogCreek.

 

Previous dev.life Interviews

Bob Nystrom
Brian Bondy
Jared Parsons
Salvatore Sanfilippo

by Gareth Wilson at March 09, 2015 01:38 PM

John Udell

Why mobile apps are a step backward

A few years ago I attended a conference organized by Brian Fitzpatrick. Here's part of the email Brian sent to attendees:

This year we've worked out a discount rate of $119 a night with the Marriott which is about two blocks away from the Innovation Center. You can make a reservation at this rate by following this link: http://www.marriott.com/hotels/travel/chidm-chicago-marriott-at-medical-district-uic/?toDate=1/22/12&groupCode=ORDORDA&fromDate=1/20/12&app=resvlink

Beautiful! Rather than describing where to go on the Marriott site and what information to plug into the form when I got there, Brian composed a URL that encapsulated that data and transported me into Marriott's app in the right context.
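
As a sketch, here is how you might compose such a context-carrying link in Python. The parameter names come straight from the Marriott URL above; urlencode will percent-escape the slashes in the dates, which is an equally valid form of the same URL.

from urllib.parse import urlencode

base = "http://www.marriott.com/hotels/travel/chidm-chicago-marriott-at-medical-district-uic/"
params = {
    "fromDate": "1/20/12",
    "toDate": "1/22/12",
    "groupCode": "ORDORDA",
    "app": "resvlink",
}

# One URL carries the dates and group code, so the recipient lands
# in the reservation flow with the form already filled in context.
print(base + "?" + urlencode(params))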


by Jon Udell at March 09, 2015 10:00 AM

March 08, 2015

Dave Winer

The next version of myword.io

I've had it in the back of my head for a while that I should just go all the way with a simple browser-based blog post editor, that with a single click would produce a modern essay page, like the ones Medium puts out. After all, when I complain that people post their essays in a locked-up silo, what are the easy alternatives?

Tumblr is trying to get there, apparently, but there's still a lot more to Tumblr than there is to Medium. So it's not immediately obvious how to substitute one for the other. And immediate obviousness is required for the limited attention we have these days. Minuscule attention, actually.

Last Monday I decided to spend a few days taking myword.io to the next step. To add an editor that publishes stories to their own static pages. I have a very good back-end, written in Node.js, that's all set up to do this. I started with the MacWrite demo program, and it took five days to get all the way through it. I can now publish from the browser to a static web page, with simplicity and beauty.

Now, the editor is not as good as Medium's. Mine doesn't have comments, for example. But this will serve as a way for people to relatively easily participate in the open web, without locking into a silo. And if it proves popular, there can be more versions.

Not sure when I will open this up, but I wanted to put it out there that it is coming.

PS: As always this will likely not get covered by the tech press, so if you want people to know about it, you'll have to tell them.

March 08, 2015 06:34 PM

Mark Bernstein

Awesome

Obama at Selma (must-read: Fallows’s “Finally I Hear a Politician Explain My Country Just the Way I Understand It”)

That’s what America is. Not stock photos or airbrushed history, or feeble attempts to define some of us as more American than others. We respect the past, but we don’t pine for the past. We don’t fear the future; we grab for it. America is not some fragile thing. We are large, in the words of Whitman, containing multitudes. We are boisterous and diverse and full of energy, perpetually young in spirit. That’s why someone like John Lewis at the ripe old age of 25 could lead a mighty march.

Oh my: “a mighty march.” We all know who that sounds like.

The American instinct that led these young men and women to pick up the torch and cross this bridge, that’s the same instinct that moved patriots to choose revolution over tyranny. It’s the same instinct that drew immigrants from across oceans and the Rio Grande; the same instinct that led women to reach for the ballot, workers to organize against an unjust status quo; the same instinct that led us to plant a flag at Iwo Jima and on the surface of the Moon.

Men and women, torch and bridge, revolution and tyranny, ocean and river, women and labor, the world war and the moon: we know who that sounds like. That’s Lincoln. But the contraction — the contraction is purest Reagan: even now, Obama reaches out for those with ears to hear.

And what Republican today would dare mention that river?

The speech is filled with presidential echoes.

When it feels the road is too hard, when the torch we’ve been passed feels too heavy, we will remember these early travelers, and draw strength from their example.

Hello, Jack.

And once more, we close at the very beginning:

We honor those who walked so we could run.  We must run so our children soar.  And we will not grow weary.  For we believe in the power of an awesome God, and we believe in this country’s sacred promise.

We worship an awesome God in the blue states….

March 08, 2015 05:16 PM

March 07, 2015

Greg Linden

Data Maven from Crunchzilla: A light introduction to statistics

Crunchzilla just launched Data Maven!

Data Maven from Crunchzilla is a light introduction to statistics and data analysis.

For too many teens and adults, if they think about statistics at all, they think it's boring, tedious, or too hard. Too many people have had the experience of trying to learn statistics, only to get bogged down in probability, theory, and math, without feeling that they were able to do anything with it.

Instead, your first exposure to statistics should be fun, interesting, and mostly easy. Data Maven from Crunchzilla is more of a game than a tutorial. To play, you answer questions and solve problems using real data. Statistics is your tool, and data provides your answers. At the end of Data Maven, you'll not only know a bit about statistics, but also maybe even start to think of statistics as fun!

Like programming, statistics and data analysis are tools that make you more powerful. If you know how to use these tools, you can do things and solve problems others cannot. Increasingly, across many fields, people who understand statistics and data analysis can know more, learn more, and discover more.

Data Maven is not a statistics textbook. It is not a statistics class. It is an introduction. Data Maven demystifies statistics. Teens and adults who try Data Maven build their intuition and spark their curiosity for statistics and data.

Please try Data Maven yourself! And please tell others you know who might enjoy it too!

by Greg Linden (noreply@blogger.com) at March 07, 2015 07:09 PM

Giles Bowkett

Why Ron Jeffries Should Basically Just Learn To Read

Dear Agile, Your Gurus Suck


Like any engineer, I've had some unproductive conversations on the internet, but I caught a grand prize winner the other day, when Agile Manifesto co-signer Ron Jeffries discovered Why Scrum Should Basically Just Die In A Fire, a blog post I wrote months ago.

I tried to have a civil conversation with him, but failed. It was such an uphill climb, you could consider it a 90° angle.

I answered Mr. Jeffries's points, but he ignored these remarks, and instead reiterated his flawed criticisms, at length, on his own blog.

If you're the type who skips drama, I'm sorry, but you want a different post. I wrote that post, and published it on my company's blog. I got some better criticism of my anti-Scrum blog posts, and I responded to it there. I also heard from Agile luminaries like Mr. Jeffries, but unfortunately, none of the insightful criticism came from them.

Because of Mr. Jeffries's high profile, I have to respond to his "rebuttal." But he failed at reading my post, and again at reading my tweets, so my rebuttal is mostly going to be a Cliff's Notes summary of previous remarks.

But let's get it over with.

It's Not Just Me


RSpec core team member Pat Maddox said this after seeing the Twitter argument:


"Ward" is Ward Cunningham – another Agile Manifesto co-signer, the inventor of the wiki, and one of the creators of Extreme Programming. I'm not a huge fan of methodologies, but in my opinion, Extreme Programming beats Scrum. Even if you disagree, for people in Agile, yes, a Ward Cunningham recommendation ought to be solid, every time.

Here's an anonymous email I got from somebody else who witnessed the flame war:
We had a local “Scrum Masters” meet up here in [my home town] maybe 5 years ago. At the time my wife was trying to figure out a career transition for herself and a friend recommended becoming a Scrum Master. I’d been doing Scrum for a couple of years and I was friends with the organizers so I sent her over to that meet up. She came back and told me it was an epic fail and Ron Jeffries was the most unprofessional, rude speaker that she’d ever seen. In hindsight, it was probably a good thing that it turned her off Scrum completely.

Ron Jeffries was rude to most of the people asking questions about how to implement Scrum. One person in particular mentioned having an offshore team and some of the challenges involved in making Scrum work with a geographically distributed development team.

Ron Jeffries’ response was “Fire all the folks not in your office. If you’re trying to work with people offshore, you’ve already failed. Next question."

Even if this was 2010 and the prevailing thought in our industry was that offshoring and remote work didn’t work, his response was just tactless. He had similar short, rude responses to nearly every question that night and it resonated negatively with everyone there. I don’t think they ever asked him back there again.
To his credit, Mr. Jeffries starts out his blog post by acknowledging that he could have been more polite with me:
My first reaction was not the best. I should have compassionately embraced Giles’s pain... But hey, I’m a computer programmer, not a shrink.
This "apology" implies my expectation of basic common courtesy is equivalent to mental illness.

Mr. Jeffries then stumbles through several failures at basic reading comprehension.

Basic Context Fail


We'll start simple. In my post, I wrote:
I've twice seen the "15-minute standup" devolve into half-hour or hour-long meetings where everybody stands, except for management.
I then gave examples.
At one company...

At another company...
Mr. Jeffries used the singular every single time he described my teams, my projects, or the companies I worked for:
Things were not going well in planning poker and Giles’s team did not fix it...

Giles’s team’s Product Owner...

Why didn’t the time boxes do that for Giles’s team?

...I’m sad that Giles suffered through this, and doubtless everyone else on the project suffered as well...

And I’m glad Giles remarks that Scrum does not intend what happened on his team. It does make me wonder why this article wasn’t entitled “Company XYZ and Every One of its Managers Should Just Die in a Fire.”
So he didn't even get that when I said I was talking about two companies, I was talking about two companies.

Advanced Context Fail


In fairness, you only had to miss the point of four or five paragraphs in a row to miss that detail, and my post contained many paragraphs. However, the point of those paragraphs was that you see Scrum practices decay at multiple companies because Scrum practices are vulnerable to decay. They have a half-life, and it is brief.

Mr. Jeffries missed this point. He also failed to recognize hyperbole which I'd thought was glaringly obvious. I said in my post:
In fairness to everybody who's tried [planning poker] and seen it fail, how could it not devolve? A nontechnical participant has, at any point, the option to pull out an egg timer and tell technical participants "thirty seconds or shut the fuck up." This is not a process designed to facilitate technical conversation; it's so clearly designed to limit such conversation that it almost seems to assume that any technical conversation is inherently dysfunctional.

It's ironic to see conversation-limiting devices built into Agile development methodologies, when one of the core principles of the Agile Manifesto is the idea that "the most efficient and effective method of conveying information to and within a development team is face-to-face conversation," but I'll get to that a little later on.

For now, I want to point out that Planning Poker isn't the only aspect of Scrum which, in my experience, seems to consistently devolve into something less useful.
These egg timers really exist. I added "shut the fuck up" to highlight the ridiculousness of waving an egg timer at somebody in order to make a serious business decision. Somehow, however, what Mr. Jeffries took away from those words was that I actually, in real life, attended a planning poker session where somebody told somebody else to shut the fuck up.

Quoting from his post:
What’s going on here, and why is planning poker being blamed for the fact that Giles seems to be surrounded by assholes?

...People do cut their fingers off with the band saw, and I guess people do get told to shut the fuck up in a planning poker session.

Should not happen. Should never happen...

No one in Scrum wants abusive behavior...

Things were not going well in planning poker and Giles’s team did not fix it. That makes me sad, very sad, but it doesn’t make me want to blame Scrum. Frankly even if Scrum did say “Sometimes in planning meetings tell people to shut the fuck up”, that would be so obviously wrong that you’d be a fool to do it...

someone with a pair could take an individual aside and tell them not to be a jerk.
I don't even get how Mr. Jeffries took that literally in the first place.

But I'll rephrase my point: building conversation-limiting rules into planning poker contravenes the basic Agile principle that face-to-face conversation is important. Additionally, these conversation-limiting rules provide an easy avenue through which the Scrum process can devolve into something less Agile – and also less productive – and I believe Scrum has several more design flaws like that.

Offensive Agreement


Since Mr. Jeffries equated asking for basic courtesy with asking for free therapy, it's probably no shock that his condescending tone persists even when he acknowledges my points. I pointed out naming errors in Scrum, and he responded:
I’m not sure I’d call those “conceptual” flaws, as Giles does, but they’re not the best possible names. Mary Poppendieck, for example, has long objected to “backlog”. Again, I get that. And I don’t care...

Giles objects to velocity. I could do a better job of objecting to it, and who knows, somewhere in here maybe I have, but the point is a good one.
This is an adult, intending his words for public consumption. Even when he sees I'm right, he dismisses my critique in favor of his own critique, even though he also acknowledges that his own critique probably doesn't exist.



Still, this is the apex of maturity in Mr. Jeffries's post, so I'll try to be grateful. We do at least agree "sprint," "backlog," and "velocity" are flawed terms. If a civil conversation ever emerges from this mess, it might stand on that shred of common ground.

But I'm not optimistic, because Mr. Jeffries also says:
For my part in [promoting the term "velocity," and its use as a metric], I apologize. Meanwhile you’ll need to deal with the topic as best you can, because it’s not going away.
Apology accepted, and I'll give him credit for owning up to a mistake. But his attitude's terrible. If velocity's not good, you should get rid of it. And I don't need to deal with it, because it absolutely is going away.

Scrum's a fad in software management, and all such fads go away sooner or later. The most embarrassing part of this fracas was that, while my older followers took it seriously, my younger followers thought the whole topic was a joke. Velocity is, in my own working life, less "going away" than "already gone for years." My last full-time job with Scrum ended in 2008. Since then, I've had to deal with velocity for a grand total of four or five months, and they weren't all contiguous.

Mr. Jeffries continues:
Giles has gone to an organization that pretty much understands Agile...

I’m not sure just what they do at his new company, Panda Strike, but I hope he’ll find some good Agile-based ideas there, and I think probably he will. More than that, I hope Giles finds a fun way to be productive...
I'd agree Panda Strike "pretty much understands Agile," but that's why we regard Scrum with skepticism. We also understand several flaws in Agile that I'd guess Mr. Jeffries does not. And it should be obvious I already had "a fun way to be productive" before Scrum got in my way. Why else would I dis Scrum in the first place?

For the record, at Panda Strike, we do use some Agile ideas, but that's a different blog post.

I'm publishing this on my personal blog, not the Panda Strike blog, because this is my own social media snafu. On the Panda Strike blog, I've gone into more detail about Agile and Scrum, our skepticism, the compromises we'll sometimes make, the flaws we've found in the Agile Manifesto, and the things we still like about it (and, to a lesser extent, Scrum). That blog post is more balanced than my personal attitude on my personal blog.

My post on the Panda blog is a response to the coherent criticism I've received. Please do read it. I hope you enjoy it. But here I'm responding to incoherent criticism from a high-profile source. Sorry - the man has sixteen thousand Twitter followers, and every one of them seems to want to bother me.



Also, a point of order: I am not "Giles" to you, Mr. Jeffries. I am "Mr. Bowkett" or "Mr. Bowkett, sir." Those are your only choices. You don't know me, sir. Mind your manners, please, because I'd like to use my own without feeling put upon. And your repeated claim of being "sad" on my behalf would be creepy if it seemed plausible, but it does not. It reads as insincere, so maybe you could just stop.

Ignored Tweets


My blog post argued that Scrum practices decay rapidly into non-Agile practices. Weirdly, most of my critics have repeatedly stated that my argument was not valid because I provided examples of Scrum practices which had decayed into less Agile versions.

Mr. Jeffries was one such critic. During my Twitter argument with him, I reminded him of this point repeatedly. Here's one of several such reminders:

Even after this exchange, he continued to hammer the idea that my argument – that Scrum's Agile practices decay rapidly – is invalid because the decayed versions of these practices are not Agile.
If people miss the point every time, I didn't make my point clear. But if they ignore clarification, they're arguing in bad faith.

Mr. Jeffries's Terrible Reading Comprehension May Not Be A Fluke


One of Mr. Jeffries's Twitter followers, a Scrum consultant named Neil Killick, wrote a blog post criticizing my post also:
A controversial blog post... suggests that Scrum is broken for many reasons, but one of those reasons cited is that the 15 minute time box for Daily Scrum (aka Standup) is too short to allow for meaningful conversations, and promotes a STFU culture.
This is false.

The phrase "shut the fuck up" occurred in a discussion of planning poker, not daily standups. This concept of "a STFU culture" is Mr. Killick's invention. The time-boxing in standups might undermine the Agile principle that "the most efficient and effective method of conveying information to and within a development team is face-to-face conversation" – it's certainly odd to find this "inhibiting conversation" theme in an "Agile" methodology – but a major theme in my post was that standups frequently go too long.

Having failed to grasp this, and possibly without ever reading my blog in the first place, Mr. Killick then explains how to solve a problem I never had:
IMO, in the situation the author describes, Scrum has exposed that perhaps the author and his colleagues feel their time together in the morning is so precious that they resent ending it artificially.

As the Scrum Master in this situation I would be trying to find out why they are not able to continue talking after the Daily Scrum? Perhaps they struggle to find the time or will to collaborate during the working day, so being forced together by the Daily Scrum gives them an opportunity that they do not want to cut short?
So, recap.

I write a blog post about how Scrum's a waste of time. I get two blog posts defending Scrum. Neither of the individuals who wrote these posts showed enough respect for my time to read the post they were responding to. Both of them say inaccurate things about my post. Both of them then offer me advice about how to solve problems I never had. And this is their counterargument to my claim that Scrum is a waste of time.


A Quick Defense Of Some Innocent Bystanders


Many Scrum defenders have said things to me like "if your company had a problem with Scrum, it was a messed-up company," or even "if your team had a problem with Scrum, they were incompetent."

On behalf of my former co-workers and former managers, fuck you.



One of the companies I mentioned was indeed very badly mismanaged, yet I worked with terrific people there. There were some flaws at the other, too, but it was much better. There were plenty of good people, nothing really awful going on, and Scrum still decayed.

(In fact, standups decayed to a similar failure state in each case, even though the circumstances, and the paths of deterioration, were very different.)

I saw these problems at two companies with a serious investment in Scrum, and I saw echoes of these problems elsewhere too. It's possible that I just fell into shit company after shit company, and the common theme was pure coincidence. But I doubt it. And if your best argument relies on coincidence, and boils down to "maybe you just suck," then you're not coming from a strong position.

Is that even an argument? Or is it just a silencing tactic?

The first time or two that I saw Scrum techniques fail, my teams were using them informally. I thought, "maybe we should be using the formal, complete version, if we want it to work." The next time I saw Scrum techniques fail, we got official Scrum training, but the company was already being mismanaged, so I thought, "maybe it doesn't matter how full or correct our implementation is, if the people at the top are messing it up." The next time after that, management was better, and the implementation was legit, but we were using a cumbersome piece of software to manage the process. So I thought, "maybe Scrum would work if we weren't using this software."

Eventually, somebody said to me, "hey, maybe Scrum just doesn't work," and it made more sense than any of these prior theories.

And Scrum's answer to this is "maybe you just suck"?

Maybe.

Maybe not.

Valid Critiques Addressed Elsewhere


As I said, I got better opposing arguments than these, and I addressed them on the Panda Strike blog. Check it out if you're curious.

by Giles Bowkett (noreply@blogger.com) at March 07, 2015 08:40 AM

March 06, 2015

Decyphering Glyph

Deploying Python Applications with Docker - A Suggestion

Deploying python applications is much trickier than it should be.

Docker can simplify this, but even with Docker, there are a lot of nuances around how you package your python application, how you build it, how you pull in your python and non-python dependencies, and how you structure your images.

I would like to share with you a strategy that I have developed for deploying Python apps that deals with a number of these issues. I don’t want to claim that this is the only way to deploy Python apps, or even a particularly right way; in the rapidly evolving containerization ecosystem, new techniques pop up every day, and everyone’s application is different. However, I humbly submit that this process is a good default.

Rather than equivocate further about its abstract goodness, here are some properties of the following container construction idiom:

  1. It reduces build times from a naive “sudo setup.py install” by using Python wheels to cache repeatably built binary artifacts.
  2. It reduces container size by separating build containers from run containers.
  3. It is independent of other tooling, and should work fine with whatever configuration management or container orchestration system you want to use.
  4. It uses existing Python tooling of pip and virtualenv, and therefore doesn’t depend heavily on Docker. A lot of the same concepts apply if you have to build or deploy the same Python code into a non-containerized environment. You can also incrementally migrate towards containerization: if your deploy environment is not containerized, you can still build and test your wheels within a container and get the advantages of containerization there, as long as your base image matches the non-containerized environment you’re deploying to. This means you can quickly upgrade your build and test environments without having to upgrade the host environment on finicky continuous integration hosts, such as Jenkins or Buildbot.

To test these instructions, I used Docker 1.5.0 (via boot2docker, but hopefully that is an irrelevant detail). I also used an Ubuntu 14.04 base image (as you can see in the docker files) but hopefully the concepts should translate to other base images as well.

In order to show how to deploy a sample application, we’ll need a sample application to deploy; to keep it simple, here’s some “hello world” sample code using Klein:

# deployme/__init__.py
from klein import run, route

@route('/')
def home(request):
    request.setHeader("content-type", "text/plain")
    return 'Hello, world!'

def main():
    run("", 8081)

And an accompanying setup.py:

from setuptools import setup, find_packages

setup (
    name             = "DeployMe",
    version          = "0.1",
    description      = "Example application to be deployed.",
    packages         = find_packages(),
    install_requires = ["twisted>=15.0.0",
                        "klein>=15.0.0",
                        "treq>=15.0.0",
                        "service_identity>=14.0.0"],
    entry_points     = {'console_scripts':
                        ['run-the-app = deployme:main']}
)

Generating certificates is a bit tedious for a simple example like this one, but in a real-life application we are likely to face the deployment issue of native dependencies, so to demonstrate how to deal with that issue, this setup.py depends on the service_identity module, which pulls in cryptography (which depends on OpenSSL) and its dependency cffi (which depends on libffi).

To get started telling Docker what to do, we’ll need a base image that we can use for both build and run images, to ensure that certain things match up; particularly the native libraries that are used to build against. This also speeds up subsequent builds, by giving a nice common point for caching.

In this base image, we’ll set up:

  1. a Python runtime (PyPy)
  2. the C libraries we need (the libffi6 and openssl ubuntu packages)
  3. a virtual environment in which to do our building and packaging
# base.docker
FROM ubuntu:trusty

RUN echo "deb http://ppa.launchpad.net/pypy/ppa/ubuntu trusty main" > \
    /etc/apt/sources.list.d/pypy-ppa.list

RUN apt-key adv --keyserver keyserver.ubuntu.com \
                --recv-keys 2862D0785AFACD8C65B23DB0251104D968854915
RUN apt-get update

RUN apt-get install -qyy \
    -o APT::Install-Recommends=false -o APT::Install-Suggests=false \
    python-virtualenv pypy libffi6 openssl

RUN virtualenv -p /usr/bin/pypy /appenv
RUN . /appenv/bin/activate; pip install pip==6.0.8

The apt options APT::Install-Recommends and APT::Install-Suggests are just there to prevent python-virtualenv from pulling in a whole C development toolchain with it; we’ll get to that stuff in the build container. In the run container, which is also based on this base container, we will just use virtualenv and pip for putting the already-built artifacts into the right place. Ubuntu expects that these are purely development tools, which is why it recommends installation of python development tools as well.

You might wonder “why bother with a virtualenv if I’m already in a container”? This is belt-and-suspenders isolation, but you can never have too much isolation.

It’s true that in many cases, perhaps even most, simply installing stuff into the system Python with Pip works fine; however, for more elaborate applications, you may end up wanting to invoke a tool provided by your base container that is implemented in Python, but which requires dependencies managed by the host. By putting things into a virtualenv regardless, we keep the things set up by the base image’s package system tidily separated from the things our application is building, which means that there should be no unforeseen interactions, regardless of how complex the application’s usage of Python might be.

Next we need to build the base image, which is accomplished easily enough with a docker command like:

$ docker build -t deployme-base -f base.docker .;

Next, we need a container for building our application and its Python dependencies. The dockerfile for that is as follows:

# build.docker
FROM deployme-base

RUN apt-get install -qy libffi-dev libssl-dev pypy-dev
RUN . /appenv/bin/activate; \
    pip install wheel

ENV WHEELHOUSE=/wheelhouse
ENV PIP_WHEEL_DIR=/wheelhouse
ENV PIP_FIND_LINKS=/wheelhouse

VOLUME /wheelhouse
VOLUME /application

ENTRYPOINT . /appenv/bin/activate; \
           cd /application; \
           pip wheel .

Breaking this down, we first have it pulling from the base image we just built. Then, we install the development libraries and headers for each of the C-level dependencies we have to work with, as well as PyPy’s development toolchain itself. Then, to get ready to build some wheels, we install the wheel package into the virtualenv we set up in the base image. Note that the wheel package is only necessary for building wheels; the functionality to install them is built in to pip.

Note that we then have two volumes: /wheelhouse, where the wheel output should go, and /application, where the application’s distribution (i.e. the directory containing setup.py) should go.

The entrypoint for this image is simply running “pip wheel” with the appropriate virtualenv activated. It runs against whatever is in the /application volume, so we could potentially build wheels for multiple different applications. In this example, I’m using pip wheel . which builds the current directory, but you may have a requirements.txt which pins all your dependencies, in which case you might want to use pip wheel -r requirements.txt instead.

At this point, we need to build the builder image, which can be accomplished with:

$ docker build -t deployme-builder -f build.docker .;

This builds a deployme-builder that we can use to build the wheels for the application. Since this is a prerequisite step for building the application container itself, you can go ahead and do that now. In order to do so, we must tell the builder to use the current directory as the application being built (the volume at /application) and to put the wheels into a wheelhouse directory (one called wheelhouse will do):

$ mkdir -p wheelhouse;
$ docker run --rm \
         -v "$(pwd)":/application \
         -v "$(pwd)"/wheelhouse:/wheelhouse \
         deployme-builder;

After running this, if you look in the wheelhouse directory, you should see a bunch of wheels built there, including one for the application being built:

$ ls wheelhouse
DeployMe-0.1-py2-none-any.whl
Twisted-15.0.0-pp27-none-linux_x86_64.whl
Werkzeug-0.10.1-py2-none-any.whl
cffi-0.9.0-py2-none-any.whl
# ...

At last, time to build the application container itself. The setup for that is very short, since most of the work has already been done for us in the production of the wheels:

# run.docker
FROM deployme-base

ADD wheelhouse /wheelhouse
RUN . /appenv/bin/activate; \
    pip install --no-index -f wheelhouse DeployMe

EXPOSE 8081

ENTRYPOINT . /appenv/bin/activate; \
           run-the-app

During build, this dockerfile pulls from our shared base image, then adds the wheelhouse we just produced as a directory at /wheelhouse. The only shell command that needs to run in order to get the wheels installed is pip install TheApplicationYouJustBuilt, with two options: --no-index to tell pip “don’t bother downloading anything from PyPI, everything you need should be right here”, and, -f wheelhouse which tells it where “here” is.

The entrypoint for this one activates the virtualenv and invokes run-the-app, the setuptools entrypoint defined above in setup.py, which should be on the $PATH once that virtualenv is activated.

The application build is very simple, just

$ docker build -t deployme-run -f run.docker .;

to build the docker file.

Similarly, running the application is just like any other docker container:

$ docker run --rm -it -p 8081:8081 deployme-run

You can then hit port 8081 on your docker host to load the application.

The command-line for docker run here is just an example; I’m passing --rm so that if you run this example, it won’t clutter up your container list. Your environment will have its own way to call docker run, its own way to get your VOLUMEs and EXPOSEd ports mapped, and discussing how to orchestrate your containers is out of scope for this post; you can pretty much run it however you like. Everything the image needs is built in at this point.

To review:

  1. have a common base container that contains all your non-Python (C libraries and utilities) dependencies. Avoid installing development tools here.
  2. use a virtualenv even though you’re in a container to avoid any surprises from the host Python environment
  3. have a “build” container that just makes the virtualenv and puts wheel and pip into it, and runs pip wheel
  4. run the build container with your application code in a volume as input and a wheelhouse volume as output
  5. create an application container by starting from the same base image and, once again not installing any dev tools, pip install all the wheels that you just built, turning off access to PyPI for that installation so it goes quickly and deterministically based on the wheels you’ve built.

While this sample application uses Twisted, it’s quite possible to apply this same process to just about any Python application you want to run inside Docker.

I’ve put a sample project up on Github which contains all the files referenced here, as well as “build” and “run” shell scripts that combine the necessary docker commandlines to go through the full process to build and run this sample app. While it defaults to the PyPy runtime (as most networked Python apps generally should these days, since performance is so much better than CPython), if you have an application with a hard CPython dependency, I’ve also made a branch and pull request on that project for CPython, and you can look at the relatively minor patch required to get it working for CPython as well.

Now that you have a container with an application in it that you might want to deploy, my previous write-up on a quick way to securely push stuff to a production service might be of interest.

(Once again, thanks to my employer, Rackspace, for sponsoring the time for me to write this post. Thanks also to Shawn Ashlee and Jesse Spears for helping me refine these ideas and listening to me rant about them. However, that expression of gratitude should not be taken as any kind of endorsement from any of these parties as to my technical suggestions or opinions here, as they are entirely my own.)

by Glyph at March 06, 2015 10:58 PM

Mark Bernstein

Progress

A masterful review of the history of the Gamergate Affair at Wikipedia, by Lauren C. Williams at Think Progress: The ‘Five Horsemen’ Of Wikipedia Paid The Price For Getting Between Trolls And Their Victims.

March 06, 2015 02:11 PM

Fog Creek

Knowing When to Stop – Tech Talk

 

There comes a point in every instance of creation when the creator steps back and says, “Done.” But how do you know when a thing is complete? And what happens when you continue past that point?

In this short Tech Talk, Matt, a System Administrator here at Fog Creek, using examples from Computer Science, Finance, and Art, explores different perspectives on this question. It acts as a cautionary tale for anyone involved in software development about the dangers of feature creep and not knowing what done looks like.

 

About Fog Creek Tech Talks

At Fog Creek, we have weekly Tech Talks from our own staff and invited guests. These are short, informal presentations on something of interest to those involved in software development. We try to share these with you whenever we can.

 

Content and Timings

  • Computing (0:19)
  • Gambling and Finance (0:48)
  • Art (1:30)
  • Software (3:22)
  • Examples of Feature Creep (4:50)

 

Transcript

We often lose track of what done looks like, and of when we reach it and can say, “Enough. Working on this more is not going to help anything.” When you start thinking about this… well, we’ll start with computing.

Computing

Computing has a well-defined halting problem. In computability theory, it’s the problem of determining, from a description of an arbitrary computer program and an input, whether the program will finish running or continue to run forever. This is basic, way-down Comp Sci stuff: where does this input, fed to this algorithm, actually lead? Does it meet a condition, or does it loop forever? When you bring in the human aspect, you wind up with a human version of the mathematical halting problem, or the gambler’s dilemma.
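
The standard construction behind that claim (not something from the talk itself) fits in a few lines of Python: assume a working halts() oracle, then write a program that does the opposite of whatever the oracle predicts.

def halts(program, arg):
    """Hypothetical oracle: True iff program(arg) eventually stops.
    No such general-purpose function can exist."""
    raise NotImplementedError("provably impossible in general")

def paradox():
    # If the oracle says paradox() halts, loop forever;
    # if it says paradox() loops, halt immediately. Contradiction either way.
    if halts(paradox, None):
        while True:
            pass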

Gambling and Finance

The gambler is trying to maximize their profit. They’re a min-maxer; they want to invest the least possible amount and gain the most possible amount. I’ll include finance in here, which is a kind of sophisticated gambling with a little bit more information. The notion is that you have mathematical tools that you can choose to apply to this problem to figure out what the optimal solution is. There are tools for this. You can actually sit down with a spreadsheet, or some piece of software, or a pencil and paper, and figure out the optimal solution to this problem. For the creative, you stop when the expression of the idea is complete. And when the hell is that?
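
For instance (my example, not the speaker’s), the gambler’s arithmetic for a single-number roulette bet takes one line of Python:

# A $1 bet on one number in American roulette: 38 slots, pays 35 to 1.
p_win = 1 / 38
expected_value = p_win * 35 + (1 - p_win) * (-1)
print("EV per $1 bet: $%.4f" % expected_value)  # about -$0.0526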

Art

Well, there are very few mathematical models you can apply to figure out when your watercolor of a beautiful waterfall is finished. There are some boundaries, which are established by the medium. If you’re doing sculpture and you’re chipping away, eventually there’s not going to be any stone to chip away anymore and you are going to have to stop, because there’s nothing left, literally. If you are a painter, or a sculptor in clay, you can continue to add clay, but eventually gravity is going to take over and say, “All right, you can’t really put that newfangled nose on this statue. It just doesn’t have the support.” There are realities that do apply themselves to certain media. The Statue of Liberty can’t necessarily hold her arms straight out, or have forearms, because the structural and architectural realities of that beautiful thing out in the harbor just don’t support that. Like Michelangelo up there: “I saw the angel in the marble and carved until I set him free. Every block of stone has a statue inside it and it is the task of the sculptor to discover it.” Michelangelo has this concept of the finished artifact in his head; he’s trying to liberate it from the medium. Ultimately, he knows what he’s looking for. Maybe he doesn’t know specifics, but he’s going to keep trying to pull it out until you wind up with something like this. There’s a more contemporary quote: “Knowing just when to declare a work of art finished is an eternal struggle for many artists. The issue is that if you don’t work on a piece enough, the work can come across as incomplete. On the other hand, overworking a piece can cause the work to appear tired and tedious. The most compelling works of art throughout history are able to establish a strong balance of gesture and spontaneity while simultaneously appearing to be substantial and fully resolved.” Much more fuzzy, no math going into this. I’m done when I think it’s done.

Software

Then we get to software. We’ve kind of come full circle. We started with the computing halting problem, we came all the way through art, which is one of the most open, creative processes, and now we’re at software, which is even more open than art. Yes, the machine implies certain things for you based on the way it acts and the way you can use it, but ultimately you can create literally fantastic worlds inside a machine. When you don’t know when to stop with software, you start suffering what’s called feature creep. Jamie Zawinski was a programmer for Netscape, and he came up with Zawinski’s Law: “Every program attempts to expand until it can read e-mail. Those programs which cannot so expand are replaced by ones which can.” There’s this notion that applications are pushed well beyond their requirements and intentions. You have a program, you’re trying to solve a problem, and then you fall in love with the thing in the process of making it. Then you start thinking, “Well, what else can I add? What else can I do? Where else can I go? Oh, this function isn’t very good; it doesn’t have that je ne sais quoi I was looking for when I was writing it. I’m going to go back and rewrite it. Oh, there’s a new library; I could re-implement this in another language. I could do this, that, and the other.” You can just fall into this hole, get stuck there, and never know when done is done, because you’ve lost sight of what you were originally intending and what the finished state looked like, if you ever knew what it was in the beginning.

Examples of Feature Creep

This is what feature creep looks like in one image. This is Microsoft Word 2010 with every toolbar enabled; that is all you get to type in. Yes, some people might use these things. Yes, that is an interface to get people to be able to use those things, but, you have long gone past the notion of laying out documents. Here’s another example, and it’s kind of a case study, I think, in not knowing when to stop because you don’t know what it is that you’re trying to come up with.

This was Google Wave. They had a team in Australia, isolated from the rest of the Google environment, and they spent two years working on this thing in isolation, effectively. That’s what it looked like. The paradigm they said they wanted to change was e-mail; they wanted to re-implement e-mail for the 21st century. Nobody knew what Wave was or where it fit in, and they got so attached to this thing that they kept adding more and more crap to it, until it ceased to be e-mail for the 21st century. It turned into this communications hub. They ate so much of their own dog food that it was poisoning them. They spent their entire life in Wave. All of their internal communications, all of their internal documentation, all the rest of their stuff was in Wave, and they just expected everybody to do that. This was the central focus of their working life. Whenever they encountered a thing like, “Oh, I wish I could send a tweet from within Wave,” or “I wish I could read this RSS feed from within Wave, because I’m always in Wave and I want to be able to do these things,” they kept making it more and more complex. Two years later, when they finally cracked open the box and joined the rest of the world, they had this monstrosity that the only way to use successfully was to basically have it take over your life and then do all of these things within the context of Wave, which is not what people want.

The question is, ultimately, when do we stop? The answer is when it’s done, which is kind of a cop-out, because if you don’t figure out what done looks like when you start, you’ll never figure it out along the way. We stop when it’s done. Figuring out what “done” is, is the problem.

by Gareth Wilson at March 06, 2015 11:43 AM

Blue Sky on Mars

Fired

So I got fired from GitHub two weeks ago.

This is the tweet I wrote afterwards while walking to lunch:

I left it ambiguous because I wasn't sure what to think about it all at that point. I mean, something I'd been doing for half a decade was suddenly gone. If people knew what really happened, would they view me differently? Would they — gulp — view me as a failure? Please, please suggest that I eat babies or microwave kittens or something else far more benign, but, for the love of god, don't suggest I'm a failure.

GitHub's very first office

Do you know what happens after you leave a job you've been at for five years? You get drunk a lot, because everyone you know hits you up and says hey man, let's def get a drink this week and you can't really say no because you love them and let's face it, you suddenly have a lot of free time on your hands now and they're obviously the ones buying.

Do you know what happens after you tell them the whole story behind your termination? Virtually everyone I've had drinks with tells a similar story about the time they got fired. Seemingly everyone’s got stories of being stuck under shit managers, or dealing with the fallout from things outside their control. Maybe this is selection bias and all of my friends are horrific at their jobs, but an awful lot of them are people I'd consider best-in-industry at what they do, so I think the more likely answer is: It Was Probably Complicated™.

I've been pretty fascinated by this lately. Getting fired is such a taboo concept: nobody will want to hire me if they know I got fired from my last gig. And yet a lot of people certainly do get fired, for a lot of different reasons. Even if you spend years doing something well, when it comes to work, people tend to focus predominantly on your last month rather than the first 59.

This isn't the case with love, to choose a cliché metaphor. If you break up with somebody, it just wasn't meant to be is the catch-phrase used rather than yo you must have been a shit partner to not have forced that relationship to work out.

House of Shields

Working is such a spectrum. The company can change a lot while you're there, and you can change a lot while you're there. Sometimes the two overlap and it's great for everyone, and sometimes they don't. But I'm not sure that's necessarily an indictment of either the employer or the employee. It just is.

Unless you're embezzling money or using the interns as drug mules or something. Then yeah, that's probably an indictment waiting to happen, and your termination really does reflect something deeply flawed in your character.

Do you know what happens when you're reasonably good at your job and you suddenly leave the company? Everybody asks what's next? What do you have lined up next? Excited to see what's next for you! That's nice and all, but shit, I barely even know what I'm eating for lunch today. Or how the fuck COBRA works (to the best of my current knowledge it has nothing to do with G.I. Joe, either the action figures or the TV show, much to my dismay).

Part of the problem with not admitting publicly that you were fired is that people inevitably assume you're moving onto something bigger and better, which ironically is kind of a twist of the knife because you don't have something bigger and better quite yet. (And no, I'm not working at Atlassian, so stop tweeting me your congratulations, although I’m still cracking up about that one.)

About half of the early crew

But hey, I mostly just feel lucky. I’m lucky I got to work at the company of my dreams for five years. I got to meet and work with the people in the industry I admired the most, and, even more than that, I somehow am lucky enough to call them “friends” now. I got hired by dumb luck — I still consider it an objective mistake for them to have hired me, back then — and I was able to grow into a myriad of roles at the company and hopefully help out more than a few people across our industry. I left on great terms with leadership and the people I worked with. I feel like I can be pretty proud about all of that.

So basically what I’m saying is that I’m really excited to hear what I do next.

March 06, 2015 12:00 AM

March 05, 2015

Fog Creek

How Low Should You Go? Level of Detail in Test Cases

It can be difficult to know just how much detail you should include in your test documentation and Test Cases, in particular.

Each case has a different set of needs and requirements. Who is going to use it, how often, and for what purpose all raise considerations.

If it’s written at too high a level, then you leave it open to too much interpretation and you risk the wrong things being tested. Too low, and you’re wasting your own time: the case becomes harder to maintain, and there’s an opportunity cost to the other projects with demands on your time.

In this post, we break down some of the factors you should consider to help you find the right level.

Understand the Wider Context

Each of your project’s stakeholders will have concerns that impact the amount of detail you need, from your organization’s internal politics and appetite for risk to the extent to which the product is relied upon. This provides the wider context for your test cases and starts to inform your thinking. The documentation expectations at a lean startup will differ greatly from those at a financial institution, for example.

Test Requirements and Resources

You need to provide at least enough information to describe the intent of the test case, making clear all the elements that need to be tested. So give special consideration to any specific input values or particular sequences of actions.
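
As a made-up illustration, the same check can be written at two very different levels:

  Too high: “Verify that login handles bad credentials.”

  More specific: “Go to the login page; enter the valid username
  test@example.com with the password ‘wrong’; submit. Expect the
  ‘Invalid username or password’ message, and expect no session to
  be created.”

The first leaves inputs and expected results to the tester’s judgment; the second pins them down, at the cost of re-writing the case whenever the page changes.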

The amount of time you have to create the test case, and the human or IT resources you have to run the tests, are other obvious key factors.


Know Your Audience

Consider the audience for the case too. How technical are they? How much product knowledge do they have, and how experienced at testing are they? More experienced testers who are familiar with the product will need less detail. But is the team likely to change in the foreseeable future? If so, then you might want to head off later re-writes by providing extra detail now for those with less experience.

Some organizations have specific requirements to provide evidence of test coverage, usually for compliance, to show adherence to a standard or certification, or for other legal reasons.

Test and Product Considerations

Each test is different, from how important it is to how long it will be in use. If it’s likely to be converted to an automated test script in the future, then including more detail now might make that easier to do. There are similar considerations about the product you’re testing. Will the application be used in the long term? And whereabouts is it in its lifecycle? The amount of change you can expect in a recently built, agile application is far greater than in some old system you’re maintaining. Unless it’s a wild, testless code beast, that is.

There’s a Balance to be Found

These factors don’t all point toward including more detail, though important and long-lasting tests can justify the time. There’s a balance to be sought. If you create highly specific tests, then even minor design changes or functionality alterations may mean you have to re-write cases. Highly specific tests also lead testers to raise bugs for what turn out to be problems with your test documentation, rather than problems impacting customers. And they can have a knock-on effect: they encourage the tester to only consider the specific paths through the application detailed in the case, meaning they might not consider the functionality more generally.

There’s no silver bullet here. Each organization’s requirements differ, and those requirements change depending on the project, the product, and the individual test too. But by thinking through the above factors, you can find a level that works for you and your team.

by Gareth Wilson at March 05, 2015 04:50 PM

Dave Winer

JavaScript in-browser almost complete

As you may know, I've become a JavaScript-in-the-browser developer. My liveblog is an example of that. It's an app, just like the stuff we used to develop for Mac and Windows, but it runs in the browser.

The browser is a complete app environment except for one crucial piece: storage. It has a simple facility called localStorage, which comes close to fitting the bill, but ultimately doesn't do what people want: your data is stuck in the browser it was saved in.
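
To see the gap concretely, here's a minimal sketch of localStorage in action. It persists across reloads, but only in the one browser profile where it was written:

  // Saved text survives a page reload in this browser...
  localStorage.setItem("draft", "Hello, world");
  console.log(localStorage.getItem("draft")); // "Hello, world"
  // ...but open the same page on another machine and "draft" is null.
  // Cross-machine persistence is the part a server has to provide.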

I have solved the problem in a generic and open source way, on a very popular server platform: Node.js. However, it's not widely known that this problem has been solved.
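
The shape of such a server-side store is easy to sketch in Node. What follows is a deliberately tiny, hypothetical version using Express, with an in-memory object standing in for real storage; it is not the actual software being described here, just the idea:

  var express = require("express");
  var bodyParser = require("body-parser");
  var app = express();
  var store = {}; // in-memory stand-in; a real server would persist to disk

  app.use(bodyParser.json());

  // Save text for a user, from any browser...
  app.post("/save/:user", function (req, res) {
    store[req.params.user] = req.body.text;
    res.sendStatus(200);
  });

  // ...and read it back from any other machine.
  app.get("/load/:user", function (req, res) {
    res.json({ text: store[req.params.user] || "" });
  });

  app.listen(8080);

A real version would add authentication, which is what the sign-in step in the demo provides.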

Try this little app, as a demo: http://macwrite.org/.

You can sign in, write some text, save it, sign out.

And then sign in from a different machine, and voila, the text you entered is there.

From that little bit of functionality you can build anything.

I have a new app in development, very simple, and brain-dead obvious, and useful, that builds on this. Hopefully at that point the lights will start to come on, oh shit, we're ready to build the next layer of the Internet. It really is that big a deal. And you don't need VC backing to participate. One developer, one person, can build something useful in a week. I've just done that myself. The service will virtually run itself at almost no cost, for a lot of users. That's an interesting place to be.

March 05, 2015 01:54 PM

March 04, 2015

Tim Ferriss



This post delves into the good, the bad, and the ugly of all things CrossFit.  It answers many important questions, including:

– What are the 3 most dangerous exercises in CrossFit gyms?
– What are the most common nutritional mistakes of CrossFit athletes?
– What do elite CrossFit athletes do differently than the rest? Example: How do Rich Froning and Jason Khalipa warm up?
– Is the CrossFit Games really CrossFit?
– Is CrossFit a fad?
– What is the future of CrossFit?

The man to answer all this (and much more) is Kelly Starrett.  He’s trained CrossFit athletes for more than 130,000 hours (!) and 10 years at San Francisco CrossFit, which opened in 2005 as one of the first 50 CrossFit Affiliates in the world. There are now more than 10,000 Affiliates worldwide.

Kelly’s clients include Olympic gold medalists, Tour de France cyclists, world record holders in Olympic lifting and powerlifting, CrossFit Games medalists, professional ballet dancers, and elite military personnel.

Even if you have zero interest in CrossFit, this conversation invites you inside the mind of one of the world’s top coaches.  Kelly discusses habits, strategies, and thinking that can be applied to nearly everything.

As a bonus, I’ve also included our first conversation below, which includes disgusting amounts of alcohol, my personal doctor, and our tactics for becoming the guy from Limitless.


Plus, the booze-enhanced episode on all things performance enhancement (stream below or right-click here to download):

This podcast is brought to you by Mizzen + Main. Mizzen + Main makes the only “dress” shirts I now travel with — fancy enough for important dinners but made from athletic, sweat-wicking material. No more ironing, no more steaming, no more hassle. Click here for the exact shirts I wear most often. Order one of their dress shirts this week and get a Henley shirt (around $60 retail) for free.  Just add the two you like here to the cart, then use code “TIM” at checkout.

This episode is also brought to you by 99Designs, the world’s largest marketplace of graphic designers. Did you know I used 99Designs to rapid prototype the cover for The 4-Hour Body? Here are some of the impressive results.

QUESTION(S) OF THE DAY: If you had to pick one sport or weightlifting movement for the rest of your life, what would it be and why? Please share and explore answers in the comments here.

Do you enjoy this podcast? If so, could you please leave a short review here? I read them, and they keep me going. Thanks!

by Tim Ferriss at March 04, 2015 11:56 PM

Mark Bernstein

Syllabus

The first appearance of “Infamous” in a college syllabus: ITP Core 2 at CUNY, taught by Lisa Brundage and Michael Mandiberg. Interesting readings, and a prodigious set of graduate students!

March 04, 2015 05:08 PM

Whispers and Cries

Marcus Zarra laments the Dangers Of Misinformation, specifically when software developers share specific experiences that are later generalized so broadly that the overall impression is false.

For example, years ago Brent Simmons pointed out problems that were leading him to leave Core Data behind. Looking back, he says

The response to that post continues to amaze me. I come right out and say, multiple times, that you should use Core Data. And yet it’s used as a thing about how Core Data sucks.

Caves and Enclaves

Lots of software developers work in caves, using the tools and techniques they already know and adding whatever they’re required by circumstance to acquire. When they are at work they’re not at home, and when they’re at home they’re not reading journals or immersing themselves in books about software. So, when it comes to new technology, they rely on occasional hints they see or hear.

Other software developers work in enclaves – companies and clusters of companies that share a technical base and a technical attitude. Again, the common wisdom in an enclave gets formed by the bellwethers, and often that process of wisdom-formation is erratic.

Our problem is simply that lots of new ideas are actually bad ideas; things that ComputerWorld and TechCrunch tell you are the Big New Thing are sometimes yesterday’s thing and often nothing at all. Sometimes, a new system rolled out at WWDC will make your life better if you adopt it right away; sometimes, it’s going to make everyone miserable unless you wait a year or two for the dust to settle. (Years ago, Apple had a lovely technology called OpenDoc to which we made a big commitment. It was The Future. Then, one day, it was Cancelled. No more. Nice product you had there…)

Right now, Joel Spolsky’s Stack Exchange plays a crucial role in linking up caves and enclaves. It’s a technical forum, and it’s often astonishingly good: you search on the ridiculous error message that makes no sense and that you’ve certainly never seen before, and voila, there’s someone else who reported exactly the same message last week, along with an explanation of who sent it and why. But lots of people on Stack Exchange don’t know what they’re talking about, and lots of them don’t have a very solid grasp of English, so there’s also a fair amount of noise.

The Way Out

One good way out of this bind is simply to have better contacts.

Planning Tinderbox Six, I was guided by a number of warnings that Objective-C++ was slow, poorly supported, doomed, or otherwise a Bad Idea. The problem was, Objective-C simply doesn't support a number of idioms on which Tinderbox relies. So I started to ask around: could we use a little Objective-C++? Could we use it briefly as a transitional mechanism?

I asked lots of people who have solid Mac products and lots of experience, and the answer came back: “people say it’s a bad idea, but it’s not.”

“Are you sure?” I asked them. “Everyone says…”

“Everyone is an idiot. We’ve done everything that way for a couple of years.”

The Better Way Out

Make mistakes. Accept that code will be thrown away. Wrong turns aren’t a waste: they tell you where you didn’t want to go, and give you an idea of where you might head another time.

Find a way to be more at home with your work, and to work when you’re at home because it’s natural to do what you do. You can live with alienation, but you don’t want to.

Don’t trust the common wisdom of your technical enclave too far. Stand up, speak out, judge for yourself, and be ready to change your mind.

March 04, 2015 03:38 PM

Fog Creek

Lightweight Software Architecture – Interview with Simon Brown



 

We’ve interviewed Simon Brown, a Software Architecture consultant. We discuss his lightweight approach to software architecture, expounded in his book ‘Software Architecture for Developers’. We cover why software architects sometimes get a bad rep, the importance of software architecture in software projects, how even Agile teams can adopt his techniques, and how much is ‘just enough’ upfront design.

He writes about topics like this on his blog, and there are more resources on his website.

 

Content and Timings

  • Introduction (0:00)
  • About Simon (0:24)
  • A Bad Rep (1:45)
  • Importance of Software Architecture (2:38)
  • Agile Development and Software Architecture (3:29)
  • Just Enough Upfront Design (5:34)
  • Common Mistakes of New Software Architects (6:40)
  • Recommended Resources (7:20)

 

Transcript

Introduction

Derrick:
Today we have Simon Brown. Simon is a consultant specializing in software architecture, based in Jersey in the Channel Islands. He speaks at conferences around the world, writes about software architecture, and is the author of Software Architecture for Developers. Simon, thank you so much for joining us today, we really appreciate it. Why don’t you say a little bit about yourself?

About Simon

Simon:
Thank you very much for having me. As you said, I’m an independent consultant, and my background is actually in finance: I spent about twelve years working in London, mostly building software for the finance industry. I moved back here to Jersey about 6 years ago, mostly because I have family here and it’s a fantastic place to live.

The past couple of years I’ve been doing lots of jetting around the world and really teaching people about software architecture and how to adopt a modern, lightweight pragmatic approach to software architecture.

Derrick:
Well I wanted to touch base on the book, Software Architecture for Developers. What made you want to write the book?

Simon:
I’ve written some Java books in the past, and I’ve gone through the developer-to-architect transition myself. When I did, I found it quite hard to understand what I should be doing as an architect. There are lots and lots of books out there for developers, and ironically there’s lots of content and material for architects, but there’s a huge jump between them. You’ve got all the code stuff down here, and all the really high-level enterprise architecture stuff up there.

When developers try to move into an architecture role, they get thrown into all this TOGAF and Zachman in the enterprise architecture world. So I wanted something to bridge that gap and basically make software architecture much more accessible to software developers. That’s it in a nutshell.

A Bad Rep

Derrick:
You mention in the book that some regard software architects as “box-drawing hand wavers”, why do you think software architecture has such a bad rep with some people?

Simon:
If you look back ten years, maybe further, we had a very prescriptive way of working, in things like Waterfall, where you’d hire an architect. The architect would come in, do all the requirements gathering, put together the architecture, and then hand this big hefty set of documents down to the development team. Often those people never got involved with the coding; they never got feedback on their designs, and that’s where a lot of the bad reputation has come from. So what I’m trying to do is take a much more hands-on, pragmatic, lightweight approach, where architects are part of the team and are writing code as well. We’re trying to move away from the very fluffy, conceptual, hand-waving style of architecture.

Importance of Software Architecture

Derrick:
You say that all software projects need software architecture, why do you think it’s so important?

Simon:
It’s important because software architecture, for me, really solves two categories of problems. Category number one is about building something that has a nice structure. Architecture is about structure, and if you don’t think about structure, you basically don’t get structure. This is why we get lots of systems that end up as that horrible big ball of mud we’ve all heard of, where everything is interconnected and entangled. The other category is making sure systems work, specifically from a non-functional perspective: making sure that we factor in the important quality attributes. You know, performance if it’s important, scalability, security, and so on and so forth. And if you look at architecture like that, then software of all sizes needs structure and it needs to work. That’s why architecture is pretty useful to all software projects.

Agile Development and Software Architecture

Derrick:
Software architecture and Agile are often seen as being mutually exclusive, but you think otherwise. How can the two better co-exist?

Simon:
It comes back to these conflicts that we seem to have had in the past. Again, if you look back ten, twenty years, it was very Waterfall, very prescriptive, process-driven: do lots of design up front. Then Agile came along, ten, twelve years ago, and it kind of reversed the trend. A lot of people’s interpretation of Agile became “don’t do any up front design.” So we’ve got two extremes here, and there’s a sweet spot in the middle. It’s about doing just enough up front design to answer the major questions.

Derrick:
But for those green start-ups, they barely have a functional spec around, how can they start to incorporate some of your thinking in to their working practices?

Simon:
So this is the million dollar question, isn’t it? How do we do some thinking up front, without getting taken down the path of doing it forever? It’s worth saying that a good architecture actually enables agility. If you’re building a lean start-up, and you want to pivot and change quickly, then you need to make sure that’s one of the quality attributes you consider. Adaptability is a thing you need to bake into your architecture.

In terms of how lean start-ups adopt lightweight architecture practices, for me it’s very simply about doing just enough up front design. I have a model I talk about in the book called the C4 Model. Basically it’s about going down to component design, nice high-level, chunky components, and then layering on risks and things like that.
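
For reference, the C4 model he mentions works top-down through four levels of diagram; this summary is paraphrased from Simon’s public materials:

  1. System Context: the system in its environment, with its users and neighboring systems
  2. Containers: the deployable, runnable pieces, such as a web application, an API, or a database
  3. Components: the coarse-grained building blocks inside each container
  4. Classes (code): the fine detail, which most teams leave to the IDE

“Just enough” up front design, in his sense, usually stops around level 3, the “nice high-level, chunky components” he describes.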

You could argue that with a lean start-up you don’t have any requirements, and therefore, how can you possibly come up with a list of candidate components if you’ve got no requirements driving that process. In that case, I think you need to adopt principle-driven design. In other words, identify the list of principles that you and your team want to follow around layering strategies and technology choices, and all of those kind of things, and write them down, and then make sure that your architecture follows those principles.

Just Enough Upfront Design

Derrick:
You mentioned “just enough” up front design. How much is enough?

Simon:
It’s about this much.

It’s one of those things: when you say it to some people they think it’s a complete breath of fresh air, and when you say it to a different set of people they think it’s a complete and utter cop out. For me, again, it’s in the middle. It’s not doing nothing, and it’s not doing everything up front. So this begs the question: can you give some concrete advice about what “just enough” means? For me it’s very simple. It’s about doing enough to understand the structure of what you’re building, and to be able to create a vision that the rest of the team can understand and work from. That’s why, if you look at my book and all of my stuff online, you’ll see that a lot of the things I talk about relate to diagrams. I’m trying to bring back lightweight architecture diagrams as a way to make people think about structure, to a certain level of abstraction, and also to use those diagrams to present and communicate that shared vision. So that’s two-thirds of just enough.

The third piece is risks: understanding, identifying and mitigating your highest priority risks. In other words, the things that are potentially going to cause your system, your project, to fail.

Common Mistakes of New Software Architects

Derrick:
What are some common mistakes you see those new to software architecture make?

Simon:
It’s a really complex role, and there aren’t many people out there teaching how to do it. Mistakes range from people doing architecture and design on their own and never asking for feedback, to people assuming that they must have all the answers because they are The Architect and must know everything, which is impossible. It’s not collaborating enough. It’s not being clear in your communication. It’s not applying the right level of leadership. It’s not policing your architecture principles. It’s all of that stuff.

Recommended Resources

Derrick:
Can you recommend some resources for developers, or those new to the role, who want to learn more about successful software architecture?

Simon:
I’d recommend a couple of other books about architecture too. There’s a book by friends of mine, Eoin Woods and Nick Rozanski, called Software Systems Architecture. That takes you through lots of things, including a more traditional viewpoint catalog: they use viewpoints and perspectives as ways to think about and describe architecture.

There’s also a great book by another friend of mine, George Fairbanks. It’s called Just Enough Software Architecture. You can probably understand why I’m recommending that. And again, that’s a great book, George is a really super smart guy. Lots and lots of stuff in there about architecture. And again, he’s trying to bridge the ivory tower, very academic view of architecture with something a bit more grounded.

Derrick:
Thank you so much for joining us today.

Simon:
No problem, you’re welcome. Thank you.

by Gareth Wilson at March 04, 2015 10:59 AM

March 02, 2015

Reinventing Business

Abusing Unlimited Vacation

Unlimited vacation time has become quite the rage among companies. What's happened? Have industrial-age companies suddenly seen the light and realized the benefits of treating employees really well? As you might guess, the answer is no.

It turns out that social pressures around job security outweigh the desire and need to recover from work. Across the board, when companies adopt an unlimited vacation policy, employees take LESS time off. Human resources gets a win while looking like heroes for such a liberal policy!

But it gets better. If vacation time isn't fixed, it basically doesn't exist. It's not an accounting liability on the books. Best of all, if you quit, the company doesn't have to pay you for accrued vacation! Now companies are adopting unlimited sick time, presumably to add to the benefits of unlimited vacation.

Here's a great story about the failure of a well-intentioned unlimited vacation policy (along with numerous other problems with vacation systems, and links to other articles -- especially see Jacob Kaplan-Moss' analysis), and how they fixed it: by adding in a minimum required amount of yearly vacation (I can just feel the MBAs and HR folk squirming when they read that).

by noreply@blogger.com (Bruce Eckel) at March 02, 2015 06:51 PM

Greg Linden

More quick links

Some of the best of what I've been thinking about lately:
  • Great TED talk titled "The mathematics of love", but probably should be titled "A data analysis of love" ([1])

  • Manned submarines are about to become obsolete and be replaced by underwater drones ([1] [2] [3])

  • "No other algorithm scaled up like these nets ... It was a just a question of the amount of data and the amount of computations." ([1] [2])

  • What Google has done is a little like taking a person who's never heard a sound before, not to mention ever hearing language before, and trying to have them learn how to transcribe English speech ([1] [2])

  • Teaching a computer to achieve expert level play of old video games by mimicking some of the purpose of sleep ([1] [2])

  • "Computers are actually better at object recognition than humans now" ([1] [2] [3] [4])

  • The goal of Google Glass was a "remembrance agent" that acts as a second memory and gives helpful information recommendations in real time ([1] [2] [3])

  • A new trend, large VC investments in artificial intelligence ([1])

  • "Possibly the largest bank theft the world has seen" done using malware ([1])

  • "Users will prioritise immediate gain, and tend to dismiss consequences with no immediate visible effect" ([1] [2])

  • "Crowds can't be trusted". It's "really a game of spamfighting". ([1] [2])

  • SMBC comic: "All we have to do is build a trustworthiness rating system for all humans" ([1])

  • Dilbert describes most business books: "He has no idea why he succeeded" ([1])

  • Architect Clippy: "I see you have a poorly structured monolith. Would you like me to convert it into a poorly structured set of microservices?" ([1])

  • Man kicks robot dog. Watching the video, doesn't it make you feel like the man is being cruel? The motion of the robot struggling to regain its balance is so lifelike that it triggers an emotional response. ([1] [2] [3])

  • SMBC comic: "Are we ever going to use math in real life?" ([1])

by Greg Linden (noreply@blogger.com) at March 02, 2015 04:38 PM

Dave Winer

Every reporter should be able to start a blog

Please read Ken Silverstein's piece, his story of First Look Media.

Watching them stay silent for so long, I suspected they lacked basic publishing ability. It made no sense to me. You can set up a blog on wordpress.com or Tumblr, with a custom domain, in at most a couple of hours. Anyone with basic tech knowledge could do this.

With all the talk about learning to code, and the digital native generation, it's kind of appalling that they can't do something as basic as create their own blog, to navigate around any blockage from their management.

Silverstein says, as others have, that there was no prohibition on publishing, they just didn't have a way to do it. To me, that's like saying in 1992 that you couldn't print a document on a laser printer because your boss wouldn't come and choose the New command from the File menu.

There's a basic failure of technological literacy here.

Or so it seems to this outside observer.

The tech could be easier

We're caught in the same trap tech was caught in when I started programming in the mid-70s. There was a priesthood that had no incentive to make things easier, and a built-in belief that things couldn't be easier. My generation had a different vision, we worked on ease-of-use.

WordPress, which is the choice most professional organizations make these days for publishing, never was that easy to begin with. They missed some obvious ideas that were available to be stolen from the previous generation of blogging software. And over the years, a priesthood has developed, and the software has become even more intimidating to the newbie non-technical user.

It's time to loop back the other way. Yes, some reporters should already be able to climb over the hurdles. They just aren't that high, and the current generation of journalists have had computers in their lives, all their lives.

But ease of use, and ease of getting started is something the tech industry should be working on. Yes, it might put you out of a job, but if you don't do it, someone else will. And further, you're supposed to do that -- in the name of progress, and in this case, since it's about publishing, freedom.

March 02, 2015 03:28 PM

Fog Creek

dev.life – Interview with Salvatore Sanfilippo


In dev.life, we chat with developers about their passion for programming: how they got into it, what they like to work on and how.

Today’s guest is Salvatore Sanfilippo, an Open-Source Software Developer at Pivotal, better known as the creator of Redis, the popular data structure server. He writes regularly about development on his blog.

salvatore
Location: Catania, Sicily, Italy
Current Role: Developer at Pivotal

How did you get into software development?

My father was a computer enthusiast when I was a small child, so I started writing simple BASIC programs in imitation of him. My school had programming courses, and it was great to experiment a bit with LOGO, but I definitely learned at home. The first computer I used to write small programs was a TI 99/4A; however, I only wrote less-trivial programs once I got a ZX Spectrum later. I got a bit better when I had my first MS-DOS computer, an Olivetti PC1, equipped with an 8086 clone called the NEC V30.

I remember that in my little town there were free courses about programming in the public library. One day I went there to follow a lesson and learned about binary search. It was a great moment for me – the first non-trivial algorithm I learned.
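
For readers who haven’t met it, binary search is small enough to sketch in a few lines; the JavaScript here is our choice of language, not Salvatore’s:

  // Binary search: repeatedly halve the search interval of a sorted
  // array until the target is found, taking O(log n) steps rather
  // than the O(n) of a straight scan.
  function binarySearch(sorted, target) {
    var lo = 0, hi = sorted.length - 1;
    while (lo <= hi) {
      var mid = Math.floor((lo + hi) / 2);
      if (sorted[mid] === target) return mid;
      if (sorted[mid] < target) lo = mid + 1;
      else hi = mid - 1;
    }
    return -1; // not present
  }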


Tell us a little about your current role

Pivotal is very kindly sponsoring me to write open-source software, specifically to work on Redis. So my typical day involves getting up at 7:15 am, taking my son and/or daughter to school, and then sitting in front of a computer from 8:00 am.

What I do then depends on what is urgent: handling the community, replying to messages, checking issues or pull requests on Github, fixing serious problems, or from time to time, designing and implementing new features.

I don’t have a fixed schedule. Since there aren’t many developers contributing to Redis, I’ve found that I can be most effective for the userbase if I’m flexible. Moreover, when I feel tired or annoyed, I use the trick of working on something about Redis that is exciting for me in a given moment. So I likely end the day with something good, even if it was a “bad day”, like when you slept too little for some reason and need a bit more motivation.

Currently I’m working on Redis 3.0.0, which is in release candidate stage right now. This release brings clustering to Redis, and it was definitely challenging. It’s non-trivial to find a set of tradeoffs that make Redis Cluster look like single-node Redis, from the point of view of the use-cases you can model with it. In the end, I tried to overcome the limitations by sacrificing certain other aspects of the system, in favor of what looked like the fundamental design priorities.

I’m also currently working on a new open-source project: a distributed message queue. It is the first program I’ve written directly as a distributed system. I’m writing it because I see that many people use Redis as a message broker for delayed tasks, even though it was not designed for this specifically and there are other systems focused on this task. So I asked myself: if Redis is not specifically designed for this, what makes people happy to use it compared to other systems? Maybe I can extract what is good about Redis and create a new system that is Redis-like but specifically designed for the task. So Disque was born (code not yet released).
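
The pattern he’s describing is easy to see in miniature. One common way to fake a delayed-task queue on plain Redis (a sketch of the general pattern, not of Disque) is a sorted set scored by delivery time, shown here with the node_redis client:

  var redis = require("redis");
  var client = redis.createClient();

  // Producer: schedule a task to become runnable 60 seconds from now.
  client.zadd("delayed", Date.now() + 60000, JSON.stringify({ id: 123 }));

  // Worker: every second, claim any tasks whose time has arrived.
  setInterval(function () {
    client.zrangebyscore("delayed", "-inf", Date.now(), function (err, tasks) {
      (tasks || []).forEach(function (task) {
        // ZREM returns 1 to exactly one caller, so competing workers
        // can't both run the same task.
        client.zrem("delayed", task, function (err, removed) {
          if (removed === 1) console.log("run", task);
        });
      });
    });
  }, 1000);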

When are you at your happiest whilst coding?

My preferred moment is when I write a lot of code, maybe without even trying to compile at all for days. Adding new abstractions, structures, functions, with an idea about how to combine all this to create something. Initially everything is a bit of a blur and you don’t really know if it’s going to work. But, eventually every missing piece starts to be filled with code, and the system makes sense, compiles, and suddenly you see it working on your computer. Something magical happened, something new was created where there was nothing a few days ago. I believe this is an aspect to love about programming.


What is your dev environment?

I use a MacBook Air 11″ and a desktop Linux system that I use mostly via SSH. I have an external monitor and keyboard that I attach to the MBA when I’m at home. I work at home in a room that is half office and half gym. I code sitting down, doing small “sprints” of 45 minutes to a maximum of 1.5 hours. Then I get up and either grab a coffee, exchange a few words with my wife, go to collect one of my children, or the like.

I don’t like music when I code, as I love the kind of music that needs pretty active listening, like Squarepusher, Venetian Snares, classical music, and so forth. I don’t stop too late, maybe 6pm, but if I haven’t reached 8 hours of work, then I work again when my family goes to bed.

I use OS X on the MBA, with iTerm2 and the usual command line tools: Clang and Vim. On the Linux system, I use Ubuntu with GCC, Vim, and Valgrind. Vim and Valgrind are two fundamental tools I use every day.

From the point of view of desktop software, I use Google Chrome, Firefox, and Evernote to take a number of notes while I write code – kind of like a TODO list to avoid losing focus if I think about something the code needs.

I use Twitter a lot for work with the only Twitter client I actually like, and I wish it was more actively developed: YoruFukurou. For IRC, I use Limechat. For messaging, Telegram.

What are your favorite books about development?

A long time ago I enjoyed Structure and Interpretation of Computer Programs (SICP), but I never read it again. Books on algorithms were the ones that turned me from a spaghetti-coder into something a bit better, probably. I’m not a fan of Best Practices, so I don’t like books such as Design Patterns or The Pragmatic Programmer. Recently I re-read The Mythical Man-Month (TMMM) and The Design of Design; however, I find the latter less useful. I would love to read more genuinely detailed accounts of software project experiences, and TMMM is more like that.


What technologies are you currently trying out?

I want an excuse to try Rust and Swift in some non-trivial software project. I don’t see Go as something that is going to serve as a “better C”, so I’m waiting for something new.

I’m not a big fan of new things in IT when they have the smell of being mostly fashion-driven, without actually bringing something new to the table. So I tend to not use the new, cool stuff unless I see some obvious value.

When not coding, what do you like to do?

Spending time with family, being a Father, is something I do and enjoy. I also try to go outside with my Wife without the children when possible, to have some more relaxed, fun together. I enjoy doing power-lifting and I also run from time to time. I like theatre – I used to watch many pieces, and recently I got interested in it again and I’m attending an acting laboratory.

What advice would you give to a younger version of yourself starting out in development?

Don’t fragment your forces into too many pieces. Focus on the intersection of the things that you find valuable and that many people find valuable, since niches are a bit of a trap.

 

Thanks to Salvatore for taking the time to speak with us. Have someone you’d like to be a guest? Let us know @FogCreek.

 

Previous dev.life Interviews

Jon Skeet
Bob Nystrom
Brian Bondy
Jared Parsons

by Gareth Wilson at March 02, 2015 11:44 AM

John Udell

Tag teams: The collaborative power of social tagging

My first experiences with social tagging, in services like Flickr and Delicious, profoundly changed how I think about managing shared online resources.

The freedom of an open-ended folksonomy, versus a controlled taxonomy, was exhilarating. But folksonomies can be, and often are, a complete mess. What really got my attention was the power that flows from disciplined use of tags, governed by thoughtful conventions. Given a set of resources -- photos in the case of Flickr, URLs in the case of Delicious -- you could envision a set of queries that you wanted to perform and use tags to enable those queries.

To read this article in full or to leave a comment, please click here

by Jon Udell at March 02, 2015 11:00 AM

March 01, 2015

Mark Bernstein

Garment of Shadows

Mary Russell finds herself in Morocco. She has misplaced her memory; she can’t quite remember who she is or how she came to be in Marrakech. We know (though she does not) that she has also misplaced her husband, Mr. Sherlock Holmes. Intrigue and action in 1920s North Africa ensue, with a lovely portrait of Hubert Lyautey, the French resident-general, and of his expert and capable majordomo, Youssef.

March 01, 2015 05:24 PM

Squiggle Birds

Dave Grey’s Squiggle Birds,

a five minute exercise to convince people that, yes, they can draw well enough.

March 01, 2015 04:49 PM

February 28, 2015

Erlang Factory

Erlang User Conference: Call for Talks open until 10 March

The conference will take place on 9-10 June. It will be followed by one day of tutorials on 11 June and 3 days of expert training on 11-13 June. 

We are currently accepting talk submissions for the Erlang User Conference 2015. If you have an interesting project you are working on, or would like to share your knowledge, please submit your talk here. The deadline is the 10th of March.

February 28, 2015 02:17 AM

ONE DAY ONLY: 20% off all training courses at Erlang Factory SF Bay Area Starts 6 am - Ends 11.59 pm PST on 23 Jan

Erlang Express - Robert Virding 3-5 March
Erlang for Development Support - Tee Teoh 3-5 March
Cowboy Express - Loïc Hoguin 3-5 March
OTP Express - Robert Virding 10-12 March
Brewing Elixir - Tee Teoh 10-12 March
Riak - Nathan Aschbacher 10-12 March
Kazoo - James Aimonetti 10-12 March

http://www.erlang-factory.com/conference/show/conference-6/home/#home

February 28, 2015 02:17 AM

Erlang Factory SF Bay Area : 40 Very Early Bird tickets released on 16 December

On Monday 16 December 2013, after 12:00 PST we will release a batch of 40 tickets at the Very Early Bird rate of $890. 

The new and improved website for the Erlang Factory San Francisco Bay Area is coming out on Monday, and it will feature a first batch of over 25 speakers. 

http://www.erlang-factory.com/conference/SFBay2014

February 28, 2015 02:17 AM

February 27, 2015

Dave Winer

Problem with Scripting News in Firefox?

I was working with Doc Searls this afternoon, and saw how Scripting News looks in the version of Firefox he has running on his laptop. It looks awful. One tab is visible all scrunched up in a corner of the window.

I have the latest Firefox on my Mac and it looks fine. All the tabs are where they should be. If you're seeing the problem on your system and have any idea what the cause might be, please leave a comment below. It really bothers me that what Doc is seeing is so awful.

February 27, 2015 10:23 PM

Excuse the sales pitch

First, thank you for reading this blog.

Now I want to try to sell you on an idea.

The idea: Supporting the open web.

Everywhere you look things are getting siloized, but for some reason (I must be an idiot) I keep making software that gives users the freedom to choose. If my software isn't the best for you, now or at any time in the future, you can switch to whatever you like.

I make it because I dream of a world where individuals have power over their own lives, and can help inform each other, and not be owned by companies who just want to sell them stuff they don't want or need.

I work hard. And I stay focused on what I do. But my ideas don't really get heard that much. And that's a shame, because if lots of people used my software, that would encourage more people to make software like it, and eventually we'd be back where we were not that long ago, with you in charge of your own web presence.

Anyway, I'm almost finished. I spent two years porting everything I care about to run in Node.js and in the browser. I don't plan to dig any more big holes. This is basically it.

What can you do? Well honestly, if you see something you think is empowering or useful here on my blog, please tell people about it. The tech press will not be covering it, so you can't find out about it that way. It would be a shame to have put all this effort into creating great exciting open software, only to have very few people ever hear about it.

Thanks again for reading my blog, and I hope to be making software for you and your friends for a long time to come.

PS: The place to watch, for the new stuff, the stuff that has the most potential of rewriting the rules of the web, is on my new liveblog. That's where I'm telling the story I think is so important.

February 27, 2015 04:51 PM

Mark Bernstein

Pardon The Dust

I’ve been busy as a beaver, modernizing the core classes that are the foundation of Tinderbox export templates. These have decent tests, but this weblog (especially its main page) is a key acceptance test; if you notice things that are not right, Email me. Thanks!

February 27, 2015 02:14 PM


Fog Creek

A Developer’s Guide to Growth Hacking – Tech Talk

 

Given the media hype that surrounds the term ‘Growth Hacking’, you can be forgiven for dismissing the whole thing as another marketing buzzword. But what can get lost in the hubbub are some useful, development-inspired, working practices that can help a team focus on maximizing growth.

In this Tech Talk, Rob Sobers, Director of Inbound Marketing at Varonis, tells you all you need to know about Growth Hacking. Rob explains what Growth Hacking is and describes the processes key for it to be effective – from setting goals, to working through an experimentation cycle and how it works in practice.

Rob was formerly a Support Engineer here at Fog Creek, and is the creator of his own product, Munchkin Report. He writes on his blog about bootstrapping and startup marketing.

 

About Fog Creek Tech Talks

At Fog Creek, we have weekly Tech Talks from our own staff and invited guests. These are short, informal presentations on something of interest to those involved in software development. We try to share these with you whenever we can.

 

Content and Timings

  • What is Growth Hacking (0:00)
  • People (2:34)
  • Process (3:22)
  • Setting Goals (5:25)
  • Experimentation Cycle (6:12)
  • How It Works In Practice (12:03)

 

Transcript

What is Growth Hacking

I was a developer, started out my career as a developer, kind of moved into the design space, then did customer support here, and now I’m doing marketing. I’ve been doing marketing for the past two and a half, three years. Then this phrase, “growth hacker”, cropped up. I kind of let the phrase pass me by. I didn’t discuss it. I didn’t call myself a growth hacker. I stayed completely out of it, mainly because of stuff like this.

It’s just overwhelming. Google “growth hacking” and you’ll want to throw up. What it really comes down to is that growth hacking is not at all about tactics. It’s not about tricks. It’s not about fooling your customers into buying your software, or finding some secret hidden lever to pull that’s going to unlock massive growth for your company. It’s about science. It’s about process. It’s about discipline. It’s about experimentation. Tactics are inputs to a greater system.

If someone came up to you, a Starcraft player, and said, “What tactic should I use?”, you would have a million questions: “Well, what race do you play? Who are you playing against? What does your opponent like to do? What race is he playing? Is it two vs. two or three vs. three?” There are so many different questions. So if someone comes up to me and says, “What tactics? What marketing channels should I use for my business?”, you can’t answer it. The answer is not in the tactics.

This is how Sean Ellis defines growth hacking: “Growth hacking is experiment driven marketing.” You walk into most marketing departments, and they’ve done a budget, and they sit in a room and decide how to divvy up that money across different channels. “Okay, we’ll buy some display ads. We’ll do some Google AdWords. We’ll invest in analyst relations.” But they’re doing it blind. Year after year, they’re not looking at the results, not looking at the data, and not running experiments. That’s really the difference.

I took it one step further. I said growth hacking is experiment-driven marketing executed by people who don’t need permission or help to get things done, because I think growth hacking is a lot about the process. It’s about culture, and embracing the idea of doing a whole bunch of marketing experiments week over week. But if you have a team that is only idea-driven and tactic-driven, and they have to farm out all of the production to multiple other stakeholders in the business, like teams of devs or designers, then you’re not able to iterate. So to simplify it I just said: growth hacking equals people, people who have the requisite skills to get things done from start to finish, and process.

People

So let’s talk about people. You don’t just wake up in the morning and say, “Let’s do some marketing.” You have to know what your goals are, then break them down into little pieces, and attack based on that. This is a system that was devised by Brian Balfour at HubSpot; I call it the Balfour method. A good way to measure a person you’re hiring to be a growth hacker and run growth experiments is to show them this chart and ask, “How far around the wheel can you get before you need to put something on somebody else’s to-do list?” Now, granted, you’re not always going to be able to hire people who can do everything. I’ve seen it work where people can do bits and pieces, but it sure is nice to have people who can do design and development on a growth team.

Process

So before you begin implementing a process at your company, what you want to do is establish a method for testing. Then you need analytics and reporting. I’ve seen a lot of companies really miss the boat with their analytics. They’ve got it too fragmented across multiple systems; the analytics for their website is too far detached from the analytics within their products. You don’t want to stop at the front-facing marketing site. It’s great to run A/B tests and experiments on your home page, and try to get more people to click through to your product page and your sign-up page, but there are also deep product levers you can experiment with: your onboarding process, your activation, and your referral flow.

So what you’re really looking for, and the reason why you establish a system and a method, is number one to establish a rhythm. At my company we were in a funk where we were just running an A/B test every now and then, when we had spare time. It’s really one of the highest-value things we could be doing, yet we were neglecting it and working on other projects. The biggest thing we did was implement this process, which forces us to meet every Monday morning to lay out our experiments, really define what our goals are, and establish that rhythm.

Number two is learning, and that basically means all the results of your experiments should be cataloged so that you can feed them back into the loop. So if you learned a certain thing about putting a customer testimonial on a sign-up page, and it increases your conversion by 2%, maybe you take a testimonial and put it somewhere else where it might have the same sort of impact. You take those learnings and reincorporate them, or you double down.

Autonomy goes back to teams. You really want your growth team to be able to autonomously make changes and run their experiments without a lot of overhead. And then accountability: you’re not going to succeed the majority of the time. In fact you’re going to fail most of the time with these experiments. The important thing is that you keep learning, you look at your batting average, and you improve things.

Setting Goals

So Brian’s system has a macro level and a micro level. You set three levels of goals: one that you’re most likely to hit, about 90% of the time; another that you’ll hit probably 50% of the time; and a real reach goal that you’ll hit about 10% of the time. An example would be: let’s improve our activation rate by X%. That’s our stated goal. Now, for 30 to 60 days, let’s go heads-down and run experiments until the 60 days are up, and then we’ll look and see if we hit our OKRs, with checkpoints along the way. Then you zoom in and experiment. This is the week-by-week basis: every week you’re going through this cycle.

Experimentation Cycle

There are really four key documents in this experimentation cycle. The first is the backlog. That’s where you catalog all your different ideas. Then you have a pipeline, which tells you what you’re going to run next, as well as what you’ve run in the past, so that somebody new on the team can come in, take a look, and see what you’ve done to get where you are today. Then there’s your experiment doc, which serves as a specification.

So when you’re getting ready to do a big test, let’s say you’re going to re-engineer your referral flow, you’re going to outline all the different variations. You’re going to estimate your probability of success, and how you’re going to move that metric. It’s a lot like software development: you’re estimating how long something is going to take, and you’re also estimating the impact. And then there are your playbooks, good for people to refer to.
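
As an illustration, an experiment doc might be shaped like this. The field names follow Brian Balfour’s public write-ups, and the values here are invented:

  Experiment: testimonial on the signup page
  Hypothesis: adding a customer testimonial will lift signup conversion by 2%
  Metric: signup conversion rate
  Variations: control; testimonial above the form
  Probability of success: high (a testimonial already worked on the pricing page)
  Resources: one designer-day, one developer-day
  Results / learnings: (filled in after the test runs)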

With Trello it actually works out really well. The brainstorm column, the list here, is where anybody on the team can dump links to different ideas, or write up a card saying, “Oh, we should try this.” It’s totally off the cuff: clear out whatever ideas are in your head and dump them there, so you can discuss them during your meeting where you decide which experiments are coming up this week.

The idea is that you actually want to go into the backlog. The pipeline holds the ones I’m actually going to do soon: I’ll make a card and put it in the pipeline. Then, when I’m ready to design the experiment, I move it into the design phase and create the experiment doc. I set my hypothesis: “I’m going to do this. I think it’s going to have this impact. Here are the different pages on the site, or things within the product, that I’m going to change.” And then later the doc holds all of the learnings and the results.

One key tip that Brian talks about: when you’re trying to improve a certain metric, rather than asking, “Okay, how can we improve conversion rate?”, think about the different steps in the process. It helps you break the problem into multiple chunks, and then you start thinking a little more appropriately. This is actually where tactics come into play, when you’re brainstorming, because this is where you want to look to others for inspiration. If you’re interested in improving your referral flow, maybe use a couple of different products, or think about the products you use where you really thought the referral flow worked well, and then use that as inspiration. You don’t take it as prescription. You don’t try to apply it one-to-one, but you think about how it worked with their audience and then try to transfer it over to yours.

Prioritization: there are really three factors here. You want to look at the potential impact. You don’t necessarily know it, but you want to gauge the potential impact should you succeed with this experiment. Then there’s the probability of success, and this can be based on previous experiments that were very close to this one. So like I mentioned earlier, say you had a certain level of success with a customer testimonial in one part of your product or website, and you’re going to just reapply it elsewhere. You can probably set the probability to high, because you’ve seen it in action with your company and your product before.

But if you’re venturing into a new space, let’s say Facebook ads, and you’ve never run them for your product before: you don’t know what parameters to target, you don’t know anything about how the system works, dayparting and all that, so you probably want to set the probability to low. And then obviously the resources: do I need a marketer? Do I need a designer, a developer, and how many hours of their time?

So once you move something into the pipeline, I like to have my card look like this. I have my category, my label. So this is something to do with activation, trying to increase our activation rate. And then I say, “If successful, this variable will increase by this amount because of these assumptions.” Then you talk with your team about those assumptions, and try to explain why. The experiment doc, as I mentioned before, is sort of like your spec. Rather than implementing the real thing upfront, if you can get away with just putting up a landing page and worrying about the behind-the-scenes process later, do that. Like if you’re thinking about changing your pricing: maybe change the pricing on the pricing page, and not do all the accounting and billing code modifications just yet.
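
To make that concrete, here’s a minimal sketch of what such a pipeline card might capture, written as a plain JavaScript object. The field names are hypothetical, not Brian’s actual template:

    // A hypothetical experiment card: hypothesis, expected lift, assumptions.
    var experimentCard = {
        category: "activation",                      // the metric this experiment targets
        hypothesis: "Add a customer testimonial to the signup page",
        expectedLift: "+5% activation rate",         // the impact if it succeeds
        assumptions: ["testimonials already lifted conversions on the pricing page"],
        probability: "high",                         // high when a close previous experiment worked
        resources: { designerHours: 4, developerHours: 8 }
    };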

Implement, there’s really not much to say about that. The second to last step is to analyze. So you want to check yourself as far as that impact. Did you hit your target? Was it more successful than you thought, less successful? And then most importantly, why? So really understanding why the experiment worked. Was it because you did something that was specifically keyed in on one of the emotions that your audience has? And then maybe you carry that through to later experiments.

And then systemize. Another good example of systemizing actually comes from HubSpot: the idea of an inbound marketing assessment. It’s actually their number one lead-gen channel. They offer, for any company that wants it, to sit down one-on-one and do a full assessment of their website, their marketing program, et cetera. When they were doing these one-on-one discussions, those became their best leads, the ones most likely to convert.

So they made something called Website Grader, which you can find online, and it’s sort of like the top of the funnel for that marketing assessment. Someone’s thinking, “Ah, I don’t know if my website’s good at SEO; am I getting a lot of links?” So they’ll plug it into Website Grader. It’ll go through and give them a grade. They’ll get a nice report, and then a sales rep in the territory that person lives in will have a perfect lead-in to an inbound marketing assessment, which they know is a high-converting activity should someone actually get on the phone with their consultant. So it’s a good example of productizing.

How It Works In Practice

So this is just sort of how the system works. Monday morning we have our meeting; it’s our only meeting. It’s about an hour and a half, and we go through what we learned last week. We look at our goals, make sure we’re on track for our OKR, which is our Objective and Key Result. And then we look at what experiments we’re going to run this week, and then the rest of the week is all going through that rapid iteration of the cycle: brainstorming, implementing, testing, analyzing, et cetera.

So you kind of go through these periods of 30 to 90 days of pure heads-down focus, and then afterwards you zoom out, and you say, “How good am I at predicting the success of these experiments? Are we predicting that we’re going to make big impacts, or small impacts? Are our resource-allocation predictions accurate?” And then you want to always be improving on throughput. So if you were able to run 50 experiments during a 90-day period, the next 90 days you want to be able to run 55 or 60. So you always want to be improving.

by Gareth Wilson at February 27, 2015 11:33 AM

February 26, 2015

Fog Creek

4 Steps to Onboarding a New Support Team Hire

At Fog Creek, every now and then the Support team has the pleasure of onboarding a new member. We know what most of you are thinking: “Wait! Did they just say ‘pleasure’?” Yes, yes we did. Team onboarding does not have to be an irksome obstacle in your day-to-day work – it’s a key milestone for your new hire’s long-term success, and the process should be repeatable and reusable.

If you’ve ever been in support, you know there can be a lot of cached knowledge representing the status quo, and there is usually, and sometimes exclusively, a “show and tell” style of training. This is the fire drill of knowledge transfer, and it’s an arduous process for all concerned. Not only does it take longer for the new hire to get up to speed, but you also have at least one person no longer helping your customers. For anyone who has ever worked in a queue, we’re sure you can agree that when someone steps out, the team feels the impact immediately.

Our Support team mitigates this by using a well-documented onboarding process. And we do it without a giant paperweight… err training manual. Similar to how we onboard new hires to the company, we also leverage Trello to onboard our new Support team hires. The items on the board are organized and the new hire just works down the list.

The board separates out individual items and team-oriented items. This keeps the new person accountable for their tasks, and it keeps the team involved so that they don’t accidentally abandon them.


1. Read Up on the Essentials

The first item on the board is titled “What to read on your first day”. This card links to a wiki page that talks about the things the new person needs to know before they can do any real work.

Next, is the “Support Glossary”. This is essential as they’re going to hear words, phrases, and acronyms galore. So scanning through this card helps them start to get a feel for the “lingo”.

With this done, it’s time to join the company chat and get a few nice “hellos” and introductions from other folks in the company. Primarily, this stage helps them to start assimilating the knowledge they’ll need to be successful in the role.

The assimilation process starts with briefly describing the Support team’s role and responsibilities within the organization. This covers our two main workflows: interrupts and queue-based. Then we move on to our guiding customer service principles.

After reading several more cards, which each link off to wiki pages, the new person moves them over to the ever-so-rewarding “Done” column. Starting to feel accomplished, they can start to get their hands dirty.

You may be wondering “couldn’t they just have one card and link to a wiki page with a list of articles?” Sure. But, that process tends to be more of a rabbit hole, and we want our Support team hires to have just the right amount of information in phases, and not dumped on them all at once.

2. Dogfood Until it Hurts

After reading for what probably feels like weeks (not really, a day maybe), the new person starts using our products. Since we dogfood our own products, this is a great way to discover and learn about them. They can later use this experience to relate to and help new customers. They create production accounts, staging accounts, and start a series of configurations. This helps them get into the Support workflow.

3. Go Under the Hood

Configuring web application sites isn’t all that hard, so we up the challenge. The new hire starts creating any necessary virtual machines (VMs). Each one is identified on a separate card on the board, naturally. These VMs aid the new Support member in troubleshooting customer environments by replicating them as best as they can.

Since Kiln and FogBugz sit on top of databases, the new person also starts to configure those systems and get familiar with the database schemas. This helps build an understanding of our products’ foundations.

Once they have what we call the “basics”, they can start tricking out their dev machine. This card links to another board with all the juicy details maintained by all devs in the company.

4. Get Immersed in the Workflow

There are several more cards which discuss processes and procedures. These include when to use external resources, where they are located, and how to use them.

A key part of Support is a robust workflow. The team helps the new person get immersed in the workflow by adding them to scripts, giving them repository permissions in Kiln, adding them to recurring team calendar events, and so on. Most importantly, they start to see how the Support team shares knowledge, and they work on some real customer cases where they will be helping our customers be amazingly happy!

We’ve found that using a lightweight but clearly defined process to onboard a new hire to our Support team is key to their efficiency and long-term success. It helps the new hire become self-sufficient, as well as know where they can go for help as they gain experience.

by Derrick Miller at February 26, 2015 11:22 AM

February 25, 2015

Dave Winer

Comments on the Node Foundation

Eran Hammer posted a long piece yesterday about why he does not support a Node Foundation.

I am a relative newcomer to Node, having started developing in it a little over a year ago. I've shipped a number of products in Node. All my new server software is running in Node, most of it on Heroku. I love Node. Even though it's a pain in the ass in some ways, I've come to adore the pain, the problems are like crossword puzzles. I feel a real sense of accomplishment when I figure it out.

The server component of my liveblog is running in Node, for example.

I am new to Node but I also have a lot of experience with the dynamics Hammer is talking about, in my work with RSS, XML-RPC and SOAP. What he says is right. When you get big companies in the loop, the motives change from what they were when it was just a bunch of ambitious engineers trying to build an open underpinning for the software they're working on. All of a sudden their strategies start determining which way the standard goes. That often means obfuscating simple technology, because if it's really simple, they won't be able to sell expensive consulting contracts. He was right to single out IBM. That's their main business. RSS hurt their publishing business because it turned something incomprehensible into something trivial to understand. Who needs to pay $500K per year for a consulting contract to advise them on such transparent technology? They lost business.

IBM, Sun and Microsoft, through the W3C, made SOAP utterly incomprehensible. Why? I assume because they wanted to be able to claim standards-compliance without having to deal with all that messy interop.

As I see it Node was born out of a very simple idea. Here's this great JavaScript interpreter. Wouldn't it be great to write server apps in it, in addition to code that runs in the browser? After that, a few libraries came along, that factored out things everyone had to do, almost like device drivers in a way. The filesystem, sending and receiving HTTP requests. Parsing various standard content types. Somehow there didn't end up being eight different versions of the core functionality. That's where the greatness of Node comes from. We may look back on this having been the golden age of Node.

There are reasons why, once a technology becomes popular, it's very hard to add new functionality. All the newcomers want to make a name for themselves by authoring one of the standard packages. Everyone has an idea how it should be done, and won't compromise. So what happens is very predictable, and NOT BAD. The environment stops growing. I saw that happening in RSS, as all the fighting over which way to rip up the pavement and start over took over the mail lists. So when RSS 2.0 came out I froze it. No more innovation. That's it. It's finished. If you want to do new stuff, start a module (very much like the NPM packages of Node). Luckily, at that moment, I had the power to do that, as Joyent did with Node, when embarking on this ill-advised foundation track.

Now there are many things about Node culture that I don't understand, being a newbie, as I am. But based on what I know about technology evolution from other contexts, the lack of motion at Joyent wasn't a problem, it was realistic. It was what I, as a removed-from-the-fray developer want. I want this platform to stay what it is. I want to rock and roll in my software, not be broken every time someone decides we should hit the ball from one side of the plate, then the other, then back to the original. Nerd debates about technology never end, until someone puts their foot down and says, no more debates, it's done. And the confusion in those debates is always manipulated by the BigCo's who have motives that we'd be happier not really understanding. I know I would be. Nightmarish stuff.

I love Node. I want it to be solid, that's the most important thing to me. I'd love to see a list, in a very simple newbie-friendly language, that explains what it is that Node needs so desperately to justify both the fork, and the establishment of this foundation. Seems to me we might be pining for the good old days of last year before too long. ;-(

PS: Hat-tip to the io.js guys. In the RSS world, the forkers claimed the right to the name. At least you guys had the grace to start with a new name, so as not to cause the kind of confusion the RSS community had to deal with.

February 25, 2015 03:54 PM

Fog Creek

Help Work to Flow – Interview with Sam Laing and Karen Greaves


We’ve interviewed Sam Laing and Karen Greaves, Agile coaches and trainers at Growing Agile, in Cape Town, South Africa. Together they wrote ‘Help Work to Flow’, a book with more than 30 tips, techniques and games to improve your productivity. We cover how software developers can improve their productivity and manage interruptions, why feedback and visible progress are important to staying motivated and how teams can hold better meetings.

They write about Agile development techniques on their blog.

Content and Timings

  • Introduction (0:00)
  • Achieving Flow (2:06)
  • Importance of Immediate Feedback (3:07)
  • Visible Progress (4:27)
  • Managing Interruptions (5:42)
  • Recommended Resources (8:50)

Transcript

Introduction

Derrick:
Today we have Sam Laing and Karen Greaves who are Agile coaches and trainers at Growing Agile based in South Africa. They speak at conferences and write about Agile development. They’ve written 8 books between them, including their latest book, Help Work to Flow, part of the Growing Agile series. Sam and Karen, thank you so much for taking your time to join us today all the way from South Africa. Why don’t you say a bit about yourselves?

Karen:
We both have worked in software our whole careers and we discovered Agile somewhere along the line about 8 years ago and figured out it was a much better way to work. Then in 2012, so just over 3 years ago, we decided that’s what we wanted to do, was help more people do this and figure out how to build better software. So we formed our own company, Growing Agile, it’s just the 2 of us and we pair work with everything we do, so you’ll always find us together.

Really what we do is just help people do this stuff better and use a lot of common sense. What’s been quite exciting in the last few years is now being business owners of our own small business. We’re getting to try lots of Agile techniques and things for ourselves and it’s lots of fun and a constant journey for us.

Derrick:
I wanted to touch on the book, Help Work to Flow, what made you want to write that?

Sam:
What actually happened is we joined a meetup called the Cape Marketing meetup, which is for small business owners to learn about marketing techniques and how to market their small businesses. In doing that we realized that a lot of them are very, very busy and think they need to hire people to help them with their admin. We ran a mini master class with them on Kanban, on how to use that as a small business owner. They absolutely loved it. They’re still sending us pictures of their boards and how it’s helping them with their workflow. We were like, well, actually we have a lot of these tips that could help other people, so let’s just put them together into a book.

Achieving Flow

Derrick:
I want to touch on flow for a bit, it’s often elusive to developers. How can developers help themselves achieve flow?

Sam:
The first thing that pops to mind is pairing. To help with flow, having a pair (this is my pair) helps a lot. When someone’s sitting next to you, it’s very difficult to get distracted with your emails and your phone calls and something else, because this person right here is keeping you on track and keeping you focused.

Another one would be to avoid yak shaving. I’m an ex-developer, I know exactly how this happens. You’re writing a function for something and you need to figure out some method, and you go down a Google rabbit hole and then into some chat room. Next thing you’re on Stack Overflow answering questions that other people have asked that have nothing to do with what you were doing. Again, to have a pair to call you on that yak shaving is first prize, but otherwise recognizing when you personally are yak shaving, bonus points.

Importance of Immediate Feedback

Derrick:
Enabling immediate feedback is said to be key to achieving flow. How can this be applied within the context of software development?

Karen:
What we see lots of software teams do is they all understand that code reviews are good, and if they’re not doing pair programming then they should do code reviews. You even get teams where testers get their test cases reviewed, but often what we see is teams leave that until the last minute: “We’ve done everything, now let’s just quickly do the review.”

One of the things we teach teams is instead of thinking of it as a review, which is a big formal thing you do at the end, we use something that’s just called “show me”. I think we got it from Janet Gregory, who wrote the book on Agile Testing. Literally as soon as you’re done with a small task, something that took you an hour, maybe two, you grab someone else on your team or working with you and you show them for 5 minutes.

Like, “Hey, could you just come and see what I did?” They quickly look at it, and you get immediate feedback. Does it meet their expectations? Are there any obvious issues? The great thing about reviews where you get to explain what you did is that sometimes you find the issues yourself. So definitely use that “show me” idea, versus making reviews a big thing. We’ve seen that make a radical change in feedback for teams.

Visible Progress

Derrick:
The book highlights the importance of clear goals and visible progress. What do you mean by visible progress and why is it important to flow?

Sam:
When you physically write down what’s top of mind, it kind of leaves your brain, so you don’t have to worry about those things anymore because you’ve written them down. Imagine a board in front of you with all these things that have been popping out of your mind: when you move those stickies across to done, you get this sense of achievement.

You’re automatically sending information out to those around you, and to yourself, on how much you’ve done versus how much you’ve still got to do. It helps you get into the flow of getting stuff done. It also helps other people realize how busy you are and whether they should or shouldn’t add more to your workload.

Karen:
It’s really interesting, earlier today our personal task board, which is a physical board that sits between us, was like overwhelmed with too much stuff. I was like, “I just feel like we’re not getting anything done.” Sam took all the stickies and she put them on another piece of paper and said, “Okay, here are the 8 we have to do today.” Just looking at a board with 8 things, I said, “Okay, I can do 8 things today.” Before I was like, “I can’t even start anything because there’s so much stuff here.” That really, really helped.

Managing Interruptions

Derrick:
Here at Fog Creek every developer has a private office, to help them sort of reduce interruptions. What other techniques can you recommend to help developers manage interruptions?

Karen:
Firstly, you’re quite lucky; most of the teams we encounter are using open-plan offices, and personal offices are quite rare. But one of the teams we worked with had this great idea, and it’s in the book: it’s called the point of contact for emergencies, pronounced “POCE”. What happens with that is the team agrees who’s going to be the interruptible person for this week, and they even put a sign on their doors: if you’ve got an urgent issue, speak to Adrian; Adrian is the POCE this week.

They rotate that within the team, so that person’s going to have a lot of interruptions that week, but everyone else is interrupted a lot less. The rule is, that person can then talk to team members if they can’t solve the problem, but at least then team members are being interrupted by one person who’s part of their team, and probably not as often as by outsiders. That’s one idea that you can use.

Sam:
Another one is, if you do get interrupted, don’t immediately act on the interruption. Add it to your task list, add it to the list of work that you think you’re going to do that day, and prioritize it accordingly. Often if you get disrupted with “do this and do that and do that,” you’ve pushed your whole day out of sync, and those things are usually not as important as the other things.

Derrick:
We’ve touched on ways that an individual developer can help themselves but how can dev management go about encouraging flow across whole teams or organizations?

Sam:
One of the biggest areas where we see waste is meetings. Often you have a lot of people involved in a meeting, and it’s poorly facilitated or not facilitated at all. Also, the actual goal of the meeting sometimes is just to talk about what’s going to happen in the next meeting, which is atrocious and such a waste of time.

For management to encourage having facilitators, to encourage people to get facilitation skills so that these meetings aren’t such a waste of time, would already be a huge money and time saver. Also look at having meeting-free days, so that people aren’t rushing from one meeting to the next, or only have one hour of work between 3 meetings. Rather, have one day where you are completely immersed in meetings and not going to do any work, and then 2 days of no meetings, where you can focus.

Karen:
I mean, we came up with that because we found with us being coaches and trainers, we’re often meeting new clients, giving them proposals or whatever. We go around quite a lot and sometimes we do onsite work and then we’re in the office. We just felt like we were getting nothing done. We did it the other way around, we came up with out-of-office days, so on 2 days a week we’ll be out of the office and everything has to fit into one of those 2 days if we’re meeting someone else. Then the other 3 days we had in the office and we actually got a lot of things done. That’s where we identified that and it still works really, really well for us.

Recommended Resources

Derrick:
What are some resources that you can recommend for people wanting to learn more about some of these types of techniques?

Karen:
Really if you just Google productivity tips, you’ll find a whole bunch. The trick is that they’re not all going to work for you, so try a couple.

Sam:
Even the ones in our book, not everything is going to work for you and it’s definitely not all going to work at the same time.

Karen:
Our advice is find tips wherever you can, try them and then keep the ones that work.

Derrick:
Sam and Karen, thank you so much for joining us today.

Karen:
Thank you.

Sam:
Thanks very much.

by Gareth Wilson at February 25, 2015 11:52 AM

February 24, 2015

Dave Winer

My 'Narrate Your Work' page

I now finally have a Narrate Your Work public page.

http://reader.liveblog.co/davewiner

This has been a goal of mine for many years.

I had a worknotes outline a few years back, but it wasn't like this.

It was published. This is just what I type as I type it.

I still have another level of project management in an outline that is not public, can't be, because it contains private information.

But I'm going to move more and more into the liveblog.

Also looking forward to actually liveblogging a live event with it. If I knew/cared more about the Oscars I would have done that. Maybe the NBA playoffs? We'll see.

February 24, 2015 10:33 PM

How to fix the Internet economy

Suppose you watched a movie illegally, because it was convenient to watch it at home, and you feel like a jerk for not paying the $15 for a seat in the theater.

There are lots of reasons not to go to a theater. Poor sound quality. People who bring infants to the theater, or talk about the movie loudly as if we were in their living room.

What if?

What if there was a way to pay for the movie, after-the-fact?

No movie chain is going to do this, so why not start a proxy for them?

  1. You'd log into a central site, if you're new, enter your credit card info

  2. Navigate to the page for the movie you just watched.

  3. Click the box, and click OK, and you've paid the fee.

I don't think it should be a voluntary amount. It should be the price of a ticket, either the average price, or the price at a theater local to you (think about this). This isn't charity, it's about convenience. You're not trying to avoid paying for the movie-watching experience, or negotiate a better price.

Where does the money go?

This is where it gets interesting.

At first, at least, no movie company is going to want this to exist. They might try to sue it out of existence. In the meantime, the money goes into escrow accounts, earning interest, to be paid to the rights-holder, when they demand it. Who is the rights holder? That's probably a large part of what will be litigated.

It would be like the lottery, the jackpot would keep growing, and the pressure would build (think about shareholders) to just take the money. And if that were to happen, we would have a whole new way of doing content on the Internet, for pay.
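
To make the mechanics concrete, here's a minimal sketch of the proxy's core, assuming Node with Express. The route, the price, and the in-memory ledger are all hypothetical; a real version would need actual payment processing and durable storage:

    // Hypothetical pay-after-the-fact proxy: payments accumulate per movie
    // in an escrow ledger until a verified rights-holder claims them.
    var express = require("express");
    var app = express();

    var escrow = {};  // movieId -> cents held in escrow

    app.post("/pay/:movieId", function (req, res) {
        var ticketPrice = 1500;  // a stand-in for the average ticket price, in cents
        // A real system would charge the logged-in user's card here.
        escrow[req.params.movieId] = (escrow[req.params.movieId] || 0) + ticketPrice;
        res.send("Thanks. Your ticket price is now held for the rights-holder.");
    });

    app.listen(8080);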

Why?

It's a way to bootstrap a new economy, one that will be useful in other contexts.

For example, I was just reminded that I could pay the New Yorker $1 per month to read all their articles. I would totally do this. If I didn't have to create an account, and give them all my info, and be subject to all the marketing these guys do. The price isn't just $1 a month. That's just what you pay so they can begin to aggressively upsell you. But I'd like to give them the money.

We need a middle-man here, some entity that doesn't belong to the vendors or the users. It cares equally for both. This would ultimately be good for the vendors, because the system is very inefficient.

February 24, 2015 09:30 PM

Question about JavaScript and XML

Update: I came up with a different solution, so this is no longer a priority for me.

I want to write a routine that emojifies an OPML file.

https://gist.github.com/scripting/bcd76324a27b5567874b

It takes OPML text as input, parses it, walks the whole structure, and emojifies the text, and then replaces the original text with the new text.

At the end of the traversal, it re-serializes the XML text, and returns it.

However, I get an error at the crucial point, the call to serializeToString.

Uncaught TypeError: Failed to execute 'serializeToString' on 'XMLSerializer': Invalid node value.
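
For what it's worth, here's a minimal sketch of that traversal, assuming a browser environment and a hypothetical emojify() function; it is not the gist's code. One common cause of that error is handing serializeToString something that isn't a DOM node (a string, or null from a failed lookup) rather than the parsed document itself:

    // Sketch: parse OPML, emojify each outline's text attribute, re-serialize.
    function emojifyOpml(opmltext) {
        var doc = new DOMParser().parseFromString(opmltext, "text/xml");
        var outlines = doc.getElementsByTagName("outline");
        for (var i = 0; i < outlines.length; i++) {
            var node = outlines[i];
            node.setAttribute("text", emojify(node.getAttribute("text")));
        }
        // serializeToString must receive a Node, such as the document itself;
        // anything else throws "Invalid node value."
        return new XMLSerializer().serializeToString(doc);
    }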

February 24, 2015 07:57 PM

John Udell

GitHub for the rest of us

There's a reason why software developers live at the leading edges of an unevenly distributed future: Their work products have always been digital artifacts, and since the dawn of networks, their work processes have been connected.

The tools that enable software developers to work and the cultures that surround the use of those tools tend to find their way into the mainstream. It seems obvious, in retrospect, that email and instant messaging -- both used by developers before anybody else -- would have reached the masses. Those modes of communication were relevant to everyone.


It's less obvious that Git, the tool invented to coordinate the development of the Linux kernel, and GitHub, the tool-based culture that surrounds it, will be as widely relevant. Most people don't sling code for a living. But as the work products and processes of every profession are increasingly digitized, many of us will gravitate to tools designed to coordinate our work on shared digital artifacts. That's why Git and GitHub are finding their way into workflows that produce artifacts other than, or in addition to, code.


by Jon Udell at February 24, 2015 11:00 AM

February 23, 2015

Tim Ferriss


The Tim Ferriss Show - Glitch Mob

(Photo: Ralph Arvesen)

Justin Boreta is a founding member of The Glitch Mob. Their music has been featured in movies like Sin City II, Edge of Tomorrow, Captain America, and Spiderman.

In this post, we discuss The Glitch Mob’s path from unknown band to playing sold-out 90,000-person (!) arenas.  We delve into war stories, and go deep into creative process, including never-before-heard “drafts” of blockbuster tracks!  Even if you have zero interest in music, Justin discusses habits and strategies that can be applied to nearly anything.  Meditation?  Morning routines?  We cover it all.


The Glitch Mob’s last album, Love Death Immortality, debuted on the Billboard charts at #1 Electronic Album, #1 Indie Label, and #4 Overall Digital Album. This is particularly impressive because The Glitch Mob is an artist-owned group.  It’s a true self-made start-up.

This podcast is brought to you by Mizzen + Main. Mizzen + Main makes the only “dress” shirts I now travel with — fancy enough for important dinners but made from athletic, sweat-wicking material. No more ironing, no more steaming, no more hassle. Click here for the exact shirts I wear most often. Order one of their dress shirts this week and get a Henley shirt (around $60 retail) for free.  Just add the two you like here to the cart, then use code “TIM” at checkout.

This episode is also brought to you by 99Designs, the world’s largest marketplace of graphic designers. Did you know I used 99Designs to rapid prototype the cover for The 4-Hour Body? Here are some of the impressive results.

QUESTION(S) OF THE DAY: What music do you listen to when you work? When you really need to get in the zone? Please share in the comments.

Do you enjoy this podcast? If so, could you please leave a short review here? I read them, and they keep me going.

Scroll below for links and show notes…

Selected Links from the Episode

Website | Facebook | Twitter | Instagram | YouTube

Learn More about The Glitch Mob

Live

Official

Movies

Commercial Work

Show Notes (Time Stamps Approximate)

  • World-class attributes of Justin Boreta
  • The Grant Korgan story
  • Unique attributes of The Glitch Mob and the feeling of being on stage in front of 90,000+ people
  • Defining “indie” and “artist owned”
  • The makeup and evolution of The Glitch Mob team
  • Tools and software of The Glitch Mob
  • What exactly is “mastering”?
  • Deconstructing audio engineering software and Ableton
  • How to have your music featured in massive motion pictures
  • The story of the Sin City II trailer
  • Justin plays Animus Vox [approx 36:30]
  • The fourth member, Kevin, and his role in the success of the business
  • Developing the creative process as success comes into play
  • Soliciting feedback, Justin Boreta-style
  • Describing a day in the studio for The Glitch Mob
  • Commonalities of the most successful songs
  • The importance of traditional instrument skills when performing/producing music
  • Justin plays the never before heard 6th version of Our Demons, followed by the finished product [57:30]
  • A rapid learning program for music production
  • The draft version of Fortune Days, followed by the finished product [1:03:15]
  • How many separate tracks are running in a Glitch Mob song?
  • What percentage of samples are custom vs. off-the-shelf?
  • Current revenue streams for The Glitch Mob
  • Favorite pastry, pre-show meditation, defining success, and advice for his 20-year-old self
  • What EDM show should the uninitiated go to first, morning rituals, meditation and morning workouts
  • What is the best piece of advice you’ve ever received? [1:40:20]
  • Justin plays us out with Can’t Kill Us [1:48:45]

People Mentioned

by Ian Robinson at February 23, 2015 10:16 PM

John Udell

Literate programming is now a team sport

In the mid-1980s I worked for a company that squandered a goodly number of tax dollars on a software project for the Army. Do horrors spring to mind when you hear the phrase "milspec software"? It was even worse. My gig was milspec software documentation.

We produced some architectural docs, but our output was dwarfed by piles of that era's precursor to Javadoc: boilerplate extracted from source code. During one meeting, the head of the doc team thudded a two-foot-thick mound of the stuff onto the table and said: "This means nothing to anyone."


Meanwhile, we were following the adventures of Donald Knuth, who was developing an idea he called literate programming. A program, he said, was a story told in two languages: code and prose. You needed to be able to write both at the same time, and in a way that elevated the prose to equal status with the code. Ideas were explored in prose narration, at varying levels of abstraction, then gradually fleshed out in code. 


by Jon Udell at February 23, 2015 11:00 AM

February 22, 2015

Giles Bowkett

Rest In Peace, Carlo Flores

This is my third "rest in peace" blog post since Thanksgiving, and in a sense, it's the saddest.

My friend Carlo Flores has died. He lived in Los Angeles, and on Twitter, every hacker in Los Angeles was going nuts with grief a few days ago. None of them would say what happened. I had the terrible suspicion Carlo killed himself, and I eventually found out I was right:

I can't even imagine the pain he must have been in to seek a way out, knowing that it would make all of us miss him this way.

I love you Carlo. I'm going to rock for you today.

by Giles Bowkett (noreply@blogger.com) at February 22, 2015 03:49 PM

Dave Winer

Good fences make good neighbors

This is really a very simple idea, but a hard one for a lot of people to grasp, probably due to the way our culture mystifies personal relationships. In the classic romance, the Other is only thought to really love you if they can anticipate your every need. If they can read your mind. This is actually an infantile version of love. A parent has to be able to discern the needs of an infant who doesn't have language to explain that his or her diaper needs changing, or has gas, or feels vulnerable and needs a hug, or whatever. It's a guessing game, but there's no choice. But once we develop language, you no longer have to guess what The Other is feeling, he or she can tell you.

There's a great scene in one of my favorite movies, As Good As It Gets, where Melvin is doing something very sweet for Verdell, the dog. An observer says "I want to be treated like that." What she's really saying is she misses being a baby. It's a fine feeling, because it really was, for many of us, great being an infant. But it's a bad basis for an adult relationship.

"Good fences make good neighbors" comes from a famous Robert Frost poem, and I think he's being ironic, but it's still true. If you want to really love someone, you have to always recognize that they are a separate person, and unless you ask, you do not know how they're feeling, or what they really mean, etc. The opposite is true. Unless you've said something, clearly, your husband or wife has no idea what you think. It's so boring to have to say everything, but that's how you stay sane and build trust.

Imagine a relationship as a circle, and draw a vertical line through it. On one side, write the other person's name, and write your name on the other side. You stay in your side, and they stay in theirs. You can touch, share, admire each other, tell jokes, share truths, but only from your side of the line. Once you put your presence in their body and start talking about what they think or see, you've just been invasive. It's a form of abuse, a violation, a psychic rape. Most relationships are complete sloshes, with people all over the place all the time, you never know who's where when. No wonder no one trusts each other! You can't trust someone if you never know when they'll show up inside your body, without permission.

Over the years, I've kept friendships with people who are good at this separation. It's how we get to be intimate, because it's safe to do so. The ones that have fallen away are those where the Other thinks they know what you really mean, even if it isn't anything you said or did. As Melvin said in As Good As It Gets, "this is exhausting." Keeping the Other on their side of the line, after a while, gets too much, and you choose to spend your time with other Others.

As Sting sang: "If you love someone set them free."

Same idea.

PS: I've been doing most of my writing last week on my Liveblog. I love it. At this point I think I will migrate my blog over there. Same old story on Scripting News, it's always in motion. For now if you want to keep up on my writing, you have to follow both blogs. There's a new tab on Scripting News with the full contents of the liveblog. So you don't have to travel very far to keep up. And of course the liveblog has an RSS feed.

PPS: This post is not about you.

February 22, 2015 02:59 PM

February 18, 2015

Dave Winer

Scripting News is a feed reader

I like to think my blog is interesting because of what I write here, but it's also interesting in how it works. And this is something, I think, anyone with a little experience in HTML can appreciate.

If you open most web pages of news sites, you'll see a combination of markup, things in <angle brackets> that tell the browser how to present stuff, and the text that makes up the page. Do a view-source on the home page of the New York Times for example.

If you do the same thing on the home page of Scripting News, you'll see something quite different. The content isn't there. Look more closely, near the top, there's a section of script code that says where it is:

var urlRss = "http://scripting.com/essayblog.xml";
var urlRiver = "http://rss.scripting.com/rivers/iowa.js";
var urlLiveblog = "http://liveblog.co/users/davewiner/rss.xml";
var urlCardFeed = "http://littlecardeditor.com/users/davewiner/rss.xml";
var urlFlickrFeed = "http://api.flickr.com/services/feeds/photos_public.gne?id=22221172@N00&lang=en-us&format=rss_200";
var urlAbout = "http://scripting.com/abouttab.opml";

Each tab on the home page displays a feed, each from a different source of content: my main blog, liveblog, cards, Flickr photos, links, river, and an outline with information about the site.

In other words: Scripting News looks like a blog, but it's actually a feed reader.
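
The pattern is simple to sketch. Here's a minimal, hypothetical version of what one tab could do; it is not Scripting News's actual code, it assumes the feed's server allows cross-origin requests, and the element id is made up:

    // Sketch: load one RSS feed and render its items as links into a tab.
    function loadFeedTab(url, container) {
        fetch(url).then(function (response) {
            return response.text();
        }).then(function (xmltext) {
            var doc = new DOMParser().parseFromString(xmltext, "text/xml");
            var items = doc.getElementsByTagName("item");
            for (var i = 0; i < items.length; i++) {
                var link = document.createElement("a");
                link.href = items[i].getElementsByTagName("link")[0].textContent;
                link.textContent = items[i].getElementsByTagName("title")[0].textContent;
                container.appendChild(link);
            }
        });
    }

    loadFeedTab(urlRss, document.getElementById("essayTab"));  // "essayTab" is hypothetical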

February 18, 2015 04:17 PM

Ian Bicking

A Product Journal: Building for a Demo

I’ve been trying to work through a post on technology choices, as I had it in my mind that we should rewrite substantial portions of the product. We’ve just upped the team size to two, adding Donovan Preston, and it’s an opportunity to share in some of these decisions. And get rid of code that was desperately expedient. The server is only 400ish lines, with some significant copy-and-paste, so we’re not losing any big investment.

Now I wonder if part of the danger of a rewrite isn’t the effort, but that it’s an excuse to go heads-down and starve your situational awareness.

In other news there has been a major resignation at Mozilla. I’d read into it largely what Johnathan implies in his post: things seem to be on a good track, so he’s comfortable leaving. But the VP of Firefox can’t leave without some significant organizational impact. Now is an important time for me to be situationally aware, and for the product itself to show situational awareness. The technical underpinnings aren’t that relevant at this moment.

So instead, if only for a few days, I want to move back into expedient demoable product mode. Now is the time to explain the product to other people in Mozilla.

The choices this implies feel weird at times. What is most important? Security bugs? Hardly! It needs to demonstrate some things to different stakeholders:

  1. There are some technical parts that require demonstration. Can we freeze the DOM and produce something usable? Only an existence proof is really convincing. Can we do a login system? Of course! So I build out the DOM freezing and fix bugs in it, but I’m preparing to build a login system where you type in your email address. I’m sure you wouldn’t lie so we’ll just believe you are who you say you are.

  2. But I want to get to the interesting questions. Do we require a login for this system? If not, what can an anonymous user do? I don’t have an answer, but I want to engage people in the question. I think one of the best outcomes of a demo is having people think about these questions, offer up solutions and criticisms. If the demo makes everyone really impressed with how smart I am that is very self-gratifying, but it does not engage people with the product, and I want to build engagement. To ask a good question I do need to build enough of the context to clarify the question. I at least need fake logins.

  3. I’ve been getting design/user experience help from Bram Pitoyo too, and now we have a number of interesting mockups. More than we can implement in short order. I’m trying to figure out how to integrate these mockups into the demo itself — as simple as “also look at this idea we have”. We should maintain a similar style (colors, basic layout), so that someone can look at a mockup and use all the context that I’ve introduced from the live demo.

  4. So far I’ve put no effort into onboarding. A person who picks up the tool may have no idea how it is supposed to be used. Or maybe they would figure it out: I haven’t even thought it through. Since I know how it works, and I’m doing the demo, that’s okay. My in-person narration is the onboarding experience. But even if I’m trying to explain the product internally, I should recognize I’m cutting myself off from an organic growth of interest.

  5. There are other stakeholders I keep forgetting about. I need to speak to the Mozilla Mission. I think I have a good story to tell there, but it’s not the conventional wisdom of what it means to embody the mission. I see this as a tool of direct outward-facing individual empowerment, not the mediated power of federation, not the opting-out power of privacy, not the committee-mediated and developer driven power of standards.

  6. Another stakeholder: people who care about the Firefox brand and marketing our products. Right now the tool lacks any branding, and it would be inappropriate to deploy this as a branded product right now. But I can demo a branded product. There may also be room to experiment with a call to action, and to start a discussion about what that would mean. I shouldn’t be afraid to do it really badly, because that starts the conversation, and I’d rather attract the people who think deeply about these things than try to solve them myself.

So I’m off now on another iteration of really scrappy coding, along with some strategic fakery.

by Ian Bicking at February 18, 2015 06:00 AM

February 17, 2015

Dave Winer

A new tab on Scripting News

A picture named blog.png
This is so recursive, I can't tell you how confused I am, I can only imagine how confused others are. But it works, so here goes.

I just added a new tab to Scripting News, called Liveblog.

It is based on the RSS feed for my liveblog.

The liveblog, as its name suggests, is faster than Scripting News. It contains notes from my work, ideas that are too long for tweets, or that I want to come back to later.

The liveblog is more casual than Scripting News. Not sure what that says about Scripting News. I've always wanted it to be loose and friendly, but there you go. I am posting regularly to the liveblog now, so if you want to follow me, that should be on my home page too, it seems.

Hopefully at some point there will be a "singularity" event where it all collapses down to one thing. But for now, all my feeds meet up on scripting.com.

And yes, I plan to release the liveblog software. But it's a tricky bit, and I want to get it right before letting it out in the wild. It can be hard to change software after it's released, and I'm still learning a lot about this.

PS: It seems like today is a good day to take a screen shot of the Scripting News home page. Continuously updated since 1997.

PPS: Thanks to my friend Hugh MacLeod for the excellent cartoon in the right margin of this post, from the early days of blogging.

February 17, 2015 05:03 PM

John Udell

Automation for the people: The programmer's dilemma

"Use the left lane to take the Treasure Island exit," the voice on Luann's phone said. That didn't make sense. We've only lived in the Bay Area for a few months and had never driven across the Bay Bridge, but I'm still embarrassed to say I failed to override the robot. As we circled back around to continue across the bridge I thought I heard another voice. It sounded like Nick Carr saying, "I told you so!"

In "The Glass Cage," Carr considers the many ways in which our growing reliance on automation can erode our skill and judgement. As was true in the case of his earlier book, "The Shallows," which grew out of an Atlantic Monthly article with the sensational title "Is Google making us stupid?," "The Glass Cage" has been criticized as the antitechnology rant of a Luddite. 

To read this article in full or to leave a comment, please click here

by Jon Udell at February 17, 2015 11:00 AM

Dave Winer

myword.io on iPhone 6

A picture named mywordOnIPhone6.png

I think myword.io finally looks good on a phone.

February 17, 2015 12:15 AM

February 16, 2015

Dave Winer

The meaning of life

Yesterday I introduced two people on Facebook, people who I thought would very likely get off on each other's imaginations, humor, positive outlook. So of course I introduced them. I think there's real magic possible in these things.

There are billions of minds on our mote of dust suspended in a sunbeam. Very few of them ever get to engage. Maybe the secret of life is that there is a truth that can only be uncovered if two people connect and share something. They might not even know what it is. One person has a lock, the other a key.

Random things happen leading you to think of other things, and then you end up back where you started. Earlier in the day I had lost a huge file because I was editing it in a buggy app, still in development. Then, on Facebook, a question pops up. Maybe Google might have my lost OPML file in a cache somewhere. That led me on a train of thought back to 2002, when I published a post saying how cool it would be if Google would not only index OPML, but understand it. A very small amount of code would have been needed, but it isn't any code I could have written. They would have had to do it. So I did my best to explain why it would be a great idea. These things almost never happen, either no one is listening, or they think I'm too insignificant to matter (the fallacy of working at a huge company, they forget that people are the same size no matter where they work).

My whole career I've been asking for small favors from platform vendors, almost always being turned down, and then spending 5 or 10 years doing what they do just so I can put the teeny little bit of code in there that I wanted. Often the reason given is security, but it's usually not really that. It's being too busy to listen (I understand, me too, sometimes). Or a perception about the significance of the person doing the asking.

Maybe the two lovely people I introduced on Facebook, in collaboration, will figure out how to make something like this work. I think they're both the kind of people who would just say "WTF let's give it a try" if someone made a suggestion. I watch for that in people. It's the rarest quality, but they're the only people you can actually do stuff with!

BTW it would still be fucking awesome if Google would parse and display OPML. I have a great editor for it. And we need services to run at scale doing intelligent things with these structures. Oh the fun we could (still) have! I never give up.

PS: There were a few times people said yes to these kinds of things and incredible stuff did actually result. One was the NYT and permission to use their content for RSS. Another was Microsoft and XML-RPC, which led to the idea of websites having APIs. Something that's still shaking up the tech world today (though few people talk about it). NPR adopting podcasting is in the same class.

PPS: It happened in the inverse when Adam Curry tried repeatedly to get me to build a framework for podcasting in Frontier (long before it was called podcasting, btw). My mind was pretty closed to the idea, at first. But to both our credit, he persisted and eventually I heard what he was saying.

PPPS: A few days ago I wrote about awards for technology. Let's add another award. Best combination of ideas. Don't just reward people for being brilliant or brave, reward them for working with other people. The more diverse the interests, the more rewarding.

February 16, 2015 07:03 PM

February 13, 2015

Decyphering Glyph

According To...?

I believe that web browsers must start including the ultimate issuer in an always-visible user interface element.

You are viewing this website at glyph.twistedmatrix.com. Hopefully securely.

We trust that the math in the cryptographic operations protects our data from prying eyes. However, trusting that the math says the content is authentic and secure is useless unless you know who your computer is talking to. The HTTPS/TLS system identifies your interlocutor by their domain name.

In other words, you trust that these words come from me because glyph.twistedmatrix.com is reasonably associated with me. If the lock on your web browser’s title bar was next to the name stuff-glyph-says.stealing-your-credit-card.example.com, presumably you might be more skeptical that the content was legitimate.

But... the cryptographic primitives require a trust root - somebody that you “already trust” - meaning someone that your browser already knows about at the time it makes the request - to tell you that this site is indeed glyph.twistedmatrix.com. So you read these words as if they’re the world according to Glyph, but according to whom is it according to me?

If you click on some obscure buttons (in Safari and Firefox you click on the little lock; in Chrome you click on the lock, then “Connection”) you should see that my identity as glyph.twistedmatrix.com has been verified by “StartCom Class 1 Primary Intermediate Server CA” who was in turn verified by “StartCom Certification Authority”.
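
As a minimal sketch of what sits under that UI, assuming Node.js, you can walk the same chain programmatically; this just prints who vouches for whom:

    // Sketch: print a host's certificate chain, leaf first, root last.
    var tls = require("tls");

    var socket = tls.connect(443, "glyph.twistedmatrix.com", {
        servername: "glyph.twistedmatrix.com"
    }, function () {
        var cert = socket.getPeerCertificate(true);  // true: include the issuer chain
        while (cert) {
            console.log(cert.subject.CN + " -- verified by: " + cert.issuer.CN);
            if (cert.issuerCertificate === cert) break;  // the root signs itself
            cert = cert.issuerCertificate;
        }
        socket.end();
    });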

But if you do this, it only tells you about this one time. You could click on a link, and the issuer might change. It might be different for just one script on the page, and there’s basically no way to find out. There are more than 50 different organizations which could tell your browser to trust that this content is from me, several of whom have already been compromised. If you’re concerned about government surveillance, this list includes the governments of Hong Kong, Japan, France, the Netherlands, and Turkey, as well as many multinational corporations vulnerable to secret warrants from the USA.

Sometimes it’s perfectly valid to trust these issuers. If I’m visiting a website describing some social services provided to French citizens, it would of course be reasonable for that to be trusted according to the government of France. But if you’re reading an article on my website about secure communications technology, probably it shouldn’t be glyph.twistedmatrix.com brought to you by the China Internet Network Information Center.

Information security is all about the user having some expectation and then a suite of technology ensuring that that expectation is correctly met. If the user’s expectation of the system’s behavior is incorrect, then all the technological marvels in the world making sure that behavior is faithfully executed will not help their actual security at all. Without knowing the issuer though, it’s not clear to me what the user’s expectation is supposed to be about the lock icon.

The security authority system suffers from being a market for silver bullets. Secure websites are effectively resellers of the security offered to them by their certificate issuers; however, the customers are practically unable to even see the trade mark - the issuer name - of the certificate authority ultimately responsible for the integrity and confidentiality of their communications, so they have no information at all. The website itself also has next to no information, because the certificate authorities themselves are under no regulatory obligation to disclose or verify their security practices.

Without seeing the issuer, there’s no way for “issuer reputation” to be a selling point, which means there’s no market motivation for issuers to do a really good job securing their infrastructure. There’s no way for average users to notice if they are the victims of a targeted surveillance attack.

So please, browser vendors, consider making this information available to the general public so we can all start making informed decisions about who to trust.

by Glyph at February 13, 2015 08:52 PM

Dave Winer

Doors and extra bedrooms

Explaining myword.io yesterday, to Doc, in a tweet: "Ev built a nice house but didn't put a door on it. So I built one so I could have a door." This happens a lot in software, it turns out. Then I tweeted "This happened with Matt Mullenweg's house too. I needed a door, he wouldn't add it, so I had to build the whole house just to get the door."

That was necessarily abbreviated to fit in 140 chars. More accurately, I needed an extra bedroom for each post so the source code for the post could be stored alongside the rendered version.

Why the source code?

So what really happened without all the metaphors?

In the case of WordPress, let's say you want to make a great blog post editor, but you don't want to have to write a whole blogging system, or you want to let people use it with WordPress which is incredibly popular.

So the user creates a post, saves it, we render it, then send it to the blogging platform, WordPress. The author makes some changes, and it's still good, we just tell WordPress to update the post. But what happens if two weeks from now, the editor is long-closed, or the user is on another system, and they need to update the post? No can do. Because I don't have the source code for the post.

I can get the rendered version from WordPress, but we were working at a higher level. If we had had a place to put the source, alongside the rendered version, we would have been in business.
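To make that concrete, here's roughly the shape of what I wanted to store. The property names here are hypothetical -- this is not anything WordPress actually supports, which was the whole problem:

    var post = {
        title: "My post",
        renderedHtml: "<p>What WordPress stores and displays.</p>",
        source: "*My post* in the editor's own markup" // the extra bedroom
    };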

I asked, nicely I hope

I asked that WordPress allow me to store a bit of data along with the post. They did consider the idea, but explained that this might create security issues, so they couldn't do it. So I built a whole CMS, so I could have the editor I wanted. This post is written using that CMS.

The missing front-door on Medium

Re Ev's product, Medium -- I wanted a front door so I could hook their great rendering engine into my writing environment. I really shouldn't be doing what I'm doing with myword.io, they should just provide an easy way for other tools to hook in.

What I have is 90 percent of the effect people are looking for, and it's good for the web. An ecosystem could develop around it. I know Ev doesn't believe in software ecosystems. Fine. Let's see if he's right.

I know there are great CSS guys out there who don't work for him. myword.io is open source, so anything can happen. And maybe writers will help this part of the web stay free from Silicon Valley siloization. I know it's a dream, but I'm a dreamer.

February 13, 2015 06:18 PM

It's time to finish Twitter

Tweets are objects

Twitter calls them tweets, but in programming terms, they are objects.

Objects have attributes. If an object has a picture attached to it, that should not be represented as part of the text attribute of the object. There should be a separate attribute for each picture. Yet, totally for historic reasons, that's how it works in Twitter.

Note that in the API they are objects. It's just in the UI that they aren't. That seems wrong.
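Here's an abridged sketch of what I mean, roughly the shape the v1.1 API returns (abridged from memory, so details may vary) -- the link is a proper attribute under entities, separate from the text:

    {
        "text": "Check this out http://t.co/abc123",
        "entities": {
            "urls": [{
                "url": "http://t.co/abc123",
                "expanded_url": "http://example.com/post",
                "indices": [15, 33]
            }]
        }
    }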

Hyperlinks are mature technology, let's use them

When Twitter was invented we already had conventions for embedding addresses in text. They are called hyperlinks. I think Twitter's designers should decide if they want to have hyperlinks, and if so they should support them the way the web does. There was no good reason to make it work differently. I guess at the time Twitter had problems scaling their servers, so that's where their attention was focused. But those days are behind us.

Neaten up tweets, get rid of the dangling URLs

Seriously, the onboarding process of Twitter is made more difficult and confusing for people because of all those URLs dangling all over the place. When showing a non-technical user Twitter you have to tell them to ignore those things. Or try to explain why some tweets have two URLs and what they mean. It's 2015. It's been almost ten years. Wouldn't it be great to finish this product, really, before it turns ten?

Make images an attribute (actually in some cases I think they are). Or allow text to be hyperlinked. Even better: do both.

140 character limit? Give it up

And while you're at it, let us put more than 140 chars in a tweet. Yes I know the drill, it's supposed to be better if people are forced to be concise. To which I say "Have you looked at your timeline recently?" There are lots of ways to not be concise, and people are using all of them. Give it up. It's over. And btw there are plenty of important, good ideas that require more than 140.

Hashtags are intimidating, they could be easy, even fun

Another thing: It should be easy to ask Twitter what a given hashtag means. If necessary use Mechanical Turk to implement this. In this area, Twitter is mysterious and difficult for newbies and experts alike. These are easy fixes. At some point Twitter stopped improving. Use some of the resources you have to fix them! It's long overdue.

Hashtags are not some obscure technical artifact, as they were when Twitter started. They are now part of human language. Look at ads on buses if you don't believe me. I think baseballs have hashtags on them now!

If Twitter made hashtags easy people would cheer hooray! when Dick Costolo walked into a room. (Not that they don't already.) Hey you could even make the definitions funny.

February 13, 2015 04:16 PM

The Dude hates The Eagles

This kind of shit happens all the time.

February 13, 2015 03:52 PM

David Carr

We met on October 16 last year at a theater district bar in NYC.

It was a getting-to-know-you meeting. Lively conversation, even though our points of view were very far apart. I liked him. It's so sad that he's gone.

Whenever a Carr column popped up in my river I read it from beginning to end.

February 13, 2015 03:50 AM

February 12, 2015

Dave Winer

Something fun I whipped up

I have a new writing tool I'm working on and wanted it to be easy to create beautiful essay pages from it, like Medium. But Medium doesn't have an API. So I made an essay-viewer that has one.

http://myword.io/

Now, I'm not a great CSS hacker. That's obvious. But all that's needed is a framework to get started. That's the idea.

I'm going to keep iterating over it. I really like the way it feels. And if you have any ideas on how to improve it, please feel free to fork.

How to make an essay of your own

  1. It's not pretty.

  2. You have to type in a technical language called JSON. It makes sense and it's pretty simple, but it's also exacting. If you misplace a quote or a comma it will fail. If you've been thinking you want to learn to code, this would be a very easy way to start. (Another way of saying that programming isn't easy.)

  3. You must have a way of storing a file in a public place with an http:// address. I like to use the Public folder in Dropbox.

  4. Start with one of the example JSON files, for example this one.

  5. Edit it, and upload it. Suppose the new address is this: http://someaddress.com/my.json

  6. You can get myword.io to display it with this URL:
    http://myword.io/?url=http://someaddress.com/my.json

That's really all there is to it!
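If you want a preview of the shape of such a file, it's along these lines -- but the field names here are illustrative guesses, so copy the real ones from the example file in step 4:

    {
        "title": "My first essay",
        "text": "The body of the essay goes here."
    }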

An example that works

http://myword.io/?url=http://liveblog.co/misc/demo.json

February 12, 2015 06:58 PM

No one hears the anti-vaxxers

Note: Small spoiler re Boyhood movie at the very end.

I read a story on Joey de Villa's blog yesterday about an over-55 community in Florida where people have wild consensual sex all the time. Being over 55 myself I have to say it sounded interesting.

I'm getting ready for something different. I like programming, and I'll keep doing it for sure, as long as my right arm holds up (I'm having pretty radical RSI pain right now). But I'm not trying to change the world, or make a billion dollars. I have no expectation that I can change the world, and no great ideas on how to do it either. And I don't need more money than I have saved, I think. The stock market been berry berry good to me. (You have to be of a certain age to get that joke.)

Anyway the retirement community story said something. They've thought of everything by now. There isn't room to be innovative in lifestyle anymore. Back when I was a kid, that's what we thought we were doing. Creating something new in life. But the hippies are either gone or very old. Did you see what Joni Mitchell says about Bob Dylan? She looks like my grandmother! There's just a cranky old person being cranky. No the hippies didn't really break through in lifestyle. We all got old anyway.

Here's the thing: it's got to be even worse for young people with kids today. That feeling that everything is all figured out. There isn't any need for me to think up anything creative to do with my life. If I try, I just find out that someone thought of it long ago, and it has a price tag. So you look for little ways you can be different. Unfortunately not vaccinating kids turned out to be one of them. It could have been not eating cheese, something harmless. But it was vaccinating kids.

I thought Obama understood this, btw, and had ideas for how to involve all of us in making the world work better. If the US can elect a black president, can't we lead the world in making a difference, as people? Nah, he didn't get any big ideas until Year 6 of his presidency, when it's almost too late. And his big ideas didn't turn out to be very great either. Just that he could be an asshole to the Republicans and they might relate to that, appreciate that, respect that. It was true back at the beginning and he should have been doing that all along. And it would have been nice if each of us had a way to feel our life had meaning, that we were making a difference, and being creative, heard and understood -- by someone, anyone. I think that's the real crisis of our times, in the first world at least.

The mom in Boyhood said it best: "I just thought there would be more."

February 12, 2015 02:27 AM

February 10, 2015

Reinventing Business

Money is an Abstraction of Time

My friend Nancy sent me this in an email, and allowed me to post it here:
Money is basically an abstraction of time.  Time is real (to our physical, corporeal selves), money is imaginary.  Sort of a 'willing suspension of disbelief' construct that makes the material side of modern life function.  But at a certain point, money, like Newtonian physics, breaks down.  Because everyone knows that on a personal level, time is a relentless pursuer.  Material stuff is evanescent. Remember back when Adobe put all of the engineers' names on the splash screen of a project?  The business psychology of that was that ownership was important to creators.  But really, underlying that idea is something that looks like a straight across trade: time for time.  Chris told me recently that there is a saying that you don't die until after the last time your name is spoken. Offering a creative person money in exchange for being buried alive in a cubicle isn't any kind of trade.  It's more like indentured servitude.  Pile a bunch of non-disclosure/non-compete agreements onto that and it looks more like slavery...
I also see money as a social agreement. It is a way to transport value, to trade the value I produce for the value you produce. For this reason I find the ways that bankers and stock market traders come up with to steal this value particularly reprehensible; it only takes value out of the system without adding anything (caveat: not everything bankers and stock traders do falls under the "stealing" category, of course; many of them provide real value).

by noreply@blogger.com (Bruce Eckel) at February 10, 2015 10:29 PM

Dave Winer

We could have an open, user-controlled, ad-free Facebook

The other day my dear friend NakedJen was waking up to the power Facebook has because we use their system. She saw an endorsement by her friend, in the right margin on Facebook, of a product. It had her picture on it. She wondered if her friend had been paid for the endorsement, or even consulted. While I don't know for sure, I think the answer is "neither." Facebook has the right to do that. I'm sure it's in the user agreement. Which we all agree to, or we wouldn't be using Facebook.

The conversation continued.

I told her that I had stayed off Facebook for years because I didn't want to appear to endorse this system. But eventually the battle was lost, and my holding out wasn't accomplishing anything other than cutting me off from a social phenomenon. If I wanted to develop software in a post-Facebook world, if I didn't understand Facebook, my software would be missing an important historic precedent. Facebook exists, and nothing I can do can change that. So I might as well join the party, and I did, and no regrets.

Thing is, we've already ceded this kind of power to Google with video via YouTube. I heard a report on NPR on Sunday that was very depressing, an interview with a musician saying that YouTube had given her an agreement, take it or leave it, that said either you sign everything over to us, or you can't be part of YouTube. I didn't get the full story, just the gist. She said she thought the Internet was going to free us from the music industry. But it didn't do that. The music industry has rebooted, on the Internet. The money just flows to different bank accounts now.

You can see the process of ceding control happening right now, as essayists post their stories on Medium instead of their own blogs. There seems to be an assumption that you get more flow if you do this. I kind of doubt it. But even if there were more flow, you're ultimately forcing all of us to accept a deal that's probably going to be as bad as or worse than the one Facebook and YouTube have given us. Why? Because the lawyers and entrepreneurs of tech are learning, and they're getting better at grabbing, and users are not acting in their self-interest, any more than they were when Facebook and YouTube were taking over.

But even today it's not too late. Because economically and technically, we could reproduce what we have on Facebook on open systems, where everyone controls their own space, without signing over the kinds of rights that make users feel used. We saw some of the enthusiasm when Diaspora launched a few years ago, but they were college students, and weren't realistic about how to bootstrap such a system. You might say "But Zuck was a college student when he booted up Facebook." But that system got to grow slowly, and their mistakes weren't exposed so quickly because they were small when they happened. If you wanted to boot up something that would do what Facebook does, today, you'd have to be prepared for a much bigger user community, almost immediately.

But what is Facebook, really? Where is the value in it, and is that so hard to reproduce? Seems to me it's basically a discussion board with a network data structure called "the graph." It's FriendFeed 2.0 (the current Facebook was designed by the creator of FriendFeed). There doesn't appear to be any rocket science in there. Maybe there is and I'm missing something. (There's no mystery to a graph. When I was a math undergrad I studied graph theory and wrote software that processed graphs. Long before there was a Facebook.)
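To make the point concrete, a toy sketch -- "the graph" is just who's connected to whom:

    // A social graph as an adjacency list: each user maps to their friends.
    var graph = {
        "nakedjen": ["dave", "doc"],
        "dave": ["nakedjen", "doc"],
        "doc": ["dave"]
    };
    // Friends-of-friends is just two hops:
    var fof = graph["nakedjen"].map(function (name) { return graph[name]; });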

Economically, there are huge resources that can be marshalled by users pooling their money. This isn't speculative, the money does flow. And I don't think each Facebook user consumes all that much in the way of computing and storage resources. $100 a year perhaps? Would you be willing to pay that to control your online destiny? You probably pay half that each month to your ISP.

It seems to me that all that's needed is the will to do it. By a few developers, and by a few users, to get a bootstrap started.

I'm not advocating anything. This isn't a proposal of any kind. But I thought about this the other day and asked myself the question -- is it possible? And I decided it is possible. So I thought, being a blogger and a developer and a user, as I am, that I should say that.

February 10, 2015 03:05 PM

February 09, 2015

Tim Ferriss


Matt Mullenweg has been named one of PC World’s Top 50 People on the Web, Inc.com’s 30 under 30, and Business Week’s 25 Most Influential People on the Web.

In this episode, I attempt to get him drunk on tequila and make him curse.

Matt is most associated with a tool that powers more than 22% of the entire web: WordPress. Even if you aren’t into tech, there are many pages of “holy shit!” tips and resources in this episode.

Matt is a phenom of hyper-productivity and does A LOT with very little. But how? This conversation shares his best tools and tricks. From polyphasic sleep to Dvorak and looping music for flow, there’s something for everyone.

Last but not least, Matt is also the CEO of Automattic, which is valued at $1-billion+ and has a fully distributed team of 300+ employees around the world. I’m honored to be an advisor, and I’ve seen how they use incredibly unorthodox methods for jaw-dropping results.

But… he started off as a BBQ-chomping Texas boy with no aspirations of empire building. How on earth did he get here? Just listen and find out. It’s one hell of a story.


###

This episode is sponsored by Onnit. I have used Onnit products for years. If you look in my kitchen or in my garage you will find Alpha BRAIN, chewable melatonin (for resetting my clock while traveling), kettlebells, maces, battle ropes, and steel clubs. It sounds like a torture chamber, and it basically is. A torture chamber for self-improvement! Ah, the lovely pain. To see a list of my favorite pills, potions, and heavy tools, click here.

This podcast is also brought to you by 99Designs, the world’s largest marketplace of graphic designers. Did you know I used 99Designs to rapid prototype the cover for The 4-Hour Body? Here are some of the impressive results. Click this link and get a free $99 upgrade. Give it a test run.

QUESTION(S) OF THE DAY: What’s the best productivity tip or tool you’ve implemented in the last year? Please let me know in the comments.

Scroll below for links and show notes…

Enjoy!

Selected Links from the Episode

Show Notes

  • How WordPress started | The origin story
  • Defining “open source”
  • How WordPress beat their competition and how to beat the complicate-to-profit business model
  • The long-term outlook and core product characteristics that have empowered the growth of WordPress
  • Describing Automattic, and how it was founded with a purpose to kill spam
  • Experiments in polyphasic sleep, girlfriend complexities, and Dvorak typing
  • How Automattic differs from the average tech startup, and the challenges of a distributed workforce
  • Thoughts on where to draw the transparency line when running an open-source company
  • Delving into the secret benefits of tequila
  • Matt Mullenweg’s useful laptop and smartphone apps
  • Turning it around on Tim: Intermittent fasting and distilled water fasting?
  • Overworking vices, creating “de-loading” phases and saying “no” to meetings
  • Why we don’t care about the color of the bike shed
  • Musical skills that support coding and other leadership skills
  • Why Matt listens to familiar songs on loop when working
  • Hiring tips: Auditions at Automattic, why use them, and how they work
  • Matt’s view on top-grading
  • Most gifted books
  • Learning to love running
  • Answering Twitter questions: Bootstrapping vs. seed money if starting in 2015, picking a badass suit and last great purchase for less than $100
  • Packing tips
  • The story of losing an investor’s check (nearly a $400,000 mistake)
  • The story behind eating 104 Chicken McNuggets
  • First person to come to mind when you think “successful”?
  • Suggested investing books
  • The role WordPress will play in online content outside the browser (mobile apps, API, etc.) in the near future
  • Books and resources for the 20-year old entrepreneur looking to start a company
  • Stranded on a desert island? Albums and what else?
  • Advice for your 20-year old self?

People Mentioned

by Ian Robinson at February 09, 2015 10:22 PM

Dave Winer

Honoring developers and products

This began as an outline on my liveblog, but I felt the idea deserved more attention.

The Crunchies are a product of what I call the VC-based tech industry. But that's not the only tech industry. There's so much more going on here than bankers and advertising. I started making a list of awards I'd like to see, here it is, with some ideas.

Open format and protocol of the year

  • Past honorees would include HTTP and Bitcoin, as examples.
  • Hackathons please include these among your commercial sponsors' APIs. It's important to build around open formats and protocols too.

Hall of Fame

  • People or products that influenced all that came after. We have a lot of catching up to do here. For me, the big ones are the C programming language, the PDP-11 machine architecture, Unix, VisiCalc, 1-2-3, the Macintosh and IBM PC, Mosaic, Flickr, Twitter.
  • Like sporting halls of fame, we should also include leaders who made a difference, and journalists who covered our work intelligently and with care.

Giving back

  • Giving money to hospitals is great. But let's honor technologists who got rich and gave back to the ecosystem that produced the technology that made their work possible. This would be pure philanthropy, not embrace and extend. There is very little of this, but awarding it would create incentives for there to be more.

User trust

  • Which company gave its users the freedom to switch? The bigger the trust, the more we want to honor you.

Best commercial API

  • We hardly ever compare them. Last year I discovered there was a huge difference between Twitter's and Facebook's APIs.

February 09, 2015 06:05 PM

Jon Udell

Wiki creator reinvents collaboration, again

Almost 19 years ago, when I was Byte magazine's executive editor and Web developer, I received the following email:

From: Ward Cunningham
Subject: wiki
Date: May 22, 1996 at 11:23:10 AM PDT
To: Jon Udell
Jon -- So what do you think of wiki? I put it up a year and a half ago 
when HTML authoring tools were in short supply. I think it has held up 
pretty well, though I suppose its days are numbered. Still, there is 
something about it we call WikiNature that is in short supply on the net 
and in computers in general. Regards. -- Ward

What I thought when I saw that wiki was that maybe I should stay quiet about it. The Portland Pattern Repository was a collaborative effort to define the practices and principles that inform the agile software movement. And the wiki Ward mentioned -- the world's first! -- was the petri dish in which that culture grew. It was wide open. Anyone could edit a page. Those who did conversed eloquently and distilled a set of memorable patterns.


by Jon Udell at February 09, 2015 11:00 AM

Blue Sky on Mars

This Uncanny Valley of Voice Recognition

Ever since Star Trek first aired, we've held unreasonable expectations of computers. Not only do we expect them to work for free, but we expect them to listen to all of our problems.

Hey Siri, what’s the temp?
Calling “mom”.
No. Cancel.

Siri, cancel. Stop. Siri stop. Siri eat a dick.
Hello?
Hi mom! Uh, I’ve missed you!

We’ve reached the Uncanny Valley of voice recognition, and everyone except these fucking computers knows what I’m talking about.

Canniness

The Uncanny Valley is a term that originated from the computer animation industry. In 1992, while finishing A Bug’s Life, Pixar had to build a digital valley for Buzz Lightyear to drive his Ford® F-150™ pickup through on the way to the hospital so he could get a vasectomy. Pixar’s staff found that the valley looked uncanny, meaning it looked good, but not perfect. They ended up illustrating a crate of Campbell’s® Tomato Soup™ in the corner to make it feel a bit more canny.

The concept of The Uncanny Valley was born. If an emulation doesn’t quite match up to how a behavior works in the real world, humans can easily pick up on this discrepancy and they’ll get pissed off and start an angry hashtag on Twitter.

This never worked until now

Look, we all grew up with those ads in our Yahoo! Internet Life magazine subscriptions for that Dragon's Den voice recognition software thing that never fucking worked but they still inevitably got testimonials from lawyers in blue power shirts who beamed through their dorky microphone headsets while they happily dictated like a robot the winning litigation strategy for their client who nonetheless got caught soliciting a federal agent for counterfeit laundry detergent and also some sexual favors but they’ll get off because they “have a fraternity brother in Justice who can pull some strings”.

Yahoo! Internet Life

We laughed at that back then because we had some real dope inventions at the time: like, keyboards and shit. Have you tried one? You can put words on a screen real fast compared to flapping words through your oral meatflaps. It’s also beneficial while you’re writing your steamy 50 Shades of Grey fan fiction in your public library. I mean, to make it less creepy you could try whispering the words to your computer instead of loudly dictating them, but are you sure that doesn’t make it more creepy?

Voice recognition used to be a gimmick.

Things done changed

The last year or two have seen some real mass-market changes, though. Apple has Siri, Google has OK Google or something, Amazon has an entire standalone device in Echo, and Microsoft is pushing for voice control in Xbox One with Kinect.

The difference is that these things finally do some cool stuff. We’re not dictating litigation strategy to our secretaries; we’re interacting with our devices in real ways. It kinda blew my mind that I can walk into my living room, say “Xbox on”, and my Xbox turns on, my TV gets switched on, the input gets changed over to my Apple TV, and it’s all ready to watch by the time I reach my chair.

Voice intelligence

The problem is we’re at this uncanny valley. We want to talk to our devices like humans, but they still act like toddlers wearing headphones who only speak Portuguese or something.

If I’m playing music in the background, my Xbox has a tough time identifying what I’m saying. It’s not a mistake a human would readily make.

If you ever end up in bed with someone (congrats!) with both your iPhones plugged into the wall and one of you wakes up and asks “Hey Siri what’s the weather like today”, you now have two iPhones — in addition to any iPads lying around — dutifully responding at the same time. iPhones understand words, but they don’t understand you. It’s not a mistake a human would readily make.

When voice recognition works, it’s great, but when you have to repeat yourself or it just doesn’t understand you, the level of frustration feels much higher than other software. Why can’t you understand me, you dumb robot?

Words as UI

Part of this frustration is that the user interface itself is less standardized than the desktop or mobile device UI you're used to. Even the basic terminology can feel pretty inconsistent if you’re jumping back and forth between platforms.

Siri aims to be completely conversational: Do you think the freshman Congressman from California’s Twelfth deserved to sit on HUAC, and how did that impact his future relationship with J. Edgar?

Xbox One is basically an oral command line interface, of the form:

Xbox <verb> (direct object)

For example: Xbox, go to Settings. But this is not the case if it’s in “active listening” mode, in which case you drop the Xbox and attempt to address it conversationally (go to Settings). But you can’t really converse with it, because it’s functionally less capable than Siri or Google Now. The context switching is a little frustrating. On the other hand, since it's so cut-and-dry, there's less of an uncanny valley here because I don't personify my Xbox as much as I do Siri; my Xbox just responds to commands. Funny how a different voice UI here results in a totally different experience.

Amazon Echo

Amazon Echo’s UI is similar to Siri’s conversational form, although you’re almost always going to invoke it by saying Alexa (whereas you can bypass Hey Siri by holding down the Home button and talking normally — a beneficial side effect of having the device in-hand).

There are good reasons for all these inconsistencies — Xbox, for example, benefits from clear, directed dialogue because there are fewer functions you need once you’re sitting in front of a TV. But it’s these inconsistencies that are frustrating as you jump back and forth between devices. And we’re only going to scale this up, particularly at home, because again, when this all works it’s awesome. I want to control entire workflows in my home by voice: “hey, I’m heading out” might turn down my thermostat, turn off my lights, and check that my oven’s turned off.

It took decades before computing settled on the standard concepts of the GUI: the desktop metaphor, overlapping windows, scrollbars, and so on. Hopefully voice UI catches up and standardizes, too.

it’s gon b creepy

Voice recognition, if it ever crosses the chasm of the uncanny valley, is going to have to get much smarter. And I don’t mean preprogrammed jokes about where to hide a dead body, but our voice assistants are going to have to start learning about us. Building a relationship with us. Knowing us.

And that’s going to be creepy. Especially if we don't trust who's on the other end of the line. Maybe.

Her (2013)

I think the reason people liked Her (2013) so much was that it didn’t seem all that creepy. It seemed like you were gaining a friend. And it’s going to be weird at first, since it’s going to need to be always-on, always listening, and always learning from you. But if we can ever jump past this uncanny valley, that’s where we’ll basically build AI, for all intents and purposes, and we’re going to have a friend following us around. And it’s going to make life better for us.

Well, depending on which science fiction you watch. We could all end up depressed or die, too. Hug a fan of Black Mirror or Transcendence (2014) today.

We're probably still a long, long way from crossing the Valley to our utter doom or sublime utopia, since computers are hard and voice recognition is apparently really hard. (Or at least that's what I assume; I just do fake computer science like User.find() so I wouldn't know, myself.) We're going to have to deal with this uncanny valley in the meantime. That's a little frustrating, but hey, at least we don't have to wear boom microphone headsets anymore.

Exciting stuff is afoot.

February 09, 2015 12:00 AM

February 08, 2015

Dave Winer

Help with Twitter metadata?

Here's a link to my new liveblog software (still very much in development).

Try viewing that link in Twitter's card validator.

I see an error that isn't helping me figure out what's wrong.

ERROR: FetchError:exceeded 4.seconds to Portal.Pink-constructor-safecore while waiting for a response for the request, including retries (if applicable) (Card error)

If you have experience with Twitter card metadata, do you have any ideas about how I can get this working? I'd really like to have links look as good in Twitter as they do in Facebook. (That's a requirement, not a like.)

Thanks in advance!

I'm trying to think but nothing happens!

Update -- found the problem

Marco Fabbri confirmed the metadata was valid by copying it to another server, where it validated. He said to check whether I was getting a call from Twitterbot. His theory was that my Heroku server was blacklisted because of another app running on the same physical machine. This theory turned out not to be correct, but it led me to the fix.

  1. I was getting a request from Twitterbot, for /robots.txt.

  2. My little server app wasn't doing anything special for that file; it treated the request as coming from a Twitter user, so it tried to fetch its reader app, OPML file, etc., and got lost.

  3. Twitter got a timeout for the /robots.txt call.

  4. It barfed.

The fix.

  1. Add a special case for "/robots.txt" and return a 404.

Result -- twitter love!
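In code, the special case amounts to a few lines. A sketch using Node's plain http module (the real app's plumbing differs):

    var http = require('http');
    http.createServer(function (req, res) {
        if (req.url == '/robots.txt') { // don't treat Twitterbot's probe like a user request
            res.writeHead(404, {'Content-Type': 'text/plain'});
            res.end('Not found.');
            return;
        }
        // ...normal request handling goes here...
    }).listen(process.env.PORT || 1337);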

February 08, 2015 06:28 PM

February 07, 2015

Dave Winer

The unedited voice of a person

People use blogs primarily to discuss one question -- what is a blog? The discussion will continue as long as there are blogs.

It's no different from other media, all they ever talk about is what they are. We got dinged by the NY Times because all bloggers talked about at the DNC was other bloggers. But what were the reporters busy doing? Talking about other reporters, except when they were talking about bloggers.

Nothing wrong with it.

In the early days we joked that they were watching us watch them watch us watch them. And so on.

In 2003, when I was beginning my stint as a fellow at Berkman Center, since I was going to be doing stuff with blogs, I felt it necessary to start by explaining what makes a blog a blog, and I concluded it wasn't so much the form, although most blogs seem to follow a similar form, nor was it the content, rather it was the voice.

If it was one voice, unedited, not determined by group-think -- then it was a blog, no matter what form it took. If it was the result of group-think, with lots of ass-covering and offense avoiding, then it's not. Things like spelling and grammatical errors were okay, in fact they helped convince one that it was unedited. (Dogma 2000 expressed this very concisely.)

Do comments make it a blog? Does the lack of comments make it not a blog? Well actually, my opinion is different from many, but it still is my opinion that it does not follow that a blog must have comments, in fact, to the extent that comments interfere with the natural expression of the unedited voice of an individual, comments may act to make something not a blog.

We already had mail lists before we had blogs. The whole notion that blogs should evolve to become mail lists seems to waste the blogs. Comments are very much mail-list-like things. A few voices can drown out all others. The cool thing about blogs is that while they may be quiet, and it may be hard to find what you're looking for, at least you can say what you think without being shouted down. This makes it possible for unpopular ideas to be expressed. And if you know history, the most important ideas often are the unpopular ones.

Me, I like diversity of opinion. I learn from the extremes. You think evolution is a liberal plot? Okay, I disagree, but I think you should have the right to say it, and further you should have a place to say it. You think global warming is a lie? Speak your mind brother. You thought the war in Iraq was a bad idea? Thank god you had a place you could say that. That's what's important about blogs, not that people can comment on your ideas. As long as they can start their own blog, there will be no shortage of places to comment. What there is always a shortage of, however, is courage to say the exceptional thing, to be an individual, to stand up for your beliefs, even if they aren't popular.

I sat next to Steven Levy the other night at dinner in NY. He volunteered that in his whole career he had never written a word that wasn't approved of by someone else, until he started a blog. I applaud him for crossing the line. I give him a lot of credit for writing without a safety net. It really is different. Comments wouldn't make the difference, what makes the difference is standing alone, with your ideas out there, with no one else to fault for those ideas. They are your responsibility, and yours alone.

For me, the big rush came when I started publishing DaveNet essays in late 1994. I would revise and edit, for an hour maybe more, before hitting the Send button. Once I did that, there was no turning back. The idea was out there, with my name on it. All the disclaimers (I called the essays "Amusing rants from Dave Winer's desktop") wouldn't help, if the ideas were bad, they were mine. But if they were good, they were mine too. That's what makes something blog-like, imho.

Note: This is a re-run of a post from 2007. On-topic in light of the "blogging is dead" debate. This is what a blog is, imho. DW

February 07, 2015 04:34 PM

Semantic Programming

contenteditable="wow!"

In the last month, I've been catching up with where HTML5/CSS/JavaScript have gotten to, and wow, how did I miss the introduction of contenteditable="true" !!

I was reluctantly re-looking at the web as the provider of the graphical front-end after realising that it really did best fit my three main criteria for the UI: 1) independently threaded client side rendering; 2) wide adoption across many client devices; and 3) general expressiveness of possible UIs.

However, I was doing this reluctantly, as there are many aspects of the original 'post-back' model of web interactions that I have grown to detest, and I have generally felt that the JavaScript 'solutions' to fix this were like a sticky-tape-and-string hack on top of an out-of-date model for UI construction.

But most of all I dreaded the imagined process of building a credible WYSIWYG editing experience. 

Somehow I assumed that all of the tools that achieve this on the web today (Google Docs etc) were doing some kind of complex, proprietary rendering and overlays approach using JavaScript to create the impression of dynamically editing a document. I had imagined all sorts of horrors like JavaScript timers drawing the caret and removing it to give the impression of it blinking, etc.

Not for a moment did it occur to me that 'they' might have built WYSIWYG editing into the web! I started looking at all sorts of WYSIWYG plugins and libraries and unpicked the code in each, trying to learn the complex magic that underpinned all this imagined wizardry, until I realised that all the (recent!) such editors are relatively simple and rest on contenteditable="true".
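For anyone else who missed it, a minimal sketch of the feature those editors rest on:

    var editor = document.getElementById('editor'); // any ordinary div in the page
    editor.contentEditable = 'true'; // the browser now handles the caret, selection, and typing
    editor.addEventListener('input', function () {
        console.log(editor.innerHTML); // the edited markup, for free -- no JavaScript timers required
    });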

And, most importantly, this feature has been widely adopted and supported, so it really is a credible route for building an online content editor. So, now, despite the web's other imperfections, my attitude has switched 180 degrees and I am really enjoying building a prototype web-based Semprola editor!

Indeed, I now cannot imagine the horror of building the UI any other way!

by noreply@blogger.com (Oli) at February 07, 2015 02:50 PM

February 06, 2015

Biosimilarity

Why Synereo?

A new decentralized, distributed social network is emerging, and naturally people are curious. Gideon Rosenblatt asks a key question: Why Synereo instead of Facebook?


i provide an answer here in six basic points that cover architecture and information flow consequences for resiliency, autonomy, and privacy, as well as important aspects of the user experience, user compensation and the attention economy.


TL;DR: you are the network, not the product.

1) A distributed, decentralized architecture is more resilient against certain kinds of attacks: from a hacker compromising a centralized service and scooping up all the credit card data in a single database, to a government shutting down a service that provides information counter to the incumbent narrative. Clearly, this is not the architecture that Facebook enjoys or promotes. It can't, because its revenue model wouldn't work well in this setting.

2) Above this architecture, Synereo provides a qualitative notion of identity. This is not about identity as token, because that notion of identity doesn't serve individuals who have rich internal multifaceted presence. What is needed is to be able to reveal enough about oneself to participate legitimately in a conversation without necessarily revealing information that is either sensitive or irrelevant. Consider a self-governance or participatory democracy process like the budget games. Were these conducted in an online situation, people might be less likely to participate if the games revealed information about their political views that made them the target of unwanted attention. In the US such issues have included healthcare reform and gay marriage. On the other hand, a government representative needs assurances that the online participants are actually registered voters in their districts and not bots. Synereo allows users to reveal enough identity-related information to participate in a variety of processes without necessarily providing personally identifiable information. If someone is pursuing employment from an employer that is ok with them working remotely, then the employment application process doesn't have to reveal the candidate's residential address. Again, this approach doesn't fit with Facebook's revenue model.

3) Synereo makes strong guarantees about never letting Synereo code see user data in the clear (unless users give explicit permission). Despite these guarantees, Synereo provides a sophisticated search mechanism that allows users to search content throughout the slice of the network visible to them. Facebook just rolled out a graph search. Many people are very concerned about what this means about privacy. If you would like to know more about how Synereo achieves this, awesome! Please contact me and i will personally explain the mechanism in as much detail as you would like. Beyond this, Synereo is based on a mathematical model of decentralized and distributed computing that allows for the specification and enforcement of information flow policies. The language for these policies is exceedingly rich -- considerably richer than friends, friends-of-friends, etc. These policies can be autonomously developed and assembled into larger policies. The policies can be checked for desirable properties. This provides the basis for smart contracts that allow for just-in-time assembling of services from subcontractors. In other words, Synereo can help groups assemble temporarily to form an organization to complete a task or provide a service. This level of sophistication just doesn't exist in Facebook. It doesn't even exist in Ethereum. The Synereo white paper describes an example of information flow policies as it relates to public health and public safety. We believe Synereo's feature set makes it an ideal mechanism for participatory self-governance.

4) Synereo provides the attention economy model. This allows participants in the network to begin to get some of the reward for their participation. It's not just that the monetization of attention deployed in social networks is commonly estimated in the tens of billions of USD. Nor that Facebook users never see a penny of that. It's also that as a result of the shift in distribution mechanism the creative classes are under unbearable pressure. When a guitarist of the stature of David Torn gets 8 USD from Pandora for hundreds of thousands of plays of one of his tunes, making a living as a guitarist becomes nearly untenable. This is not a one-time occurrence. Read Zoe Keating's article about how YouTube is treating her as they roll out their new music service. This same scenario is playing out in journalism, photography, etc. The creative classes are under siege. The attention economy provides a direct way for creative people to earn compensation for their creative outpouring. Facebook simply doesn't have such a model, and it would directly undermine their existing revenue model.

5) Synereo provides users with a new level of control over what shows up in their feed. At this level of detail we can say that Synereo identifies two basic types of entities: ports and filters. A port can either be a source of information (like one of your friends from whom you receive posts, or an rss feed to which you are subscribed, or a mailing list to which you are subscribed, or a device on the Internet of things, or ...) or a sink for information (like one of your friends, or an rss feed to which you publish, or ... ). A filter is something applied to the stream of information flowing into or out of a port to further refine what goes into or comes out of the stream. Notice that when you combine a filter with a port you get a new port. This kind of 'algebra' of information flow components gives users a new toolkit for controlling what they see in their feed. It also has fairly natural UI metaphors that have a long history of success as a means of conveying the presentation of streams of events.
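A hypothetical sketch of that algebra, with names invented for illustration (Synereo's actual model is richer):

    // A port is anything that emits items; a filter is a predicate over items.
    function filterPort(port, keep) {
        return { // a filter combined with a port yields a new port
            subscribe: function (handler) {
                port.subscribe(function (item) {
                    if (keep(item)) handler(item);
                });
            }
        };
    }
    // e.g. a friend's stream minus anything tagged #football:
    // var quieter = filterPort(friendFeed, function (post) { return post.tags.indexOf('football') < 0; });

This leads us to the last point.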

6) Synereo provides new UI experiences that allow users to see more about the dynamics of information flow. As a simple example, Facebook organizes the posts of all your friends into a single interleaved stream. If you were to uninterleave this into several streams synced by timestamp, you would see how your friends' posts were related to each other in time. Picture in your mind's eye 5 timelines running left to right, one timeline for each of 5 friends. Each timeline provides an iconic representation of that friend's posts. When posts stack up in a near vertical line, it means that your friends are posting at the same time. This gives you a way to see into the temporal dynamics of the communications of your group of friends. The reason i chose 5 is because there are 5 lines in a single musical staff. Just as the vertical stacking of notes on a staff means 'play these notes at the same time' -- i.e., a chord -- chords in the 'score' representation of timelines allow people to see how people's actions are related in time. Any 6-year-old can learn to read the temporal dynamics off of a single staff. A good conductor can read, and hear in her mind, 7 staves. So the inborn capacity of the human mind for processing temporal dynamics is not even being scratched at the surface, let alone plumbed to its depths. Facebook does not actually have a vested interest in users being able to utilize that capacity. They have a vested interest in directing attention in a way that maximizes return for their investors.

by leithaus (noreply@blogger.com) at February 06, 2015 04:18 PM

February 05, 2015

Dave Winer

Why nodeStorage is a big deal

This is the story of nodeStorage.

In April last year I decided it was time for me to get my Twitter act together in my new JavaScript-based work environment. Back when I was working primarily in Frontier, and before the great breakup with Twitter and app developers, I had a pretty easy Twitter programming interface. I wanted the same thing for apps written in JavaScript in the browser.

It took a total of about two months from beginning to end to get it all working and to get a few apps built on top of it to prove that I had a complete interface.

Then I got interested in Facebook, and realized I'd have to do the same thing for it, and when I started I figured it would take about two months, the same amount of time I had spent on Twitter. Nope. It took two days. That's because Facebook had written a special library for browser-based JavaScript apps that hides all the details of connecting with Facebook from the browser.

This has value

At that point I realized that what I had in my glue for Twitter had value on its own. There was no other Node.js package that was as complete or easy. So I spent some time cleaning it up and adding S3-based storage (all apps need storage), and last month I released it as MIT-licensed open source.

That's nodeStorage.

Why it's a big deal

  1. It makes Twitter as easy to program in browser-based JavaScript as Facebook.

  2. It adds storage, which even Facebook doesn't offer.

It takes the Twitter API, which was significantly less easy to use than Facebook's, gives it parity, and adds an essential feature, making app development on top of the Twitter API incredibly easy and, most important, complete for app-building.
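The basic pattern, from the browser's side, looks something like this -- a generic sketch of the approach, not nodeStorage's actual API, with an invented endpoint name:

    // The browser app never touches OAuth secrets; it calls its own server,
    // which holds the keys and relays the call to Twitter.
    var request = new XMLHttpRequest();
    request.open('POST', 'https://myapp.example.com/tweet'); // hypothetical endpoint
    request.setRequestHeader('Content-Type', 'application/json');
    request.onload = function () { console.log(request.responseText); };
    request.send(JSON.stringify({status: "Hello from a browser app!"}));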

Now, I understand some people feel burned by Twitter, and don't want to risk building on its API, but nodeStorage takes a lot of the risk out of it. And I don't think today's Twitter is as concerned about app developers as the earlier version was.

Anyway, that's the story! If you're looking for an easy way to get started with the Twitter API and you can deploy a Node.js app, then nodeStorage is probably what you're looking for.

February 05, 2015 11:04 PM

February 04, 2015

Greg Linden

Quick links

What has caught my attention lately:
  • "Ads are often annoying ... [and] the practice of running annoying ads can cost more money than it earns" ([1] [2] [3])

  • Robot plays beer pong, but the real story is the clever bean bag robotic gripper using the "jamming phase transition of granular materials" ([1] [2] [3])

  • Good list of features a modern phone should have but does not ([1])

  • "At this point, Apple is basically an iPhone company with a few other side businesses ... The iPhone accounted for ... a staggering 69 percent ... of Apple's revenue." ([1])

  • "We were not building the phone for the customer — we were building it for Jeff [Bezos]" ([1] [2])

  • "One of the biggest problems in organizations is that the meeting is a tool that is used to diffuse responsibility" ([1] [2])

  • Pew poll on how opinions of US scientists differ from the US population, and public's perceptions of scientists ([1])

  • Pair a "brash, young scientist" with a "wiser, older scientist" to maximize innovation ([1] [2] [3])

  • Google Earth Pro is now free, lets you get high res stills and movies of anywhere on the planet ([1] [2])

  • People told a placebo was "expensive" had twice the improvement as measured by physical tests and brain scans ([1])

  • Blind men successfully train themselves to "see" using echolocation, and brain scans determine that they are using the otherwise unused visual centers of their brains to do so ([1] [2] [3] [4] [5])

  • Rather than modeling crowds with attraction and repulsion between agents, only avoiding anticipated collisions behaves closer to real humans ([1])

  • Xkcd comic: "I can't wait for the day when all my stupid computer knowledge is obsolete" ([1])

  • Xkcd What If: "Getting to space is easy. The problem is staying there." ([1])

by Greg Linden (noreply@blogger.com) at February 04, 2015 06:29 PM

Lambda the Ultimate

A theory of changes for higher-order languages — incrementalizing λ-calculi by static differentiation

The project Incremental λ-Calculus is just starting (compared to more mature approaches like self-adjusting computation), with a first publication last year.

A theory of changes for higher-order languages — incrementalizing λ-calculi by static differentiation
Paolo Giarrusso, Yufei Cai, Tillmann Rendel, and Klaus Ostermann. 2014

If the result of an expensive computation is invalidated by a small change to the input, the old result should be updated incrementally instead of reexecuting the whole computation. We incrementalize programs through their derivative. A derivative maps changes in the program’s input directly to changes in the program’s output, without reexecuting the original program. We present a program transformation taking programs to their derivatives, which is fully static and automatic, supports first-class functions, and produces derivatives amenable to standard optimization.

We prove the program transformation correct in Agda for a family of simply-typed λ-calculi, parameterized by base types and primitives. A precise interface specifies what is required to incrementalize the chosen primitives.

We investigate performance by a case study: we implement the program transformation in Scala, as a plugin, and improve the performance of a nontrivial program by orders of magnitude.

I like the nice dependent types: a key idea of this work is that the "diffs" possible from a value v do not live in some common type diff(T), but rather in a value-dependent type diff(v). Intuitively, the empty list and a non-empty list have fairly different types of possible changes. This makes change-merging and change-producing operations total, and allows us to give them a nice operational theory. Good design, through types.
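Reconstructed from the abstract (so this notation may differ from the paper's), the derivative's correctness property reads roughly:

    % applying the change computed by the derivative to the old output
    % agrees with rerunning the program on the changed input
    f\,(a \oplus da) \;=\; (f\,a) \oplus (\mathit{derive}\,f\,a\,da)

where da lives in the value-dependent type diff(a).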

(The program transformation seems related to the program-level parametricity transformation. Parametricity abstracts over equality justifications; differentiation, over small differences.)

February 04, 2015 10:00 AM

February 03, 2015

Dave Winer

I have time in SF today

There wasn't much response to this post, so I won't be doing the office hours this afternoon in the city. Thanks to those who did respond. I'm always happy to look at products created by people who read this site, so please feel free to send me links via email. Thanks!

I'm thinking of holding informal "office hours" at a coffee place south of market this afternoon. If you have an interest, we could talk about your development project (esp if it's JavaScript) or talk about open formats and protocols, or my various projects, or other tech stuff. Thinking mainly of Scripting News readers. And please no trolls. If you have an interest, send me an email -- dave.winer@gmail.com.

February 03, 2015 04:54 PM

February 02, 2015

Blaine Buxton

Why Smalltalk is the productivity king

I've been thinking about why I'm so much more productive in Smalltalk than in any other language, because I'm curious whether you could bring some of that productivity to other languages. So, what makes Smalltalk so special?

  • Incremental compilation. There is no cognitive drift. Compilation happens at the method level when one saves the code. It's automatically linked in and can be executed. Smalltalkers enjoy programming in the debugger and changing code in a running program. The concept of having to restart an entire application is foreign to a Smalltalker. The application is always alive and running. In other languages, you code while the application is not running. Programming a live application is an amazing experience. I'm shocked that it's hard to find a language that supports it. Java running the OSGi framework is the only example I can think of. But, one still has to compile a bundle (which is larger than a method).
  • Stored application state. Smalltalkers call it the image. At any point in time, you can save the entire state of the application even with a debugger open. I've saved my image at the end of the day so that I could be at that exact moment the next morning. I've also used it to share a problem that I'm having with another developer so they can see the exact state. It takes less than a second to bring up an image. It has the current running state and compiled code. One never spends time waiting for compiles or applications to start up.
  • Self contained. All of the source code is accessible inside the image. One can change any of it at any time. Every tool is written in Smalltalk. If one doesn't like how something works, one can change it. If one wants to add a feature, one is empowered to. Since everything is open, one can even change how the language works. It's programming without constraints. The original refactoring tools were written in Smalltalk.
  • Freedom from files. Smalltalk stores the code in its own database. The shackles of the file system make compilation and version control trickier. Smalltalk can incrementally index code to make searches quick and efficient. The structure of the code is not forced to fit into the file system mold.
Now, the question one has to ask is why don't we have these features in languages that we use now? Personally, I would love to be able to keep my application running and change the code as it runs. I just want to end productivity lost to application restarts and compilations.

by Blaine Buxton (noreply@blogger.com) at February 02, 2015 09:21 PM

Dave Winer

To Seahawks fans re 12th Man

Seahawks fans -- remember the 12th man concept?

That means there is no "they."

Maybe it was a dumb call. It's a test. Do you own it, or do you complain like a mere fan? If you're on board, there's always next year.

Spoken as a Mets/Knicks fan, with no illusions about what that means.

February 02, 2015 03:20 PM

Tim Ferriss


In this episode, I interview the one and only Arnold Schwarzenegger… at his kitchen table.

First off, he wants to invite you to LA to blow sh*t up with him in person. Seriously. Here’s how.

In our conversation, we dig into lessons learned, routines, favorite books, and much more, including many stories I’ve never heard anywhere else.  I’m also giving away amazing goodies for this episode, so be sure to read this entire post.

As a starting point, we cover:

  • The Art of Psychological Warfare, and How Arnold Uses It to Win
  • How Twins Became His Most Lucrative Movie (?!?)
  • Mailing Cow Balls to Politicians
  • How Arnold Made Millions — Fresh Off The Boat — BEFORE His Acting Career Took Off
  • How Arnold Used Meditation For One Year To Reset His Brain
  • And Much More…



I WANT TO GIVE YOU GOODIES:

1) A signed copy of Arnold’s autobiography, Total Recall, personalized for you by Arnold himself.

2) A roundtrip ticket anywhere in the world Continental flies, $1,000 USD cold hard cash, or a long dinner with me in SF (and a flight from anywhere in the domestic US).  Pick one of the three.

To get both 1 and 2, all you have to do is this:

1) Promote the hell out of this episode this week, driving clicks to the iTunes page (ideal) for my podcast, this direct streaming link, or this blog post. If helpful, the shorter link fourhourworkweek.com/arnold forwards to this blog post.

2) Leave a comment on this post telling me what you did (including anything quantifiable), no later than this Friday, Feb 6, at 6pm PT. Comments must be submitted by 6pm PT. It’s OK if they’re in moderation and don’t appear live before 6pm. Note: You must include #arnoldpod at the top of your comment to be considered! 

3) Within 7 days, I and my panel of magic elves will select the winner: he or she who describes in their comment how they drove the most downloads/listens.

4) That’s it! Remember: Deadline is 6pm PT this Friday, Feb 6.  No extensions.

5) Of course, void where prohibited, no purchase required, you must be over 21, no minotaurs, etc.

###

This episode is sponsored by Onnit. I have used Onnit products for years. If you look in my kitchen or in my garage, you will find Alpha BRAIN, chewable melatonin (for resetting my clock while traveling), kettlebells, maces, battle ropes, and steel clubs. It sounds like a torture chamber, and it basically is. A torture chamber for self-improvement! Ah, the lovely pain. To see a list of my favorite pills, potions, and heavy tools, click here.

This podcast is also brought to you by 99Designs, the world's largest marketplace of graphic designers. Did you know I used 99Designs to rapid-prototype the cover for The 4-Hour Body? Here are some of the impressive results. Click this link and get a free $99 upgrade. Give it a test run.

Scroll below for links and show notes…

Enjoy!


by Ian Robinson at February 02, 2015 10:20 AM

February 01, 2015

Giles Bowkett

Why Panda Strike Wrote the Fastest JSON Schema Validator for Node.js

Update: Another project is even faster!

After reading this post, you will know:

  • why those who do not understand HTTP are doomed to re-implement it on top of itself
  • why APIs need schemas, and what JSON Schema brings to them
  • how JSON Schema compares to Joi's proprietary format
  • why JSCK is the fastest JSON Schema validator for Node.js

Because this is a very long blog post, I've followed the GitHub README convention of making every header a link.

Those who do not understand HTTP are doomed to re-implement it on top of itself


Not everybody understands HTTP correctly. For instance, consider the /chunked_upload endpoint in the Dropbox API:

Uploads large files to Dropbox in multiple chunks. Also has the ability to resume if the upload is interrupted. This allows for uploads larger than the /files_put maximum of 150 MB.

Since this is an alternative to /files_put, you might wonder what the deal is with /files_put.

Uploads a file using PUT semantics. Note that this call goes to api-content.dropbox.com instead of api.dropbox.com.

The preferred HTTP method for this call is PUT. For compatibility with browser environments, the POST HTTP method is also recognized.

To be fair to Dropbox, "for compatibility with browser environments" refers to the fact that, of the people I previously mentioned - the ones who do not understand HTTP - many have day jobs where they implement the three major browsers. I think "for compatibility with browser environments" also refers to the related fact that the three major browsers often implement HTTP incorrectly. Over the past 20 years, many people have noticed that their lives would be less stressful if the people who implemented the major browsers understood the standards they were implementing.

Consider HTTP Basic Auth. It's good enough for the GitHub API. Tons of people are perfectly happy to use it on the back end. But nobody uses it on the front end, because browsers built a totally unnecessary restriction into the model - namely, a hideous and unprofessional user experience. Consequently, people have been manually rebuilding their own branded, styled, and usable equivalents to Basic Auth for almost every app, ever since the Web began.
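
On the wire, Basic Auth could hardly be simpler: one request header carrying a base64-encoded "user:password" pair. A minimal sketch, assuming a modern Node with global fetch, and with hypothetical credentials:

// HTTP Basic Auth against the GitHub API: one Authorization header.
const user = "octocat";      // hypothetical
const password = "hunter2";  // hypothetical
const auth = Buffer.from(user + ":" + password).toString("base64");

fetch("https://api.github.com/user", {
  headers: { Authorization: "Basic " + auth }
}).then(function (res) {
  console.log(res.status); // 200 when the credentials are accepted
});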

By pushing authentication towards the front end and away from an otherwise perfectly viable aspect of the fundamental protocol, browser vendors encouraged PHP developers to handle cryptographic issues, and discouraged HTTP server developers from doing so. This was perhaps not the most responsible move they could have made. Also, the total dollar value of the effort expended to re-implement HTTP Basic Auth on top of HTTP, in countless instances, over the course of twenty years, is probably an immense amount of money.

Returning to Dropbox, consider this part here again:

Uploads large files to Dropbox in multiple chunks. Also has the ability to resume if the upload is interrupted. This allows for uploads larger than the /files_put maximum of 150 MB.

Compare that to the Accept-Ranges header from the HTTP spec, which lets a server advertise the byte ranges it will accept.

One use case for this header is a chunked upload. Your server tells you the acceptable range of bytes to send along, your client sends the appropriate range of bytes, and you thereby chunk your upload.

Dropbox decided to take exactly this approach, with the caveat that the Dropbox API communicates an acceptable range of bytes using a JSON payload instead of an HTTP header.

Typical usage:

  • Send a PUT request to /chunked_upload with the first chunk of the file without setting upload_id, and receive an upload_id in return.
  • Repeatedly PUT subsequent chunks using the upload_id to identify the upload in progress and an offset representing the number of bytes transferred so far.
  • After each chunk has been uploaded, the server returns a new offset representing the total amount transferred.
  • After the last chunk, POST to /commit_chunked_upload to complete the upload.
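
In code, that flow might look something like the following sketch, assuming a modern Node with global fetch and an OAuth token; error handling and the final /commit_chunked_upload POST are omitted:

const CHUNK = 4 * 1024 * 1024; // 4 MB per request

async function chunkedUpload(buffer, token) {
  let uploadId = null;
  let offset = 0;
  while (offset < buffer.length) {
    // The first request has no upload_id; the server assigns one.
    const qs = uploadId ? "?upload_id=" + uploadId + "&offset=" + offset : "";
    const res = await fetch("https://api-content.dropbox.com/1/chunked_upload" + qs, {
      method: "PUT",
      headers: { Authorization: "Bearer " + token },
      body: buffer.subarray(offset, offset + CHUNK)
    });
    const json = await res.json();
    uploadId = json.upload_id;
    offset = json.offset; // total bytes the server has received so far
  }
  return uploadId; // then POST /commit_chunked_upload to finish
}

Note that the upload_id and offset travel in the query string and the JSON response body: exactly the bookkeeping that Range-style headers already provide.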

Google Maps does something similar with its API. It differs from the Dropbox approach in that, instead of an endpoint, it uses a CGI query parameter. But Google Maps went a little further than Dropbox here. They decided that ignoring a perfectly good HTTP header was not good enough, and instead went so far as to invent new HTTP headers which serve the exact same purpose:

To initiate a resumable upload, make a POST or PUT request to the method's /upload URI, including an uploadType=resumable parameter:

POST https://www.googleapis.com/upload/mapsengine/v1/rasters/{asset_id}/files
?filename={filename}
&uploadType=resumable

For this initiating request, the body is either empty or it contains the metadata only; you'll transfer the actual contents of the file you want to upload in subsequent requests.

Use the following HTTP headers with the initial request:

  • X-Upload-Content-Type. Set to the media MIME type of the upload data to be transferred in subsequent requests.
  • X-Upload-Content-Length. Set to the number of bytes of upload data to be transferred in subsequent requests. If the length is unknown at the time of this request, you can omit this header.
  • Content-Length. Set to the number of bytes provided in the body of this initial request. Not required if you are using chunked transfer encoding.

It's possible that the engineers at Google and Dropbox know some limitation of Accept-Ranges that I don't. They're great companies, of course. But it's also possible they just don't know what they're doing, and that's my assumption here. If you've ever been to Silicon Valley and met some of these people, you're probably already assuming the same thing. Hiring great engineers is very difficult, even for companies like Google and Dropbox. Netflix faces terrific scaling challenges, and its engineers are still only human.


Anyway, combine this with the decades-running example of HTTP Basic Auth, and it becomes painfully obvious that those who do not understand HTTP are doomed to re-implement it on top of itself.

If you're a developer who understands HTTP, you've probably seen many similar examples already. If not, trust me: they're out there. And this widespread propagation of HTTP-illiterate APIs imposes unnecessary and expensive problems in scaling, maintenance, and technical debt.

One example: you should version with the Accept header, not your URI, because:

Tying your clients into a pre-set understanding of URIs tightly couples the client implementation to the server; in practice, this makes your interface fragile, because any change can inadvertently break things, and people tend to like to change URIs over time.
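
In practice, Accept-based versioning can look something like this sketch, using Express and hypothetical vendor media types (none of this comes from the original post):

var express = require("express");
var app = express();

app.get("/widgets", function (req, res) {
  // Clients ask for a version via the Accept header, e.g.
  //   Accept: application/vnd.example.v2+json
  // The URI stays stable while the representation evolves.
  if (req.accepts("application/vnd.example.v2+json")) {
    res.type("application/vnd.example.v2+json");
    res.json({ widgets: [], version: 2 });
  } else {
    res.type("application/vnd.example.v1+json");
    res.json({ widgets: [] });
  }
});

app.listen(3000);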

But this opens up some broader questions about APIs, so let's take a step back for a second.

APIs and JSON Schema


If you're working on a modern web app, with the usual message queues and microservices, you're working on a distributed system.

Not long ago, a company had a bug in their app, which was a modern web app, with the usual message queues and microservices. In other words, they had a bug in their distributed system. Attempts to debug the issue turned into meetings to figure out how to debug the issue. The meetings grew bigger and bigger, bringing in more and more developers, until somebody finally discovered that one microservice was passing invalid data to another microservice.

So a Panda Strike developer told this company about JSON Schema.

Distributed systems often use schemas to prevent small bugs in data transmission from metastasizing into paralyzing mysteries or dangerous security failures. The Rails and Rubygems YAML bugs of 2013 provide a particularly alarming example of how badly things can go wrong when a distributed system's input is not type-safe. Rails used an attr_accessible/attr_protected system for most of its existence - at least as early as 2005 - but switched to its new "strong parameters" system with the release of Rails 4 in 2013.

Here's some "strong parameters" code:
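
(The snippet is easiest to picture as a Rails 4 controller along these lines; this is a sketch with hypothetical model and action names, and only the require/permit line below is quoted from the post.)

# Hypothetical controller and model names.
class SubscriptionsController < ApplicationController
  def create
    # Mass assignment only ever sees the whitelisted attributes.
    subscription = Subscription.create!(subscription_params)
    render json: subscription
  end

  private

  def subscription_params
    params.require(:email).permit(:first_name, :last_name, :shoe_size)
  end
end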

This line in particular stands out as an odd choice for a line of code in a controller:

params.require(:email).permit(:first_name, :last_name, :shoe_size)

With verbs like require and permit, this is basically a half-assed, bolted-on implementation of a schema. It's a document, written in Ruby for some insane reason, located in a controller file for some even more insane reason, which articulates what data's required, and what data's permitted. That's a schema. attr_accessible and attr_protected served a similar purpose more crudely - the one defining a whitelist, the other a blacklist.

In Rails 3, you defined your schema with attr_accessible, which lived in the model. In Rails 4, you use "strong parameters," which go in the controller. (In fact, I believe most Rails developers today define their schema in Ruby twice - via "strong parameters," for input, and via ActiveModel::Serializer, for output.) When you see people struggling to figure out where to shoehorn some functionality into their system, it usually means they haven't figured out what that functionality is.

But we know it's a schema. So we can make more educated decisions about where to put it. In my opinion, whether you're using Rails or any other technology, you should solve this problem by providing a schema for your API, using the JSON Schema standard. Don't put schema-based input-filtering in your controller or your model, because data which fails to conform to the schema should never even reach application code in the first place.
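
To make that concrete, here is a sketch of schema checking at the edge, using Express and tv4 (a validator discussed later in this post); the route and schema are hypothetical:

var express = require("express");
var tv4 = require("tv4");

var userSchema = {
  type: "object",
  properties: { first_name: { type: "string" } },
  required: ["first_name"]
};

function validate(schema) {
  return function (req, res, next) {
    if (tv4.validate(req.body, schema)) return next();
    // Non-conforming data is rejected before any application code runs.
    res.status(400).json({ error: tv4.error.message });
  };
}

var app = express();
app.use(express.json()); // use body-parser on older Express versions

app.post("/users", validate(userSchema), function (req, res) {
  res.status(201).json(req.body); // only schema-conforming data gets here
});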

There's a good reason that schemas have been part of distributed systems for decades. A schema formalizes your API, making life much easier for your API consumers - which realistically includes not only all your client developers, but also you yourself, and all your company's developers as well.

JSON Schema is great for this. JSON Schema provides a thorough and extensible vocabulary for defining the data your API can use. With it, any developer can very easily determine if their data's legit, without first swamping your servers in useless requests. JSON Schema's on draft 4, and draft 5 is being discussed. From draft 3 onwards, there's an automated test suite which anyone can use to validate their validators; JSON Schema is in fact itself a JSON schema which complies with JSON Schema.

Here's a trivial JSON Schema schema in CSON, which is just CoffeeScript JSON:
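
(A minimal sketch; the fields are hypothetical.)

# A draft 4 schema in CSON: an object with one required string property.
type: "object"
properties:
  name:
    type: "string"
required: ["name"]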

One really astonishing benefit of JSON Schema is that it makes it possible to create libraries which auto-generate API clients from JSON Schema definitions. Panda Strike has one such library, called Patchboard, which we've had terrific results with, and which I hope to blog about in future. Heroku also has a similar technology, written in Ruby, although their documentation contains a funny error:

We’ve also seen interest in this toolchain from API developers outside of Heroku, for example [reference customer]. We’d love to see more external adoption of this toolkit and welcome discussion and feedback about it.

That's an actual quote. Typos aside, JSON Schema makes life easier for ops at scale, both in Panda Strike's experience, and apparently in Heroku's experience as well.

JSON Schema vs Joi's proprietary format


However, although JSON Schema's got an active developer and user community, Walmart Labs has also had significant results with their Joi project, which leverages the benefits of an API schema, but defines that schema in JavaScript rather than JSON. Here's an example:
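
(A minimal sketch, with hypothetical fields and the callback-style validate API Joi had at the time.)

var Joi = require("joi");

// The schema is plain JavaScript, not JSON:
var schema = Joi.object().keys({
  username: Joi.string().alphanum().min(3).max(30).required(),
  birthyear: Joi.number().integer().min(1900).max(2013)
});

Joi.validate({ username: "ada", birthyear: 1994 }, schema, function (err, value) {
  // err is null when the value conforms to the schema
});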

As part of the Hapi framework, Joi apparently powered 2013 Black Friday traffic for Walmart very successfully.

Hapi was able to handle all of Walmart mobile Black Friday traffic with about 10 CPU cores and 28Gb RAM (of course we used more but they were sitting idle at 0.75% load most of the time). This is mind blowing traffic going through VERY little resources.

(The Joi developers haven't explicitly stated what year this was, but my guess is 2013, because this quote was available before Black Friday this past year. Likewise, we don't know exactly how many requests they're talking about here, but it's pretty reasonable to assume "mind-blowing traffic" means a lot of traffic. And it's pretty reasonable to assume they were happy with Joi on Black Friday 2014 as well.)

I love this success story because it validates the general strategy of schema validation with APIs. But at the same time, Joi's developers aren't fans of JSON Schema.

On json-schema - we don't like it. It is hard to read, write, and maintain. It also doesn't support some of the relationships joi supports. We have no intention of supporting it. However, hapi will soon allow you to use whatever you want.

At Panda Strike, we haven't really had these problems, and JSON Schema has a couple advantages that Joi's custom format lacks.

The most important advantage: multi-language support. JSON's universality is quickly making it the default data language for HTTP, which is the default data transport for more or less everything in the world built after 1995. Defining your API schema in JSON means you can consume and validate it in any language you wish.

It might even be fair to leave off the "JS" and call it ON Schema, because in practice, JSON Schema validators will often allow you to pass them an object in their native languages. Here's a Ruby example:
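
(A sketch using the Ruby json-schema gem; the schema fields are hypothetical.)

require "json-schema"

# The schema is an ordinary Ruby hash with string keys.
schema = {
  "type" => "object",
  "properties" => {
    "name" => { "type" => "string" }
  },
  "required" => ["name"]
}

JSON::Validator.validate(schema, { "name" => "Ada" }) # => true
JSON::Validator.validate(schema, { "name" => 42 })    # => false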

This was not JSON; this was Ruby. In this example, you still have to use strings, but it'd be easy to circumvent that, in the classic Rails way, with the ActiveSupport library. Similar Python examples also exist. If you've built something with Python and JSON Schema, and you decide to rebuild in Ruby, you won't have to port the schema.

Crazy example, but it's equally true for Clojure, Go, or Node.js. And it's not at all difficult to imagine that a company might port services from Python or Ruby to Clojure, Go, or Node, especially if speed's essential for those services. At a certain point in a project's lifecycle, it's actually quite common to isolate some very specific piece of your system for a performance boost, and to rewrite some important slice of your app as a microservice, with a new focus on speed and scalability. Because of this, it makes a lot of sense to decouple an API's schema from the implementation language for any particular service which uses the API.

JSON Schema's universality makes it portable in a way that Joi's pure JavaScript schemas cannot achieve. (This is also true for the half-implemented pure-Ruby schemas buried inside Rails's "strong parameters" system.)

Another fun use case for JSON Schema: describing valid config files for any service written in any language. This might be annoying for those of you who prefer writing your config files in Ruby, or Clojure, or whatever language you prefer, but it has a lot of practical utility. The most obvious argument for JSON Schema is that it's a standard, which has a lot of inherent benefits, but the free bonus prize is that it's built on top of an essentially universal data description language.

And one final quibble with Joi: it throws some random, miscellaneous text munging into the mix, which doesn't make perfect sense as part of a schema validation and definition library.

JSCK: Fast as fuck


If it seems like I'm picking on Joi, there's a reason. Panda Strike's written a very fast JSON Schema validator, and in terms of performance, Joi is its only serious competitor.

Discussing a blog post on cosmicrealms.com which benchmarked JSON Schema validators and found Joi to be too slow, a member of the Joi community said this:

Joi is actually a lot faster, from what I can tell, than any json schema validator. I question the above blog's benchmark and wonder if they were creating the joi schema as part of the iteration (which would be slower than creating it as setup).

The benchmark in question did make exactly that mistake in the case of JSV, one of the earliest JSON Schema validators for Node.js. I know this because Panda Strike built another of the very earliest JSON Schema validators for Node. It's called JSCK, and we've been benchmarking JSCK against every other Node.js JSON Schema validator we can find. Not only is it easily the fastest option available, in some cases it is faster by multiple orders of magnitude.

We initially thought that JSV was one of these cases, but we double-checked to be sure, and it turns out that the JSV README encourages the mistake of re-creating the schema on every iteration, as opposed to only during setup. We had thought JSCK was about 10,000 times faster than JSV, but when we corrected for this, we found that JSCK was only about 100 times faster.

(I filed a pull request to make the JSV README clearer, to prevent similar misunderstandings, but the project appears to be abandoned.)

So, indeed, the Cosmic Realms benchmarks do under-represent JSV's speed in this way, which means it's possible they under-represent Joi's speed in the same way. I'm not actually sure. I hope to investigate in future, and I go into some relevant numbers further down in this blog post.

However, this statement seems very unlikely to me:

Joi is actually a lot faster, from what I can tell, than any json schema validator.

It is not impossible that Joi might turn out to be a few fractions of a millisecond faster than JSCK, under certain conditions, but Joi is almost definitely not "a lot faster" than JSCK.

Let's look at this in more detail.

JSCK benchmarks


The Cosmic Realms benchmarks use a trivial example schema; our benchmarks for JSCK use a trivial schema too, but we also use a medium-complexity schema and a very complex schema with nesting and other subtleties. We used a multi-schema benchmarking strategy to make the data more meaningful.

I'm going to show you these benchmarks, but first, here's the short version: JSCK is the fastest JSON Schema validator for Node.js - for both draft 3 and draft 4 of the spec, and for all three levels of complexity that I just mentioned.

Here's the long version. It's a matrix of libraries and schemas. We present the maximum, minimum, and median number of validations per second, for each library, against each schema, with the schemas organized by their complexity and JSON Schema draft. We also calculate the relative speed of each library, which basically means how many times slower than JSCK a given library is. For instance, in the chart below, json-gate is 3.4x to 3.9x slower than JSCK.

The jayschema results are an outlier, but JSCK is basically faster than anything.

When Panda Strike first created JSCK, few other JSON Schema validation libraries existed for Node.js. Since there are so many new alternatives, it's pretty exciting to see that JSCK remains the fastest option.

However, if you're also considering Joi, my best guess is that, for trivial schemas, Joi is about the same speed as JSCK, which is obviously pretty damn fast. I can't currently say anything about its relative performance on complex schemas, but I can say that much.

Here's why. There's a project called enjoi which automatically converts trivial JSON Schemas to Joi's format. It ships with benchmarks against tv4. The benchmarks run a trivial schema, and this is how they look on my box:

tv4 vs joi benchmark:

tv4: 22732 operations/second. (0.0439918ms)
joi: 48115 operations/second. (0.0207834ms)

For a trivial draft 4 schema, Joi is more than twice as fast as tv4. Our benchmarks show that for trivial draft 4 schemas, JSCK is also more than twice as fast as tv4. So, until I've done further investigation, I'm happy to say they look to be roughly the same speed.

However, JSCK's speed advantage over tv4 increases to 5x with a more complex schema. As far as I can tell, nobody's done the work to translate a complex JSON Schema into Joi's format and benchmark the results. So there's no conclusive answer yet for the question of how Joi's speed holds up against greater complexity.

Also, of course, these specific results are dependent on the implementation details of enjoi's schema translation, and if you make any comparison between Joi and a JSON Schema validator, you should remember there's an apples-to-oranges factor.

Nonetheless, JSCK is very easily the fastest JSON Schema validator for Node.js, and although Joi might be able to keep up in terms of performance, a) it might not, and b) either way, its format locks you into a specific language, whereas JSON Schema gives you wide portability and an extraordinary diversity of options.

We are therefore very proud to recommend that you use JSCK if you want fast JSON Schema validation in Node.js.
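
Usage is simple. This sketch is from memory of the JSCK README, so treat the constructor name and the result shape as assumptions and check the project's documentation:

var JSCK = require("jsck");

// Compile the schema once during setup, then validate many times.
var validator = new JSCK.draft4({
  type: "object",
  properties: { name: { type: "string" } }
});

var result = validator.validate({ name: "Ada" });
// result.valid is a boolean; result.errors lists any violations.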

I'm doing a presentation about JSCK at ForwardJS in early February. Check it out if you're in San Francisco.

by Giles Bowkett (noreply@blogger.com) at February 01, 2015 04:33 PM

January 30, 2015

Giles Bowkett

ForwardJS Next Week: JSCK Talk, Free Tickets

I'll be giving a talk on JSCK at ForwardJS next week. And I have two free tickets to give away — first come, first served.

I'm looking forward to this conf a lot, mostly because there's a class on functional programming in JavaScript. I know there are skeptics, but I recently read Fogus's book about that and it was pretty great.

by Giles Bowkett (noreply@blogger.com) at January 30, 2015 10:50 AM

January 29, 2015

Ian Bicking

Encouraging Positive Engagement

In my last post on management I talked about a Manager Tools series, and summarized it as:

The message in these podcasts is: it is your responsibility as a manager to support the company’s decisions. Not just to execute on them, but to support them, to communicate that support, and if you disagree then you must hide that disagreement in the service of the company. You can disagree up — though even that is fraught with danger — but you can’t disagree down. You must hold yourself apart from your team, putting a wall between you and your team. To your team you are the company, not a peer.

I’m not endorsing that approach, but I’m also not sure they are wrong. In the comments on the post and on Hacker News that idea got a lot of pushback, including from people who followed up and listened to the original podcasts. Listening to those podcasts made me feel very uncomfortable, and I wrote that post immediately afterward.

I shared a particular instance where I felt I had to apply this principle. But thinking about this more, and talking about it with my reports, I have a better feeling for how I want to approach this question.

I think the “always be honest” approach that was widely advocated is terribly simplistic. Honesty doesn’t mean saying “hi, how are you doing? That shirt is incredibly ugly.” You might have thoughts, but it isn’t dishonest to hold your tongue. Each of us already considers what we say and how we say it. As a manager, and in a position of leadership, your words have greater impact. It is wise to put in a bit more consideration, especially around certain topics.

That said, I don’t think I need to agree with every choice that the company makes. I don’t have to offer up disagreement, but I do get asked, and should answer honestly. It is my responsibility to help my reports engage positively with the larger institution. That’s a constant: even if everything is totally fucked up, it’s still the right thing to engage positively with circumstances. Otherwise you should leave. But that’s ultimatum-talk, most of the impact is in the margins: engaging more positively in all your actions.

In my position I can sabotage this engagement. What I might see as simple “disagreement” has the potential to undermine whatever good may come out of a decision, and so I have to be careful. For instance, it’s easy in disagreement to telegraph (even unintentionally) a belief that a policy should be ignored, or that feet-dragging is politically advantageous for the team, or that the team should sandbag.

So what if something happens that I really disagree with? Until I’ve thought it through I should probably keep my mouth shut. This requires a degree of humility (first, heal thyself). I have to figure out how I can engage positively with these new circumstances. This might be a lonely exercise, sandwiched above by a decision I disagree with and below by reports I must withhold myself from. But I have to work through this – people treat opinions as though they are immutable, as though it is dishonest or even duplicitous if you do not stick with your first reaction. There is an arrogance in this (of course in management you also have to cultivate sufficient arrogance to tell people what to do). And so it is a real challenge to find the humility to genuinely change your mind about something, or change your perspective. But I don’t think a manager has to completely align themselves with company decisions, they don’t have to paste a smile on and say “everything is great!” The manager has to do good work in a new situation, and that means helping your reports do good work. Pasted on smiles are superfluous.

by Ian Bicking at January 29, 2015 06:00 AM

January 28, 2015

Reinventing Business

Slide-Deck Summary of "Reinventing Organizations"

Here's a very nice overview of the ideas from Frederic Laloux's Reinventing Organizations. I'm about one third of the way through the book and am savoring it. It's been rewiring my brain, and now it's even harder to watch traditional industrial-age organizations in action.

by noreply@blogger.com (Bruce Eckel) at January 28, 2015 05:42 PM

January 27, 2015

Giles Bowkett

Superhero Comics About Modern Tech

There are two great superhero comics which explore social media and NSA surveillance in interesting ways.

I really think you should read these comics, if you're a programmer. Programming gives you incredible power to shape the way the world is changing, as code takes over nearly everything. But both the culture around programming and the education that typically shapes programmers' perspectives emphasize technical details at the expense of subjects like ethics, history, anthropology, and psychology, leading to incredibly obvious and idiotic mistakes with terrible consequences. With great power comes great responsibility, but at Google and Facebook, with great free snacks come great opportunities for utterly unnecessary douchebaggery.


A lot of people in the music industry talk about Google as evil. I don’t think they are evil. I think they, like other tech companies, are just idealistic in a way that works best for them... The people who work at Google, Facebook, etc can’t imagine how everything they make is not, like, totally awesome. If it’s not awesome for you it’s because you just don’t understand it yet and you’ll come around. They can’t imagine scenarios outside their reality and that is how they inadvertently unleash things like the algorithmic cruelty of Facebook’s yearly review (which showed me a picture I had posted after a doctor told me my husband had 6-8 weeks to live).

Fiction exists to explore issues like these, and in particular, fantastical fiction like sci-fi and superhero comics is extremely useful for exploring the impact of new technologies on a society. This is one of the major reasons fiction exists and has value, and these comics are doing an important job very effectively. (There's sci-fi to recommend here as well, but a lot of the people who were writing sci-fi about these topics seem to have almost given up.)

So here these comics are.



In 2013 and 2014, Peter Parker was dead. (Not really dead, just superhero dead.) The megalomaniac genius Otto Octavius, aka Dr. Octopus, was on the verge of dying from terminal injuries racked up during his career as a supervillain. So he tricked Peter Parker into swapping bodies with him, so that Parker died in Octavius's body and Octavius lived on inside Parker's. But in so doing, he acquired all of Parker's memories, and saw why Parker dedicated his life to being a hero. Octavius then chose to follow his example, but to do so with greater competence and intelligence, becoming the Superior Spider-Man.

The resulting comic book series was amazing. It's some of the best stuff I've ever seen in a whole lifetime reading comics.

Given that his competence and intelligence were indeed both superior, Octavius did actually do a much better job of being Spider-Man than Spider-Man himself had ever done, in some respects. (Likewise, as Peter Parker, he swiftly obtained a doctorate, launched a successful tech startup, and turned Parker's messy love life into something much simpler and healthier.) But given that Octavius was a megalomaniac asshole with no reservations about murdering people, he did a much worse job, in other respects.

In the comics, the Superior Spider-Man assassinates prominent criminals, blankets the entire city in a surveillance network comprised of creepy little eight-legged camera robots, taps into every communications network in all of New York, and uses giant robots to completely flatten a crime-ridden area, killing every inhabitant. (He also rants constantly, in hilariously overblown terms, like the verbose and condescending supervillain he was for his entire previous lifetime.)



Along the way, Octavius meets "supervillains" who are merely pranksters -- kids who hit the mayor with a cream pie so they can tweet about it -- and he nearly kills them.



As every superhero does, of course, Peter Parker eventually comes back from the dead and saves the day. But during the course of the series' nearly two-year run, The Superior Spider-Man did an absolutely amazing job of illustrating how terrible it can be for a city to have a protector with incredible power and no ethical boundaries. Anybody who works for the NSA should read these comics before quitting their terrible careers in shame.



DC Comics, meanwhile, has rebooted Batgirl and made social media a major element of her life.



I only just started reading this series, and it's a fairly new reboot, but as far as I can tell, these new Batgirl comics are comics about social media which just happen to feature a superhero (or superheroine?) as their protagonist. She promotes her own activities on an Instagram-like site, uses it to track down criminals, faces impostors trying to leverage her fame for their own ends, and meets dates in her civilian life as Barbara Gordon through Hooq, a fictional Tinder equivalent.

The most important difference between these two series is that one ran for two years and is now over, while the other is just getting started. But here's my impression so far. Where Superior Spider-Man tackled robotics, ubiquitous surveillance, and an unethical "guardian" of law and order, Batgirl seems to be about the weird cultural changes that social media are creating.

Peter Parker's a photographer who works for a newspaper, and Clark Kent's a reporter, but this is their legacy as cultural icons created many decades ago. Nobody thinks of journalism as a logical career for a hero any more. Batgirl's a hipster in her civilian life, she beats up douchebag DJs, and I think she might work at a tech startup, but maybe she's in grad school. There's a fun contrast here; while the Superior Spider-Man's alter ego "Peter Parker," really Otto Octavius, basically represents the conflict between how Google and the NSA see themselves vs. how they look to everyone else — super genius vs. supervillain — Barbara Gordon looks a lot more like real life, or at least, the real life of people outside of that nightmarish power complex.

by Giles Bowkett (noreply@blogger.com) at January 27, 2015 06:02 PM

Ian Bicking

A Product Journal: To MVP Or Not To MVP

I’m going to try to journal the process of a new product that I’m developing in Mozilla Cloud Services. My previous post was The Tech Demo, and the first in the series is Conception.

The Minimal Viable Product

The Minimal Viable Product is a popular product development approach at Mozilla, and judging from Hacker News it is popular everywhere (but that is a wildly inaccurate way to judge common practice).

The idea is that you build the smallest thing that could be useful, and you ship it. The idea isn’t to make a great product, but to make something so you can learn in the field. A couple definitions:

The Minimum Viable Product (MVP) is a key lean startup concept popularized by Eric Ries. The basic idea is to maximize validated learning for the least amount of effort. After all, why waste effort building out a product without first testing if it’s worth it.

– from How I built my Minimum Viable Product (emphasis in original)

I like this phrase “validated learning.” Another definition:

A core component of Lean Startup methodology is the build-measure-learn feedback loop. The first step is figuring out the problem that needs to be solved and then developing a minimum viable product (MVP) to begin the process of learning as quickly as possible. Once the MVP is established, a startup can work on tuning the engine. This will involve measurement and learning and must include actionable metrics that can demonstrate cause and effect.

– Lean Startup Methodology (emphasis added)

I don’t like this model at all: “once the MVP is established, a startup can work on tuning the engine.” You tune something that works the way you want it to, but isn’t powerful or efficient or fast enough. You’ve established almost nothing when you’ve created an MVP, no aspect of the product is validated, it would be premature to tune. But I see this antipattern happen frequently: get an MVP out quickly, often shutting down critically engaged deliberation in order to Just Get It Shipped, then use that product as the model for further incremental improvements. Just Get It Shipped is okay, incrementally improving products is okay, but together they are boring and uncreative.

There’s another broad discussion to be had another time about how to enable positive and constructive critical engagement around a project. It’s not easy, but that’s where learning happens, and the purpose of the MVP is to learn, not to produce. In contrast I find myself impressed by the sheer willfulness of the Half-Life development process, which apparently involved months of six-hour design meetings, four days a week, producing large and detailed design documents. Maybe I’m impressed because it sounds so exhausting, a feat of endurance. And perhaps it implies that waterfall can work if you invest in it properly.

Plan plan plan

I have a certain respect for this development pattern that Dijkstra describes:

Q: In practice it often appears that pressures of production reward clever programming over good programming: how are we progressing in making the case that good programming is also cost effective?

A: Well, it has been said over and over again that the tremendous cost of programming is caused by the fact that it is done by cheap labor, which makes it very expensive, and secondly that people rush into coding. One of the things people learn in colleges nowadays is to think first; that makes the development more cost effective. I know of at least one software house in France, and there may be more because this story is already a number of years old, where it is a firm rule of the house, that for whatever software they are committed to deliver, coding is not allowed to start before seventy percent of the scheduled time has elapsed. So if after nine months a project team reports to their boss that they want to start coding, he will ask: “Are you sure there is nothing else to do?” If they say yes, they will be told that the product will ship in three months. That company is highly successful.

– from Interview Prof. Dr. Edsger W. Dijkstra, Austin, 04–03–1985

Or, a warning from a page full of these kind of quotes: “Weeks of programming can save you hours of planning.” The planning process Dijkstra describes is intriguing, it says something like: if you spend two weeks making a plan for how you’ll complete a project in two weeks then it is an appropriate investment to spend another week of planning to save half a week of programming. Or, if you spend a month planning for a month of programming, then you haven’t invested enough in planning to justify that programming work – to ensure the quality, to plan the order of approach, to understand the pieces that fit together, to ensure the foundation is correct, ensure the staffing is appropriate, and so on.

I believe “Waterfall Design” gets much of its negative connotation from a lack of good design. A Waterfall process requires the design to be very, very good. With Waterfall, the design is too important to leave to the experts: to let the architect arrange technical components, the program manager arrange schedules, the database architect design the storage, and so on. It’s anti-collaborative, disengaged. It relies on intuition and common sense, and those are not powerful enough. I’ll quote Dijkstra again:

The usual way in which we plan today for tomorrow is in yesterday’s vocabulary. We do so, because we try to get away with the concepts we are familiar with and that have acquired their meanings in our past experience. Of course, the words and the concepts don’t quite fit because our future differs from our past, but then we stretch them a little bit. Linguists are quite familiar with the phenomenon that the meanings of words evolve over time, but also know that this is a slow and gradual process.

It is the most common way of trying to cope with novelty: by means of metaphors and analogies we try to link the new to the old, the novel to the familiar. Under sufficiently slow and gradual change, it works reasonably well; in the case of a sharp discontinuity, however, the method breaks down: though we may glorify it with the name “common sense”, our past experience is no longer relevant, the analogies become too shallow, and the metaphors become more misleading than illuminating. This is the situation that is characteristic for the “radical” novelty.

Coping with radical novelty requires an orthogonal method. One must consider one’s own past, the experiences collected, and the habits formed in it as an unfortunate accident of history, and one has to approach the radical novelty with a blank mind, consciously refusing to try to link it with what is already familiar, because the familiar is hopelessly inadequate. One has, with initially a kind of split personality, to come to grips with a radical novelty as a dissociated topic in its own right. Coming to grips with a radical novelty amounts to creating and learning a new foreign language that can not be translated into one’s mother tongue. (Any one who has learned quantum mechanics knows what I am talking about.) Needless to say, adjusting to radical novelties is not a very popular activity, for it requires hard work. For the same reason, the radical novelties themselves are unwelcome.

– from EWD 1036, On the cruelty of really teaching computing science

Research

All this praise of planning implies you know what you are trying to make. Unlikely!

Coding can be a form of planning. You can’t research how interactions feel without having an actual interaction to look at. You can’t figure out how feasible some techniques are without trying them. Planning without collaborative creativity is dull, planning without research is just documenting someone’s intuition.

The danger is that when you are planning with code, it feels like execution. You can plan to throw one away to put yourself in the right state of mind, but I think it is better to simply be clear and transparent about why you are writing the code you are writing. Transparent because the danger isn’t just that you confuse your coding with execution, but that anyone else is likely to confuse the two as well.

So code up a storm to learn, code up something usable so people will use it and then you can learn from that too.

My own conclusion…

I’m not making an MVP. I’m not going to make a maximum viable product either – rather, the next step in the project is not to make a viable product. The next stage is research and learning. Code is going to be part of that. Dogfooding will be part of it too, because I believe that’s important for learning. I fear thinking in terms of “MVP” would let us lose sight of the why behind this iteration – it is a dangerous abstraction during a period of product definition.

Also, if you’ve gotten this far, you’ll see I’m not creating minimal viable blog posts. Sorry about that.

by Ian Bicking at January 27, 2015 06:00 AM

January 25, 2015

ZZ85

MrDoob Approves – A Javascript CodeStyle Editor+Validator+Formatter Project

Near the close of the year 2014, I had an idea (while in the shower): write a little webpage which gives you the answer to “does mrdoob approve your code style”.

Oftentimes three.js gets decent pull requests that require a little more formatting to match the project’s code style. I myself have been found guilty many times of not adhering to the code style, and in the past, when there were no guidelines on what it was, mrdoob and alteredq would reformat the code on their own.

Today we have slightly better documentation on contributing and code style, but code style offences still happen pretty regularly, I guess.

Therefore the idea was to simply use a browserified build of node-jscs together with Mr.doob’s Code Style™ (MDCS) preset (developed earlier in the year), and make it super accessible on a website. One thought was to buy a domain name like is-this-mrdoob-approved.com, in the style of some “questionable” domains, e.g.

  1. http://caniuse.com/ (html5 features in browser)
  2. http://www.isitdownrightnow.com/ (popular webservices)
  3. http://www.willitblend.com/ (almost anything)

So that’s how the name and the github project “mrdoobapproves” came about, and I tweeted about it shortly after.

But I thought there was room for improvement on this initial idea, so I created a github repository and applied the “Open open source” approach I learnt from Mikeal Rogers.

Gero3, another awesome three.js contributor (who had previously contributed the mdcs preset to node-jscs), hopped on board, and in just a couple of days (over the new year especially) we merged 20 pull requests, added more awesome features like autoformatting, and released version 1.0 – https://github.com/zz85/mrdoobapproves/releases/tag/v1.0.0

So I guess that’s the long story short. I could possibly talk more about the history, but if you’re really interested you could piece things together by reading https://twitter.com/mrdoob/status/463709502853103617, https://gist.github.com/zz85/e929503387cdc597b4f7, https://twitter.com/BlurSpline/status/463863644933992449/photo/1 and https://github.com/mrdoob/three.js/issues/4802. I could also go into why people would love or hate Mr.doob’s Code Style™, but for now I would say that if you read too much dense code, trying out MDCS might be a fresh change for you.

There is also more to say about the implementation of this project, but for now I’ll just note that it is built with codemirror and node-jscs (which uses esprima), both of which are really, really awesome libraries. There are also slight codemirror plugin additions, and the auto-formatting is based on gero’s branch of node-jscs, since auto-formatting is coming to node-jscs.
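
If you'd rather run the same check locally, a minimal sketch with the node-jscs command line and the mdcs preset it ships with:

// .jscsrc: point jscs at Mr.doob's Code Style preset
{
    "preset": "mdcs"
}

Then npm install -g jscs and run jscs path/to/file.js to get a report of any style violations.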

For those who are interested in js code formatting, be sure to check out

  • jsbeautifier – Almost defacto online JavaScript beautifier
  • JSNice – Statistical renaming, Type inference and Deobfuscation
  • jsfmt – another tool for renaming + reformatting javascript using esprima and esformatter.

In conclusion, I think this has been a really interesting project for me, and great thanks to Gero3, who has been a great help. I think this is also an example of how, when a topic (code styling) is usually contentious among programmers, rather than complaining or debating too much, it is much more useful to build tools that fix things and to focus on the things which matter.

So what’s next? It’s nice to see some usage of this tool on three.js, and perhaps one additional improvement is to hook up three.js with travis for code style checking.
Also, if anyone’s interested in improving this tool, check out the enhancement list and feel free to create new issues and pull requests.

Finally, in case anyone missed the links, check out

1. The demo http://zz85.github.io/mrdoobapproves
2. Github project https://github.com/zz85/mrdoobapproves
3. The 1.0 release https://github.com/zz85/mrdoobapproves/releases

by Zz85 at January 25, 2015 02:44 PM

Decyphering Glyph

Security as Stencil

On the Internet, it’s important to secure all of your communications.

There are a number of applications which purport to give you “secure chat”, “secure email”, or “secure phone calls”.

The problem with these applications is that they advertise their presence. Since “insecure chat”, “insecure email” and “insecure phone calls” all have a particular, detectable signature, an interested observer may easily detect your supposedly “secure” communication. Not only that, but the places that you go to obtain them are suspicious in their own right. In order to visit Whisper Systems, you have to be looking for “secure” communications.

This allows the adversary to use “security” technologies such as encryption as a sort of stencil, to outline and highlight the communication that they really want to be capturing. In the case of the NSA, this dumps anyone who would like to have a serious private conversation with a friend into the same bucket, from the perspective of the authorities, as a conspiracy of psychopaths trying to commit mass murder.

The Snowden documents already demonstrate that the NSA does exactly this; if you send a normal email, they will probably lose interest and ignore it after a little while, whereas if you send a “secure” email, they will store it forever and keep trying to crack it to see what you’re hiding.

If you’re running a supposedly innocuous online service or writing a supposedly harmless application, the hassle associated with setting up TLS certificates and encryption keys may seem like a pointless distraction. It isn’t.

For one thing, if you have anywhere that user-created content enters your service, you don’t know what they are going to be using it to communicate. Maybe you’re just writing an online game, but users will use your game for something as personal as courtship. Can we agree that the state security services shouldn’t be involved in that? Even if you were specifically writing an app for dating, you might not anticipate that the police will use it to show up and arrest your users so that they will be savagely beaten in jail.

The technology problems that “secure” services are working on are all important. But we can’t simply develop a good “secure” technology, consider it a niche product, and leave it at that. Those of us who are software development professionals need to build security into every product, because users expect it. Users expect it because we are, in a million implicit ways, telling them that they have it. If we put a “share with your friend!” button into a user interface, that’s a claim: we’re claiming that the data the user indicates is being shared only with their friend. Would we want to put in a button that says “share with your friend, and with us, and with the state security apparatus, and with any criminal who can break in and steal our database!”? Obviously not. So let’s stop making the “share with your friend!” button actually do that.

Those of us who understand the importance of security and are in the business of creating secure software must, therefore, take on the Sisyphean task of not only creating good security, but of competing with the insecure software on its own turf, so that people actually use it. “Slightly worse to use than your regular email program, but secure” is not good enough. (Not to mention the fact that existing security solutions are more than “slightly” worse to use). Secure stuff has to be as good as or better than its insecure competitors.

I know that this is a monumental undertaking. I have personally tried and failed to do something like this more than once. As the Rabbi Tarfon put it, though:

It is not incumbent upon you to complete the work, but neither are you at liberty to desist from it.

by Glyph at January 25, 2015 12:16 AM

January 23, 2015

Blue Sky on Mars

The Correct Floor Plan For Your Startup

There’s been a lot of discussion lately about whether particular office floor plans are detrimental to your startup’s success.

Since I moonlight as an Investigative Journalist, I decided to analyze three popular approaches that you can take:

Open Floor Plan

An “open floor plan office” derives from the Polish phrase meaning, “fuck the employees let’s shove them all in a pile and let them sort it out”. This plan, popularized by Henry Ford, became pivotal in the 1950’s to America’s continued technological advantage over the Soviet Union due to the USSR’s overwhelming surplus of walls. No matter what approach the Politburo tried, the only way they could feasibly deal with this surplus was to create fields of cubicles, leading to the infamous open floor plan gap that would dictate foreign policy for decades to come.

In an open floor plan, teams of developers tend to be regurgitated onto endless miles of tables, which are pushed together so that management can easily corral them into meetings (but not close enough such that developers unionize).

To defend themselves, the immune systems of these developers naturally evolved extensions of their earlobes into growths that we now call “headphones”. This led many developers to claim that techno music “really helped them concentrate and code”, which is a fate easier to swallow than sitting and talking about something called “360 degree feedback” in a meeting room with your manager.

The open floor plan grew in fame in recent years because, according to Peter Thiel, “everyone wanted to be like that one guy in The Social Network trailer and yell MARRRRRRK! and throw shit on Mark Zuckerberg’s desk”.

Closed Floor Plan

The “closed floor plan office” came directly from those misled years spent in those Soviet grain farms which grew walls that formed the basis of their cubicle society from 1952-1991.

Though detrimental towards Team Bonding, cubicles eventually were adopted in the American workplace by Enron executives in 1994, who sought a more potent way to demoralize its workplace through the use of emotionally-crippling loneliness and isolation. After a few months of this approach, Enron’s handlers found they didn’t even need to lock the doors to cubicles anymore; employees would naturally sit at their desk until the end of business at 10PM. (Though oft-attempted, the economical and lucrative bathroom/cubicle combination was found to be unfeasible until new laws were passed by Congress in 2008 through rider on an extension of the Patriot Act.)

Still, closed floor plans are generally frowned upon by startups primarily due to the death of a prominent FORTRAN programmer who had a heart attack in his private office while drinking his eighteenth Mountain Dew™ that day during the first dotcom bubble. His body wasn’t discovered until the second bubble.

The Remote Office

A relatively late contender, the remote office became wildly popular with the introduction of Skype calling, which let you listen in with crystal-clear quality on the exact sound of the splash made while your coworker was on the toilet discussing thin client strategies during your 9am standup (well, sit-down).

The remote office would have similar soul-crushing loneliness of the closed floor plan office with the exception that now it is no longer necessary to wear pants. With that single achievement, remote work has been scientifically proven to be accessible, approachable, and advantageous to millions of remote workers who happen to be found in the same timezone as the main office.

Luckily, there’s always one asshole who is literally across the entire world and one of you always has to wake up way fucking early in the morning to talk to them and oh just kidding it’s always going to be you because the other dude is “a little hungover and still rolling a little from this crazy paris hilton dj set last night” to do it in his morning today so could you just wake up in eight hours thanks i’d really appreciate it!

Conclusion

So, as you can see, all of these office structures have considerable benefits and drawbacks.

After studying the problem for a while, I’ve come up with two possible solutions.

Abolish work. I’m not sure it makes sense to do work anymore, and I’m starting to think that people are happier just like, not doing it.

Use a mixture. Maybe — just maybe — the long, passionate discussion and debate about this is indicative that different people have different tastes. Some people dig open offices, some hate them, and some hate offices of all kinds (or have existing obligations at home that preclude them from going into an office every day). So it seems to me that a good approach might be to have open spaces, closed spaces, and a healthy remote environment, and let people choose. People’s tastes change over time, after all, and some days they might want isolation, some days they might not. A little flexibility goes a long way.

That said, the real best-of-all-possible-worlds option is, of course, to have one gigantic bathroom with beds in it so that people never have to leave the workplace. Traditionally it’s appropriate to buy employees an expensive watch after a few decades at the company, but I’ve always found it more prudent to give them TWO watches (one for each wrist!) during their on-boarding so that you can attach the chains directly to their new stainless steel watches so that you don’t have people accidentally leaving the company before their 30 years are up. Not sure why people haven’t adopted that practice yet.

January 23, 2015 12:00 AM

January 22, 2015

Ian Bicking

A Product Journal: The Technology Demo

I’m going to try to journal the process of a new product that I’m developing in Mozilla Cloud Services. My previous and first post was Conception.

As I finished my last post I had a product idea built around a strategy (growth through social tools and sharing) and a technology (freezing or copying the markup). But that’s not a concise product definition centered around user value. It’s not even trying. The result is a technology demo, not a product.

In my defense I’m searching for some product, I don’t know what it is, and I don’t know if it exists. I have to push this past a technology demo, but if I have to start with a technology demo then so it goes.

I’ve found a couple specific experiences that help me adapt the product:

  • I demo the product and I sense an excitement for something I didn’t expect. For example, a view that I thought was just a logical necessity might be what most appeals to someone else. To do this I have to show the tool to people, and it has to include things that I think are somewhat superfluous. And I have to be actively reading the person viewing the demo to sense their excitement.

  • Remind myself continuously of the strategy. It also helps when I remind other people, even if they don’t need reminding – it centers the discussion and my thinking around the goal. In this case there’s a lot of personal productivity use cases for the technology, and it’s easy to drift in that direction. It’s easy because the technology facilitates those use cases. And while it’s cool to make something widely useful, that won’t make this tool work the way I want as a product, or work for Mozilla. (And because I plan to build this on Mozilla’s dime it better work for Mozilla! But that’s a discussion for another post.)

  • I’ll poorly paraphrase something I’m sure someone can source in the comments: a product that people love is one that makes those people feel great about themselves. In this case, makes them feel like a journalist and not just a crank, or makes them feel like they are successfully posing as a professional, or makes them feel like what they are doing is appreciated by other people, or makes them feel like an efficient organizer. In the product design you can exalt the product, try to impress people, try to attract compliments on your own prowess, but love comes when a person is impressed with themselves when they use your product. This advice helps keep me from valuing cleverness.

A common way to pull people out of technology-focused thinking is to ask “what problem does this solve?” While I appreciate this question more than I used to, it still makes me bristle. Why must everything be focused on problems? Why not opportunities! Why? An answer: problems are cases where a person has already articulated a tension and an openness to resolution. You have a customer in waiting. But must we confine ourselves to the partially formed conventional wisdom that makes something a “problem”? (One fair answer to this question is: yes. I remain open to other answers.) Maybe a more positive alternative to “what problem does this solve?” is “what does this let people do that they couldn’t do before?”

What I’m certain of is that you should constantly remember the people using your tool will care most about their interests, goals, and perspective; and will not care much about the interests, goals, or perspective of the tool maker.

So what should this tool do? If not technology, what defines it? A pithy byline might be share better. I don’t like pithy, but maybe a whole bag of pithy:

  • Improving on the URL
  • Own what you share
  • Share content, not pointers
  • Share what you see, anything you see
  • Every share is a message, make it your message
    Dammit, why do I feel compelled to noun “share”?
  • Share the context, the journey, not just the web destination
  • Own your perspective, don’t give it over to site owners
  • Know how and when people see what you share
  • Build better content, even if the publisher doesn’t
  • Trade in content, not promises for content
  • Copy/enhance/share

No… quantity doesn’t equal quality, I suppose. Another attempt:

When you share, you are a publisher. Your medium is the IM text input, or the Facebook status update, or the email composition window. It seems casual, it seems pithy, but that individual publishing is what the web is built on. I respect everyone as a publisher, every medium as worthy of improvement, and this project will respect your efforts. We will try to make a tool that can make every instance just a little bit better, simple when all you need is simple, polished if you want. We will defer your decisions because you should decide in context, not make decisions in the order that makes our work easier; we will be transparent to you, your audience, and your source; respect for the reader is part of our brand promise, and that adds to the quality of your shares; we believe content is a message, a relationship between you and your audience, and there is no universally appropriate representation; we believe there is order and structure in information, but only when that information is put to use; we believe our beliefs are always provisional and tomorrow it is our prerogative to rebelieve whatever we want most.

Who is we? Just me. A pretentiously royal we. It can’t stay that way for long though. More on that soon…

[The next post in this series is To MVP Or Not To MVP]

by Ian Bicking at January 22, 2015 06:00 AM

Blue Sky on Mars

Don't Break the Streak Maybe

The most important philosopher of our time, Jerry Seinfeld, has a major tip about productivity:

He told me to get a big wall calendar. For each day that I do my task of writing, I get to put a big red X over that day. Don’t break the chain. Skipping one day makes it easier to skip the next.

This is very simple advice that, when applied correctly, generates hundreds of millions of page views for your tech blog when you inevitably write about it in the context of technology and productivity and time management, which are themselves the Holy Trinity of Tech Journalism That Get People To Link To You. If Jerry Seinfeld had also said “don’t break the chain, and also I hate JavaScript” it would have hit The Tech Industry Traffic Sweet Spot and the internet would have collapsed.

Productizing

I’m into the idea of streaks. They work for me. But there’s also been an even more interesting conversation around the idea of productizing streaks.

When you take an inherently personal choice — choosing to accomplish a task on a regular basis — and put it into your product, you’re making a pretty large value statement. You’re saying that the task at hand is worth doing on a regular basis. This is usually true on the surface, but it still raises some interesting conversations about what that means.

It’s hip to put this concept into products these days, and for good reason: it encourages users to use your product regularly.

Strava has a couple of “current week” widgets that make you feel unhealthy when you’re not exercising:

Strava's current week streak

Day One gives you a visual graph of the last fifty days of journal entries:

Day One streak

And, of course, Uber's free-every-third-ride promotion:

Uber streak

Just kidding.

Games, of course, have honed these mechanics for years. Destiny, which I haven’t played because I have a life, much to the chagrin of my friends who should be in a twelve-step program, apparently only releases some weapons on specific days that you need to be online for. WoW has similar mechanics of getting you playing often and regularly. Anything to get people using your product over and over again.

What’s healthy, what’s not

I think there are spectrums of this behavior. GitHub, for example, has had our Contributions Graph up for a few years now. You get a colored square for every day you write a commit, open a new issue, and so on.

Here’s my contributions graph for the last year:

@holman's streak

We’ve gotten some flak — and appropriately so, I think — for suggesting that a streak includes weekends. If your concept of “GitHub” falls along the lines of being work-related, then yeah, working on the weekends might be unhealthy, depending on your perspective. For others, working on open source on the weekends is a hobby, and a break from the grind, so to speak. Still others enjoy seeing grey squares on weekends as a badge of honor, as proof that they’re able to disconnect. And, of course, there’s plenty of people who couldn't possibly care whatsoever.

Like a lot of things in society, I think it usually comes down to how you decide to use a particular tool. And that’s tricky for a lot of reasons.

Take a look at my contributions graph above again. Having the concept of a daily “streak” was really, really helpful for me. I made a decision to build more than the previous few years (where I focused less on code and more on things like conferences and doing support), and that brought me a great deal of satisfaction and happiness. Promoting the concept of a streak helped me make my target, and I think it made me a better human last year.

Until it didn’t. At a certain point last fall, it stopped being a useful tool and became more of something I did because I did it because it was something I did. That, combined with a number of other factors, led to some straight-up burnout. It’s a concept I’d never really had to deal with before. Shit, I like building things, why wouldn’t I get turned on from doing it often?

Know thyself

The difference, as I’m discovering while taking a few months off to pick up the pieces, is that you need to know what’s helpful to you and what’s not. And the real trick is that what’s helpful can be fluid. The problem really happens when you assume that what is helpful yesterday is the same thing that is helpful today. And they both can be different from what’s helpful tomorrow.

And humans are pretty shit at that, I think. It requires a lot of personal responsibility to ask yourself questions… and ask them frequently, at that. This goes for big things like work/life balance, but even for smaller things like gaming and exercise streaks. Are you working out too much, because that’s just what you do? Most athletes have gotten to the point of needing to step back and admit that whoa, I shouldn’t actually run this week because my knee is pretty fucked up right now, and I kind of wish I took it a little easier last time. Maybe then I wouldn’t have gotten to where I am now.

Man, that thought process is endemic to a lot of things in life.

Is productizing streaks a problem in our industry? Sure, at some level. Are some streaks healthy? Sure. Are some unhealthy? Sure. The real trick, though, is figuring out what really works for you, and not getting sucked into something if it ends up becoming less valuable to you.

Sometimes it’s great to break the streak.

January 22, 2015 12:00 AM

January 21, 2015

Blue Sky on Mars

Post-Publicity Personalities

Part of the horror of looking through your old tweets is discovering how much of an asshole you were when no one knew you existed.

Online behavior tends to change once you’ve actually been in the spotlight yourself. Here’s a flowchart I made to help clarify this:

As you can see, after achieving a PUBLICITY EVENT — think along the lines of a blog post, a product launch, public speaking, things like that — you scientifically have a 79% chance of becoming A Better Human, and unfortunately you have a 23% chance, scientifically, of becoming an even bigger dicknose than you were before. (This is actually a scientific, double-blind study. It’s double-blind because neither I nor you have access to the data.)

It’s cool if you haven’t gotten to a PUBLICITY EVENT yet, and it’s cool if you never do. After all, there’s a non-zero chance that you might end up being a jerk. But for those who haven’t, here’s how something like this goes:

You: Hey! I have something important I’d like to share with the world!
The World: Fuck you!

Of The World, the silent majority will likely appreciate what you’ve shared, the minority will be complimentary, and the fringe extreme will tell you in very specific detail why you shouldn’t have been born and also your mother is of suspicious descent as well.

Your major mistake, of course, is that you continue to stubbornly refuse to stop being a human being and thus genetically will focus exclusively on the feedback from the latter group of people. And your feeble human emotions will be all like “WAHHHHH I’M GOING TO TYPE ANGRY TWEETS ALL NIGHT! I CAN FIX THIS!” in-between watching episodes of Star Trek: The Next Generation, because Captain Jean-Luc Picard is the only one who really gets you right now.

Then the split happens

I think that’s the point where things change for some people. Everyone grows a bit of a hardened shell, but for some people it makes them jerks. For others, it makes them more accommodating. I look back at my old tweets and marvel at how abrasive they sometimes were towards apps I used every day, company decisions I didn’t understand at the time, and so on. In my case, maybe it’s just a byproduct of growing up: the more experiences you go through, the more understanding you are of flaws.

This isn’t a matter of right or wrong, in my mind. People can legitimately fuck up, or be wrong, or do something wrong. That’s always going to happen throughout the timeline of human civilization. But it’s how you carry yourself when you disagree that’s interesting to me. A lot of people, once they’ve done it themselves, understand that putting themselves out there is a scary thing. They’re able to disagree, but in a way that isn’t carpet bombing “fuck you” over tweets.

All of this is why I think it’s great for more people to write blog posts, to contribute to open source, to learn about public speaking, and so on. (If only we could filter by that metric!) The amount of respect I have for someone who just did their first five-minute lightning talk, even if it wasn’t perfect, is unbounded. It’s a hard first step to make. And even though I was being facetious about the percentage breakdown earlier, I do think that giving more people the spotlight generates a more understanding atmosphere for everyone.

So, create. Help others create. And be less of a meanie online, when you can.


If you didn’t like this internet article, please address your concerns to @holman on Twitter, and please include a link to your favorite TNG episode — thanks in advance!

January 21, 2015 12:00 AM

January 20, 2015

Blue Sky on Mars

You’ll Always Miss Being in the Basement

We were tucked away in the corner room of a damp, musty basement.

A walking cliché, really: a handful of basically kids, working on The Next Big Thing, getting paid practically nothing, in a startup on University Ave in Palo Alto. We’d see the Facebook kids across the street dress up for their toga parties and hundred million user celebrations. Listen in on VCs courting people like us at lunch. Go to the never-ending list of meetups every week.

Our shabby basement

The founder and the CEO had done it all before in the previous boom, growing the company to billions. They knew what growth actually meant for a company; they had lived it.

One day, in-between talking about our obviously bright and inevitable future and where we wanted to take the fledgling company, the founder stopped and said something that sounded like a lightning bolt to me, even back then:

One day this company will get huge. You’ll help hire your replacements, and they’ll hire still more people under them. We’ll be making tons of money, we’ll IPO, we’ll be famous, we’ll move into bigger and better offices.

As great as that’s going to be, I guarantee you one thing: you’re always going to miss being down in this basement, with these people.

Like many things he said, this was true at face value, but it just wasn’t in the cards for our particular company. The company was perpetually just about to close the one massive sale that would make us rich, at least according to our sales team.

But he was right. Even now, parts of me want to be back in that basement. I want to be working with that team, on a product we thought would be the next big thing, laughing with each other on the walk to lunch. Feeling like a part of something.

Most decent companies have a basement. Even if the company isn’t successful, or rich, or famous, or doesn’t even have a physical basement, they have a basement. Even if the work itself was grueling and underappreciated… a lot of pride can come from within a team’s frustration, at times.

I’m stretching the metaphor, but money can’t buy basements. My favorite memories from my five years at GitHub are simple and cheap. Having our weekly small-sided lunchtime soccer game against Atlassian. Passionately arguing the style of music required for us to finish Pull Requests. Organizing drinks with Square back when we both could fit into one tiny bar. Debating scaling of our respective sites with Heroku friends (“it’s just a fucking Git repository, that’s easy!”, we would inevitably learn). Literally every one of our office dogs.

Look: there’s plenty to worry about in a growing company. Product, trajectory, hiring. At the end of the day, a company needs to make money. That’s how you pay the employees that contribute so much to your success.

But at some point, you’re going to want to end up back in that basement.

Probably the folly of being human, maybe. You remember the good times and forget the bad. You take to glamorous reinterpretations of history. But it’s a nice comfort nonetheless. It’s why I do things, anyway.

January 20, 2015 12:00 AM

January 19, 2015

John Udell

TypeScript: Industrial-strength JavaScript

Historians who reflect on JavaScript's emergence as a dominant programming language in the 21st century may find themselves quoting Donald Rumsfeld: "You go to war with the army you have, not the army you might wish to have."

For growing numbers of programmers, JavaScript is the army we have. As we send it into the field to tackle ever more ambitious engagements on both the client and the server side, we find ourselves battling the language itself.


JavaScript was never intended for large programs built by teams that use sophisticated tools to manage complex communication among internal modules and external libraries. Such teams have long preferred strongly typed languages like Java and C#. But those languages' virtual machines never found a home in the browser. It was inevitable that JavaScript alternatives and enhancements would target the ubiquitous JavaScript virtual machine.


by Jon Udell at January 19, 2015 11:00 AM

Blue Sky on Mars

How GitHub Writes Blog Posts

You may have heard that we use GitHub to build GitHub. But our company isn't built on just code: there's strategy and HR policies and internal communications and a multitude of other documents that need to get written, too. All of which we can use GitHub to write as well.

Now, we're a bit of an oddball company. You'll have a tough time finding another organization with so many lawyers who know Git. But much of our code workflow can be easily applied elsewhere in the company, too: like writing our blog posts.

The Blog Post

The very first rule of GitHub, which has been the rule of law since day one: if you've built a new feature, you get to write the announcement post for it.

Letting a team write their own announcement is a nice nod to all the work they've put into the project. This habit is similar to the author shout-outs Apple would tuck into the About windows of early Macintosh apps. Celebrating each other's successes is a really important facet of a successful culture: shipping is contagious, after all.

Like code, though, you want the work you're putting out into the world to be approachable and to make sense. A great way to do that is to solicit outside opinions. So, also like code, our blog posts go through the pull request pipeline as well. Cut a new branch on our blog posts repository, write a new draft, and open a pull request to start the discussion (all of which you can do without leaving github.com).

Continuous Integration

Normally, the first thing that happens when we open a pull request is that continuous integration runs the tests. Blog posts don't need to be treated much differently, surprisingly enough. Think of this process like a syntax linter for your words: breaking the build isn't necessarily bad, per se, but it might surface suggestions you'll want to incorporate. It gives you immediate feedback without requiring a lot of additional overhead from our blog editors.

For example, here's the current small suite of tests that run on push:

  • Image alt tags. All images in your Markdown should have 'alt' tags for accessibility purposes.
  • Image size. All images used should be less than a megabyte in size. Helps prevent accidental screenshots that might be massive and slow down the mobile experience.
  • Image proportions. All images should look good on high-density displays. Currently this just means the image width should be at least twice the width of the on-page content.
  • CDN-hosted images. All images should be hosted on our CDN, for performance, security, and longevity reasons.
  • No emoji. We use Emoji all over the place internally, but they're usually not quite the right tone in external communications.
  • No "Today, ". We've suddenly started realizing that we had gajillions of posts in the past that started out like "Today, we're pleased to announce x". This is just a silly style thing, but we decided to avoid this common intro in future posts for the sake of variety and sounding like a human.

Failing any of these tests will break the build for your specific pull request. You can still merge it and publish the post... CI in this repository is really just used as a suggestion rather than a hard-and-fast rule. The important part is that running these spot checks reduces the amount of work and time needed for someone else to manually inspect the size of your images, for example.

Merging a red pull request won't break the overall repository since these particular tests are only run on files changed in each branch:

diffable_files = `git diff --name-only --diff-filter=ACMRTUXB origin/master... | grep .md`.split("\n")

The tests themselves are just a few lines of straightforward Ruby. Simple tests shouldn't require complex code. Here’s a Gist of what we use currently, if you’re interested.
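
For a sense of what these checks look like, here's a minimal sketch of the alt-tag test (my own illustration, not the code from that Gist). It assumes the standard Markdown image syntax ![alt](url) and reuses the changed-files line from above:

# Hypothetical sketch of the alt-tag check; not GitHub's actual suite.
diffable_files = `git diff --name-only --diff-filter=ACMRTUXB origin/master... | grep .md`.split("\n")

missing_alt = diffable_files.select do |file|
  # Flag any Markdown image whose alt text is empty, e.g. ![](foo.png)
  File.read(file).scan(/!\[([^\]]*)\]\([^)]*\)/).any? { |(alt)| alt.strip.empty? }
end

# A non-zero exit status is what "breaks the build" for the branch.
abort "Images missing alt text in: #{missing_alt.join(', ')}" unless missing_alt.empty?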

Collaborating

With your draft up and running with CI, it's time to actually fix your egregious sentence structure problems.

The first step is to get the right eyeballs on your draft. Most of the people interested in helping edit drafts are probably already watching the repository and got a notification about your post, so you’ll usually only have to wait a few hours before meaningful copy edits are suggested to you (or applied directly to your pull request, if they're non-controversial and simple changes).

It’s also helpful to mention a team in the pull request, of course. We have a bunch of writers on @github/copy who love to help out, and it’s worth pinging the relevant team who worked on the feature with you for their input:

Mentioning teams

Editing

At that point, we can rely upon some of the cool stuff we’ve already built into GitHub: prose diffs, for example, take our Markdown and render it to something closer to what the final result will look like on the blog. We use inline diff commenting for more specific word changes, and normal comments in a pull request for high-level “how about taking the end of this post in this direction?” thoughts.

Since it’s all versioned, we have a good history of changes from the initial draft to the final draft. Can’t tell you the number of times I’ve been happy I could go back and fish out some phrasing from earlier revisions of a draft.

Sharing

The way we write blog posts is definitely a stretch of dogfooding, probably more so than other approaches we’ve taken at GitHub. Whether this jibes with how your company operates is up to you. If not, be sure to check out some other collaborative writing tools, like Google Docs. (My favorite is @natekontny’s Draft, by the way.)

There are two reasons why I wanted to share this workflow, though. The first is CI for prose, which I think is hilarious and has actually saved us a bit of time here and there.

The second, though, is the same reason why I dig sharing code on GitHub: it makes information accessible. I want more of the company to have access to more of itself. Marketing tends to stereotypically be sealed away in an ivory tower in most companies, and I think that’s what helps make it feel icky and inauthentic in a lot of cases.

By writing communications in an accessible manner internally, you benefit from a voice that’s hopefully more diverse, more impactful, and more genuine. And that’s the type of marketing that people appreciate.

January 19, 2015 12:00 AM

January 15, 2015

Ian Bicking

A Product Journal: Conception

I’m going to try to journal the process of a new product that I’m developing in Mozilla Cloud Services.

When Labs closed and I entered management I decided not to do any programming for a while. I had a lot to learn about management, and that’s what I needed to focus on. Whether I learned what I needed to, I don’t know, but I have been getting a bit tired.

We went through a fairly extensive planning process towards the end of 2014. I thought it was a good process. We didn’t end up where we started, which is a good sign – often planning processes are just documenting the conventional wisdom and status quo of a group or project, but in a critically engaged process you are open to considering and reconsidering your goals and commitments.

Mozilla is undergoing some stress right now. We have a new search deal, which is good, but we’ve been seeing declining marketshare, which is bad. And when you consider that desktop browsers are themselves a decreasing share of the market, it looks even worse.

The first planning response has been to decrease attrition among our existing users. Longer term, much of the focus has been on increasing the quality of our product. A noble goal, of course, but does it lead to growth? I suspect it can only address attrition: the people who don’t use Firefox but could will never get an opportunity to see what we are making. If you have other growth techniques, then focusing on attrition can be sufficient. Chrome, for instance, does significant advertising and has deals to side-load Chrome onto people’s computers. Mozilla doesn’t have the same resources for that kind of growth.

When we finished up the planning process I realized, damn, all our plans were about product quality. And I liked our plan! But something was missing.

This perplexed me for a while, but I didn’t really know what to make of it. Talking with a friend about it, he asked: then what do you want to make? – a seemingly obvious question that no one had asked me, and somehow hearing the question coming at me was important.

Talking through ideas, I reluctantly kept coming back to sharing. It’s the most incredibly obvious growth-oriented product area, since every use of a product is a way to implore non-users to switch. But sharing is so competitive. When I first started with Mozilla we would obsess over the problem of Facebook and Twitter and silos, and then think about it until we threw our hands up in despair.

But I’ve had this trick up my sleeve that I pull out for one project after another because I think it’s a really good trick: make a static copy of the live DOM. Mostly you just iterate over the elements, get rid of scripts and stuff, do a few other clever things, use <base href> and you are done! It’s like a screenshot, but it’s also still a webpage. I’ve been trying to do something with this for a long time. This time let’s use it for sharing…?
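
Since that trick carries the whole idea, here is a minimal sketch of it in browser JavaScript (my own names and details, not code from any of those past projects; a real version needs many more of the clever things, like inlining styles and preserving form state):

// Illustrative only: clone the live DOM, strip scripts, anchor URLs.
function freezePage(doc) {
  var copy = doc.documentElement.cloneNode(true);

  // Get rid of anything that executes.
  var scripts = copy.querySelectorAll("script, noscript");
  for (var i = 0; i < scripts.length; i++) {
    scripts[i].parentNode.removeChild(scripts[i]);
  }

  // <base href> makes relative links and images in the static copy
  // resolve against the page's original location.
  var base = doc.createElement("base");
  base.href = doc.location.href;
  var head = copy.querySelector("head");
  head.insertBefore(base, head.firstChild);

  // Screenshot-like, but still a webpage.
  return "<!DOCTYPE html>\n" + copy.outerHTML;
}

Upload the string freezePage(document) returns, give it a URL, and you have roughly the concept described next.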

So, the first attempt at a concept: freeze the page as though it’s a fancy screenshot, upload it somewhere with a URL, maybe add some fun features because now it’s disassociated from its original location. The resulting page won’t 404, you can save personalized or dynamic content, we could add highlighting or other features.

The big difference with past ideas I’ve encountered is that here we’re not trying to compete with how anyone shares things, this is a tool to improve what you share. That’s compatible with Facebook and Twitter and SMS and anything.

If you think pulling a technology out of your back pocket and building a product around it is like putting the cart before the horse, well, maybe… but you have to start somewhere.

[The next post in the series is The Tech Demo]

by Ian Bicking at January 15, 2015 06:00 AM

January 14, 2015

Greg Linden

More on what to advertise when there is no commercial intent

Some of the advertising out there is getting spooky. If you look at a product at many online stores, that product will then follow you around the web.

Go to BBC News, for example, and there will be those dishes you were looking at yesterday on Overstock. Not just any dishes, the exact same dishes. Just in case you forgot about them, there they are again next time you go. And again. And again.

A few years ago, I wrote an article, "What to advertise when there is no commercial intent?". That article suggested that, on sites like news sites, we might not have immediate commercial intent, and might have to reach back into the past to find strong commercial intent. It advocated for personalized advertising that helped people discover interesting products and deals related to strong commercial intent they had earlier.

However, this did not mean that you should just show the last product I looked at. That is refinding, not personalized recommendation. Refinding is all a lot of these ads are doing. You look at a chair, and ads follow you around the web showing you that same chair, which you already know about, over and over again. That's not discovery. That's spooky and not helpful.

Personalized ads should help people discover things they don't know about related to past purchase intent. If I look at a chair, show me highly reviewed similar furniture and good coupons and big deals related in some non-obvious way to that chair and that store. Don't just show me the same chair again. I know about that chair. Show me something I don't know. Help me discover something I haven't found yet.

I understand that the reason these companies are doing refinding is that it's hard to do anything better. Doing useful recommendations of related products and deals is hard. Helping people discover something new and interesting is hard. Personalized recommendations require a lot of data, clever algorithms, and a huge amount of work. Refinding is trivially easy.

But publishers aren't doing themselves any favors by allowing these startups to get away with this kind of useless advertising. As a recent study says, "the practice of running annoying ads can cost more money than it earns." The short-term revenue bump from these spooky refinding ads is like a sugar rush: it feels good while it lasts, but it hurts in the long term.

They can and should do better. Personalization, including personalized advertising, should be about helping people discover things they could not easily find on their own. Personalization should not be refinding, just showing what I found before, just exposing my history. Personalization should be helpful. Personalization should be discovery.

by Greg Linden (noreply@blogger.com) at January 14, 2015 04:11 PM

January 13, 2015

Ian Bicking

Being A Manager Is Lonely

Management is new for me. I have spent a lot of time focusing on the craft of programming, now I focus on the people who focus on the craft of programming.

During the fifteen years I’ve been participating in something I’ll call a developer community, I’ve seen a lot of progress. Sometimes we wax nostalgic with an assertion that no progress has been made… but progress has been made. We, as professionals, hobbyists, and passionate practitioners, understand much more about how to test, design, package, distribute, and collaborate around code. And about how to talk about it all.

I am a firm believer that much of that progress is due to the internet. There were technological advancements, sure. And there have been books teaching practice. But that’s not enough. There were incredible ideas about programming in the 70s! But there wasn’t the infrastructure to help developers assimilate those ideas.

I put more weight on people learning than on people being taught. If the internet was just a good medium for information dispersal — a better kind of book — then that is nice, but not transformational. The internet is more than that: it’s a place to discuss, and disagree, and watch others discussing. You can be provocative, and then step back and take on a more conservative opinion – a transformation most people would be too shy to commit to print. (As if a substantial portion of people have ever had the option to consider what they want to commit to print!)

I think a debate is an opportunity; seldom an opportunity to convince anyone else of what you think, but a chance to understand why you think what you do, to come to a more mature understanding, and maybe create a framework for future changes of opinion. This is why I bristle at the phrase “just choose the right tool for the job” – this phrase is an attempt to shut down the discussion about what the right tool for the job is!

This is a long digression, but I am nostalgic for how I grew into my profession. Nostalgic because now I cannot have this. I cannot discuss my job. I cannot debate the details. I cannot tell anecdotes to elucidate a point. I cannot discuss the policies I am asked to implement – the institutional instructions applied to me and through me. I can only attempt to process my experiences in isolation.

And there are good reasons for this! While this makes me sad, and though I still question if there is not another way, there are very good reasons why I cannot talk about my work. I am in a leadership position, even if only a modest and subordinate leader. There is a great deal of potential for collateral damage in what I say, especially if I talk about the things I am thinking most about. I think most about the tensions in my company, interpreting the motivations of the leadership in the company, I think about the fears I sense in my reports, the unspoken tensions about what is done, expected, aspired to. I can discuss this with the individuals involved, but they are the furthest thing from a disinterested party, and often not in a place to develop collaborative wisdom.

This is perhaps unfair. I work with very thoughtful people. Our work is grounded in a shared mission, which is a powerful thing. But it’s not enough.

Are we, as a community of managers (is there such a thing?) becoming better? Yes, some. There are management consultants and books and other material about management, and there is value in that. But it is not a discussion, it is not easy to assimilate. I don’t get to interact with a community of peers.

On the topic of learning to manage, I have listened to many episodes of Manager Tools now. I’ve learned a lot, and it’s helped me, even if they are more authoritarian than makes me comfortable. I’m writing this now after listening to a two part series: Welcome To They: Professional Subordination and Part 2.

The message in these podcasts is: it is your responsibility as a manager to support the company’s decisions. Not just to execute on them, but to support them, to communicate that support, and if you disagree then you must hide that disagreement in the service of the company. You can disagree up — though even that is fraught with danger — but you can’t disagree down. You must hold yourself apart from your team, putting a wall between you and your team. To your team you are the company, not a peer.

There is a logical consistency to the argument. There is wisdom in it. The impact of complaints filtering up is much different than the impact of complaints filtering down. In some sense as a manager you must manufacture your own consensus for decisions that you cannot affect. You are probably doing your reports a favor by positively communicating decisions, as they will be doing themselves a favor by positively engaging with those decisions. But their advice is clear: if you are asked your opinion, you must agree with the decision, maybe stoically, but you must agree, not just concede. You must speak for the company, not for yourself.

Fuck. Why would I want to sign up for this? The dictate they are giving me is literally making me sad. If it didn’t make any sense then I might feel annoyed. If I thought it represented values I did not share then I might feel angry. But I get it, and so it makes me sad.

Still, I believe in progress. I believe we can do better than we have in the past. I believe in unexplored paths, in options we aren’t ready to compare to present convention, in new ways of thinking about problems that break out of current categories. All this in management too – which is to say, new ways to form and coordinate organizations. I think those ideas are out there. But damn, I don’t know what they are, and I don’t know how to find out, because I don’t know how to talk about what we do and that’s the only place where I know how to start.

[I wrote a followup in Encouraging Positive Engagement]

by Ian Bicking at January 13, 2015 06:00 AM

January 12, 2015

Decyphering Glyph

The Glyph

As you may have seen around the Internet, I am typically represented by an obscure symbol.

The “Glyph” Glyph

I have been asked literally hundreds of times about the meaning of that symbol, and I’ve always been cryptic in response, because I felt that a full explanation is too much work. The symbol is something I invented, the invention is fairly intricate, and it takes some time to explain; describing it in person requires a degree of sustained narcissism that I’m not really comfortable with.

You all keep asking, though, and I really do appreciate the interest, so thanks to those of you who have asked over and over again: here it is. This is what the glyph means.

Ulterior Motive

I do have one other reason that I’m choosing to publish this particular tidbit now. Over the course of my life I have spent a lot of time imagining things, doing world-building for games that I have yet to make or books that I have yet to write. While I have published fairly voluminously at this point on technical topics (more than once on actual paper), as well as spoken about them at conferences, I haven’t made many of my fictional ideas public.

There are a variety of reasons for this (not the least of which that I have been gainfully employed to write about technology and nobody has ever wanted to do that for fiction) but I think the root cause is because I’m afraid that these ideas will be poorly received. I’m afraid that I’ll be judged according to the standards for the things that I’m now an expert professional at – software development – for something that I am a rank amateur at – writing fiction. So this problem is only going to get worse as I get better at the former and keep not getting practice at the latter by not publishing.

In other words, I’m trying to break free of my little hater.

So this represents the first – that I recall, at least – public sharing of any of the Divunal source material, since the Twisted Reality Demo Server was online 16 years ago. It’s definitely incomplete. Some of it will probably be bad; I know. I ask for your forbearance, and with it, hopefully I will publish more of it and thereby get better at it.

Backstory

I have been working on the same video game, off and on, for more or less my entire life. I am an extremely distractable person, so it hasn’t seen that much progress - at least not directly - in the last decade or so. I’m also relentlessly, almost pathologically committed to long-term execution of every dumb idea I’ve ever had, so any minute now I’m going to finish up with this event-driven networking thing and get back to the game. I’ll try to avoid spoilers, in case I’m lucky enough that any of you ever actually play this thing.

The symbol comes from early iterations of that game, right about the time that it was making the transition from Zork fan-fiction to something more original.

Literally translated from the in-game language, the symbol is simply an ideogram that means “person”, but its structure is considerably more nuanced than that simple description implies.

The world where Divunal takes place, Divuthan, was populated by a civilization that has had digital computers for tens of thousands of years, so their population had effectively co-evolved with automatic computing. They no longer had a concept of static, written language on anything like paper or books. Ubiquitous availability of programmable smart matter meant that the language itself was three dimensional and interactive. Almost any nuance of meaning which we would use body language or tone of voice to convey could be expressed in varying how letters were proportioned relative to each other, what angle they were presented at, and so on.

Literally every Divuthan person’s name is some variation of this ideogram.

So a static ideogram like the one I use would ambiguously reference a person, but additional information would be conveyed by diacritical marks consisting of other words, by the relative proportions of sizes, colors, and adornments of various parts of the symbol, indicating which person it was referencing.

However, the game itself is of the post-apocalyptic variety, albeit one of the more hopeful entries in that genre, since restoring life to the world is one of the player’s goals. One of the things that leads to the player’s entrance into the world is a catastrophe that has mysteriously caused most of the inhabitants to disappear and disabled or destroyed almost all of their technology.

Within the context of the culture that created the “glyph” symbol in the game world, it wasn’t really intended to be displayed in the form that you see it. The player would first see such a symbol after entering a ruined, uninhabited residential structure. A symbol like this, referring to a person, would typically have adornments and modifications indicating a specific person, and it would generally be animated in some way.

The display technology used by the Divuthan civilization was all retained-mode, because I imagined that a highly advanced display technology would minimize power cost when not in use (much like e-paper avoids bleeding power by not constantly updating the screen). When functioning normally, this was an irrelevant technical detail, of course; the displays displayed what you wanted them to display. But after a catastrophe that has disrupted network connectivity and ruined a lot of computers, this detail is important because many of the displays were still showing static snapshots of a language intended to use motion and interactivity as ways to convey information.

As the player wandered through the environment, they would find some systems that were still active, and my intent was (or “is”, I suppose, since I do still hold out hope that I’ll eventually actually make some version of this...) that the player would come to see the static, dysfunctional environment around them as melancholy, and set about restoring function to as many of these devices as possible in order to bring the environment back to life. Some of this would be represented quite concretely as time-travel puzzles later in the game actually allowed the players to mitigate aspects of the catastrophe that broke everything in the first place, thereby “resurrecting” NPCs by preventing their disappearance or death in the first place.

Coen

COEN

Coen refers to the self, the physical body, the notion of “personhood” abstractly. The minified / independent version is an ideogram for just the head, but the full version as it is presented in the “glyph” ideogram is a human body: the crook at the top is the head (facing right); the line through the middle represents the arms, and the line going down represents the legs and feet.

This is the least ambiguous and nuanced of all the symbols. The one nuance is that if used in its full form with no accompanying ideograms, it means “corpse”, since a body which can’t do anything isn’t really a person any more.

Kset

KSET

This is the trickiest ideogram to pronounce. The “ks” is meant to be voiced as a “click-hiss” noise, the “e” has a flat tone like a square wave from a synthesizer, and the “t” is very clipped. It is intended to reference the power-on sound that some of the earliest (remember: 10s of thousands of years before the main story, so it’s not like individuals have a memory of the way these things sounded) digital computers in Divuthan society made.

Honestly though if you try to do this properly it ends up sounding a lot like the English word “cassette”, which I assure you is fitting but completely unintentional.

Kset refers to algorithms and computer programs, but more generally, thought and the life of the mind.

This is a reference to the “Ee” spell power rune in the 80s Amiga game, Dungeon Master; sadly, I can’t find any online record of how the manual described it. It is an object poised on a sharp edge, ready to roll either left or right - in other words, a symbolic representation of a physical representation of the algorithmic concept of a decision point, or the computational concept of a branch, or a jump instruction.

Edec

EDEC

Edec refers to connectedness. It is an ideogram reflecting a social graph, with the individual below and their many connections above them. It’s the general term for “social relationship” but it’s also the general term for “network protocol”. When Divuthan kids form relationships, they often begin by configuring a specific protocol for their communication.

This is how boundary-setting within friendships and work environments (and, incidentally, flirting) works; they use meta-protocol messages to request expanded or specialized interactions for use within the context of their dedicated social-communication channels.

Unlike most of these other ideograms, its pronunciation is not etymologically derived from an onomatopoeia, but rather from an acronym identifying one of the first social-communication protocols (long since obsoleted).

Zenk

ZENK

“Zenk” is the ideogram for creation. It implies physical, concrete creations but denotes all types of creation, including intellectual products.

The ideogram represents the Divuthan version of an anvil, which, due to certain quirks of Divuthan materials science that are beyond the scope of this post, doubles for the generic idea of a “work surface”. So you could also think of it as a desk with two curved legs. This is the only ideogram which represents something still physically present in modern, pre-catastrophe Divuthan society. In fact, workshop surfaces are often stylized to look like a Zenk radical, as are work-oriented computer terminals (which are basically an iPad-like device the size of a dinner table).

The pronunciation, “Zenk”, is an onomatopoeia, most closely resembled in English by “clank”: the sound of a hammer striking an anvil.

Lesh

LESH

“Lesh” is the ideogram for communication. It refers to all kinds of communication - written words, telephony, video - but it implies persistence.

The bottom line represents a sheet of paper (or a mark on that sheet of paper), and the diagonal line represents an ink brush making a mark on that paper.

This predates the current co-evolutionary technological environment because, appropriately for a society featured in a text-based adventure game, the dominant cultural groups within this civilization developed a shared obsession with written communication and symbolic manipulation before they had access to devices which could digitally represent all of it.

All Together Now

There is an overarching philosophical concept of “person-ness” that this glyph embodies in Divuthan culture: although individuals vary, the things that make up a person are being (the body, coen), thinking (the mind, kset), belonging (the network, edec), making (tools, zenk) and communicating (paper and pen, lesh).

In summary, if a Divuthan were to see my little unadorned avatar icon next to something I have posted on twitter, or my blog, the overall impression that it would elicit would be something along the lines of:

“I’m just this guy, you know?”

And To Answer Your Second Question

No, I don’t know how it’s pronounced. It’s been 18 years or so and I’m still working that bit out.

by Glyph at January 12, 2015 09:00 AM

January 09, 2015

Reinventing Business

The Winter Tech Forum, Feb 23 - 27 in Crested Butte

Here's the announcement, or you can go directly to the site.

by noreply@blogger.com (Bruce Eckel) at January 09, 2015 07:05 PM

January 08, 2015

Giles Bowkett

Versioning Is A Nuanced Social Fiction; SemVer Is A Blunt Instrument

David Heinemeier Hansson said something relatively lucid and wise on Twitter recently:


To his credit, he also realized that somebody else had already said it better.


Here's the nub and the gist of Jeremy Ashkenas's Gist:

SemVer tries to compress a huge amount of information — the nature of the change, the percentage of users that will be affected by the change, the severity of the change (Is it easy to fix my code? Or do I have to rewrite everything?) — into a single number. And unsurprisingly, it's impossible for that single number to contain enough meaningful information...

Ultimately, SemVer is a false promise that appeals to many developers — the promise of pain-free, don't-have-to-think-about-it, updates to dependencies. But it simply isn't true.


It's extremely worthwhile to read the whole thing.

Here's how I see version numbers: they predate Git, and Git makes version numbers pretty stupid if you take those numbers literally, because we now use hashes like 64f2a2451381c80dff1 to identify specific versions of our code bases. Strictly speaking, version numbers are fictional. If you really want to know what version you're looking at, the answer to that question is not a number at all, but a Git hash.

But we still use version numbers. We do this for the same reason that, even if we one day replace every car on the road with an error-proof robot which is only capable of perfect driving, we will still have speed limits, brake lights, and traffic signs. It's the same reason there's an urban legend that the width of the Space Shuttle ultimately derives from the width of roads in Imperial Rome: systems often outlive their original purposes.

Version numbers were originally used to identify specific versions of a code base, but that hasn't been strictly accurate since the invention of version control systems, whose history goes back at least 43 years, to 1972. As version control systems became more and more fine-grained, version numbers diverged further and further from the actual identifiers we use to index our versioning systems, and thus "version numbers" became more and more a social fiction.

Note that this is not necessarily a bad thing. Money is a social fiction, and an incredibly useful one. But SemVer is an attempt to treat the complexities of a social fiction as if they were very deterministic and controlled.

They are not.

Which means SemVer is an attempt to brutally oversimplify an inherently complex problem.

There's a lot of good commentary on these complexities. Justin Searls gave a very good presentation which goes into why these problems are inherently complex, and inherently social.



I'm not saying that I don't think SemVer's goals are important. But I do think SemVer's a clumsy replacement for nuanced versioning, and an incomplete answer for "how do we demarcate incompatibility risks in systems made up of extremely numerous libraries written by extremely numerous people?"

Because version numbers are a social fiction, entirely distinct from the "numbers" we use to actually version our software in modern version control systems, choosing new version numbers is primarily a matter of communicating with your users. Like all communication, it is inherently complex and nuanced. If it is possible at all to reliably automate the communication of nuance, the medium of communication will probably not be a trio of numbers, because the problem space simply has far more dimensions than three.

But for the same reason, I kind of think version numbers verge on ridiculous whether they're trying to color within the SemVer lines or not. There's only so much weight you can expect a social fiction to carry before it cracks at the seams and falls apart. Even the idea of a canonical repo is a little silly in my opinion.

You can see why the canonical repo is a mistake if you look at a common antipattern on GitHub: a project is abandoned, but development continues within multiple forks of the project. Which repo is now canonical? You have to examine each fork, and discover how well it keeps up with the overall, now-decentralized progress of the project. You'll often find that Fork A does a better job with updates related to one aspect of the project, while Fork B does a better job with updates related to another aspect. And it's a manual process; no GitHub view exists which will make it particularly easy for you to determine which of the still-in-progress forks are continuing ahead of the "canonical" repo.

At the very least, in a situation like this, you have to differentiate between the original repo and the canonical one. I think that much is indisputable. But I'd argue also that the basic idea of a canonical repo operates in defiance of the entire history of human language. In fact, rumor has it that GitHub itself runs on a private fork of Rails 2, which illustrates my point perfectly, by constituting a local dialect.

(Update: GitHub ran on a private fork of Rails 2 for many years, but moved to Rails 3 in September 2014. Thanks to Florian Gilcher for the details.)

I'd like to see some anthropologists and linguists research our industry, because the modern dev world, with its countless and intricately interwoven dependencies, presents some really complex and subtle problems.

by Giles Bowkett (noreply@blogger.com) at January 08, 2015 09:21 AM

January 07, 2015

Giles Bowkett

One Major Difference Between Clojure And Common Lisp

In the summer of 2013, I attended an awesome workshop called WACM (Workshop on Algorithmic Computer Music) at the University of California at Santa Cruz. Quoting from the WACM site:

Students will learn the Lisp computer programming language and create their own composition and analysis software. The instruction team will be led by professor emeritus David Cope, noted composer, author, and programmer...

The program features intensive classes on the basic techniques of algorithmic composition and algorithmic music analysis, learning and using the computer programming language Lisp. Students will learn about Markov-based rules programs, genetic algorithms, and software modeled on the Experiments in Musical Intelligence program. Music analysis software and techniques will also be covered in depth. Many compositional approaches will be discussed in detail, including rules-based techniques, data-driven models, genetic algorithms, neural networks, fuzzy logic, mathematical modeling, and sonification. Software programs such as Max, Open Music, and others will also be presented.


It was as awesome as it sounds, with some caveats; for instance, it was a lot to learn inside of two weeks. I was one of a very small number of people there with actual programming experience; most of the attendees either had music degrees or were in the process of getting them. We worked in Common Lisp, but I came with a bunch of Clojure books (in digital form) and the goal of building stuff using Overtone.

I figured I could just convert Common Lisp code almost directly into Clojure, but it didn't work. Here's a gist I posted during the workshop:


This attempt failed for a couple different reasons, as you can see if you read the comments. First, this code assumes that (if (null list1)) in Common Lisp will be equivalent to (if (nil? list1)) in Clojure, but Clojure doesn't consider an empty list to have a nil value. Secondly, this code tries to handle lists in the classic Lisp way, with recursion, and that's not what you typically do in Clojure.
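
To make the first point concrete, here is a quick REPL check (my own example, not from the gist):

;; In Common Lisp the empty list is nil; in Clojure it is not.
(nil? '())    ;=> false
(empty? '())  ;=> true
(seq '())     ;=> nil, the idiomatic way to test for emptiness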

Clojure's reliance on the JVM makes recursion inconvenient. And Clojure uses list comprehensions, along with very sophisticated, terse destructuring assignments, to churn through lists much more gracefully than my Common Lisp code above. Those 7 lines of Common Lisp compress to 2 lines of Clojure:

(defn build [seq1 seq2]
  (for [elem1 seq1 elem2 seq2] [elem1 elem2]))

A friend of mine once said at a meetup that Clojure isn't really a Lisp; it's "a fucked-up functional language" with all kinds of weird quirks which uses Lisp syntax out of nostalgia more than anything else. To me, this isn't enough to earn Clojure that judgement, which was kinda harsh. I think I like Clojure more than he does. But, at the same time, if you're looking to translate stuff from other Lisps into Clojure, it's not going to be just copying and pasting. Beyond inconsequential, dialect-level differences like defn vs. defun, there are deeper differences which steepen the learning curve a little.

by Giles Bowkett (noreply@blogger.com) at January 07, 2015 10:34 AM

January 02, 2015

Greg Linden

Quick links

Some of the best of what I've been thinking about lately:
  • Tiny cheap satellites will provide near real-time imagery of the entire Earth to anyone who wants it, starting in about a year ([1] [2] [3])

  • Amplifying motion and color changes in video, which allows augmented perception ([1] [2])

  • Birds can hear the very low frequency sound produced by severe weather and are able to flee well in advance of incoming storms ([1])

  • Nice example of blending computer science with another field, in this case genealogy, to yield big new gains ([1])

  • "An energy gradient 1000 times greater than traditional particle accelerators" ([1])

  • People "don't want to watch commercials, are fleeing networks, hate reruns, are increasingly bored by reality programming, shun print products and, oh, by the way, don’t want to pay much for content either. Yikes." ([1] [2])

  • Everything we know Google is working on ([1])

  • Funny and informative: "Riding in a Google Self-Driving Car" ([1])

  • Google is rejecting security based on firewalls ([1] [2] [3])

  • "Whether you call it a Star Trek Universal Translator or Babel fish, Microsoft is building it, and it's incredible." ([1])

  • "Every dollar a worker earns in a research field spills over to make the economy $5 better off. Every dollar a similar worker earns in finance comes with a drain, making the economy 60 cents worse off." ([1])

  • "I’m a big believer in making effectively infinite computing resources available internally ... [Give] teams the resources they need to experiment ... All employees should be limited only by their ability rather than an absence of resources or an inability to argue convincingly for more." ([1])

  • "We think of it as a one-on-one tutor. It will test you and generate a personal lesson plan just for you." ([1])

  • "Apparently, a sufficient number of puppies can explain any computer science concept. Here we have multithreading:" ([1])

  • Fantastic to see a US president promoting computer programming to kids: "Becoming a computer scientist isn't as scary as it sounds. With hard work and a little math and science, anyone can do it." ([1])

by Greg Linden (noreply@blogger.com) at January 02, 2015 02:10 PM

December 31, 2014

Giles Bowkett

James Golick, Rest In Peace

A pic I took of James with Matz in 2010:



I met James at a Ruby conference, probably in 2008. Later, I stopped going to Ruby conferences, but we stayed in touch via email and text and very occasionally Skype. In 2011, I probably sent more drunk emails and/or texts to James than to any other person. Not 100% sure, I don't have precise statistics on this, for obvious reasons, but I hope it paints a picture.

I have very specific dietary restrictions that make travel a real pain in the ass for me, but I figured out some workarounds, and last October I went to New York for a Node.js conference. While there, I met up with James for drinks with a few other people from the Ruby world. The next day I dropped by his office because he wanted to show off his showroom. It was pretty awesome. He was stoked about his new job as CTO of Normal Ears, as well as his new apartment, and his relocation to New York in general. With a sometimes cynical sense of humor and a badass attitude, he was kind of like a born New Yorker. Like somebody who had finally found their ideal habitat.

The last thing James ever said to me was that it had been four years since we had last hung out in person, and I shouldn't make it four more. I made a mental note to figure out some excuse to come back to New York in 2015.

I really wish I was at his funeral right now.

Although I've met a ton of really smart people throughout my life, there have been very few that I ever really bothered to listen to, probably owing to my own numerous and severe personality problems. But I listened to James, I think more so than he guessed. After talking to James about jazz, I spent weeks and weeks on the harmonies and melodies in the music I made. After spying on his Twitter conversations about valgrind, I went and learned C. James was the only skeptic on Node.js I ever bothered taking seriously.

He was the best kind of friend: I would always hold myself to higher standards after talking to him.

Honestly, I cannot fucking comprehend his absence. It feels like some insane hoax. And although he was a good friend, he was a light presence in my life. For others, it must be so much worse. Utmost sympathies to his family and his other friends. This was an absolutely terrible loss.

by Giles Bowkett (noreply@blogger.com) at December 31, 2014 10:57 AM

December 30, 2014

Ian Bicking

Middleage

This year I’m starting to understand what it is to be middle aged. I think I became middle aged in 2011, but this year maybe I know what that is.

When I was young I viewed middle age through the lens of a young person. I would think: to be middle aged is all the things I’m not right now. To never be young again. To have many fewer Firsts ahead of me. And yes, I envy the idle freedom of my youth. To wander aimlessly.

But now, here, I am learning what middle age is, not just what it is not.

Death. I am losing friends, family. I am losing people who to me were permanent. Not rationally permanent, but still permanent. To grow old… here, I can now catch glimpses of what it means. Either this death is just the tip of the iceberg, or I will be the tip of someone else’s iceberg. Both are possible.

Death and responsibility. I’m now the father of two. Many young people are fathers of as many or more children than I. They are all middle aged, but many are too young to know it. I’m more than old enough to know it: these are responsibilities that can never be shed. Having children has only revealed to me my real responsibilities… to family, to friends, to my community, even my responsibility to the missing communities, the missing friendships, the missed relations.

Death and responsibility and humility. I will never meet my responsibilities; I and everyone I know will die; after that nothing can be fixed. This is the foundation of my humility. It’s not my fault. To be humble is not to be ashamed or guilty. It’s to know I am only so tall, so strong, so brave: no matter how much I may accomplish all I do is finite and any quality I have is so much smaller than the world.

But I’m alive. If I’m halfway through, I’m still but half of what I’ll be. I am all of what I know. There is still a great mystery awaiting me.

And children… the responsibility is only as heavy as their import. In them I am part of a legacy that goes back before humanity, a legacy that defines meaning itself. Of course it is heavy. It isn’t easy, this responsibility is not intended to make me happy, in it I learn that happiness is itself small.

And so I am humble. I bow before a world that owes me nothing. And of all that I ask of the world, little will be delivered. That little will be my everything. Here I stand before half of my everything and it is more than I’ll ever know and ever could know. I was never so young that I could know it, even my ignorance is too vast for me to know. I don’t even know where I stand, but maybe I know I am standing. This is my middle age.


In loving memory of my grandmother, Jeanetta Bicking, 1925-2014

by Ian Bicking at December 30, 2014 06:00 AM

December 26, 2014

ZZ85

Better Cubic Bezier Approximations for Robert Penner Easing Equations

To create motion on the screen, there are various approaches: you could hardcode values, or use a physics engine, for example. One popular and proven approach is animation with Robert Penner’s easing equations. Now, animation using easing functions defined by cubic-bezier coordinates is gaining popularity, especially for web development with CSS. In this post I’d like to share how I’ve “eased” Robert Penner’s easing equations into cubic-bezier easing functions.

TL;DR? Check out the interactive example here.

Easing


Easing (also referred to as tweening) is an interesting topic, and there are a great many good articles on the net. You're likely familiar with it if you have done some animation (be it javascript, actionscript, css, glsl, flash, or after effects); otherwise, check out this good read.

Easing Equations
So you understand easing, but you may not know who Robert Penner is and what his easing equations are. For that, I would suggest reading the chapter from his book about Motion, Tweening, and Easing. It may be old, but it is probably what popularized, or at least influenced, the way programmers go about coding or thinking about animation. Penner’s equations have been implemented in various languages and for various platforms (and where they haven't been, they'd be easy to port). Pick any popular animation library and it's probably already using his easing equations.

For myself, I love the tween.js implementation of the easing equations, not just because mrdoob uses it, but because it concisely simplifies each equation to a function of a single factor k, which you can easily use anywhere (and there are others who are rediscovering that it can be done).
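
To make "a single factor k" concrete, here is a quadratic ease-in-out in that style (paraphrased from memory, so check the tween.js source for the canonical version). k is the normalized progress from 0 to 1, and the function returns the eased value:

// Easing as a pure function of progress k in [0, 1]:
// no start values, durations or timers baked in.
function quadraticInOut(k) {
  if ((k *= 2) < 1) return 0.5 * k * k; // first half: ease in
  return -0.5 * (--k * (k - 2) - 1);    // second half: ease out
}

quadraticInOut(0);    // 0
quadraticInOut(0.25); // 0.125
quadraticInOut(0.5);  // 0.5
quadraticInOut(1);    // 1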

Cubic Bezier Curves
Cubic bezier curves are used in many applications, especially in graphics, motion graphics, and animation software (to list some: Illustrator, Sketch, After Effects). For a very simple introduction to cubic bezier curves, you can watch this video. For some reason, I find cubic beziers both simple and complex: a huge number of possibilities can be created with just 2 control points.

Cubic Bezier Tools
That would explain the number of popular cubic bezier tools for creating CSS animations. It was probably Matthew Lein’s Ceaser tool that introduced the concept of approximating Penner’s equations with cubic beziers (tweaked by hand, if I recall correctly from a Twitter conversation with Lea Verou).

Experimentations
I eventually found that the Ceaser easing functions made it into places like LESS css mixins, Sass/SCSS, and Easings.net. I started extracting these values into a JSON format so they'd be easy to use in javascript.

The next thing I did was try plotting the cubic beziers on canvas and adding animations for visualization. What I observed was that Ceaser’s parameters typically followed the shape of Penner’s equations, but the resulting values were still an approximation. I stacked graphs of Robert Penner’s equations from tween.js on top of Ceaser’s, and there was some amount of deviation, which is amplified in side-by-side animations or when charted on a larger canvas. One way to improve these easing functions would be to import them into a tool and tweak the control points until they match up better.

CustomMediaTimingFunction
Fortunately, I found another library, KinkumaDesign's Objective-C CustomMediaTimingFunction, that also tries to approximate Robert Penner’s easing equations. My guess is that native cubic bezier support with CAMediaTimingFunction on iOS/Mac was another reason someone wanted to convert Penner’s equations to cubic bezier functions.

As with Ceaser’s equations, I extracted them to javascript and drew them on the canvas alongside Penner’s equations. There were still some noticeable differences, but I was pleasantly surprised to find them more accurate to Penner’s equations than Ceaser's. I did find that KinkumaDesign’s equations were pretty far off for the ease-out equations (possibly due to a different definition of what ease-out is). Another difference was that in addition to ease-in-out, there were ease-out-in easing functions. So overall KinkumaDesign’s values were pretty good, and perhaps all I needed to do was hand-tweak them to make them better.

Curve Fitting
But I wasn’t satisfied with the thought of hand-tweaking these values, even with a tool, and started thinking about how I could fit a bezier curve to match Penner’s equations. I remembered a curve-fitting algorithm by Philip J. Schneider (from the 1990 Graphics Gems article “An Algorithm for Automatically Fitting Digitized Curves”). I had actually ported his C code to JS before, but this time I just grabbed this gist, generated points from Penner’s equations, and tested whether the curve-fitting algorithm worked. The result: the curve fitting generated bezier curves almost similar in shape to the originals, but the differences were too great to be accurate. It might have been possible to tweak the curve-fitting parameters, but I didn’t want to go down that route and decided to try the brute force approach.

Brute Force
I decided that humans may not be precise when it comes to adjusting parameters, but the computer might be fast enough to try all the different combinations. The idea I had for brute forcing is this: there are 2 control points, which have a total of 4 values. If we subdivide each value into, say, 10 steps, we have 10×10 = 100 possibilities for each control point, and an estimated 100 * 100 = 10,000 total combinations to check. For each generated cubic-bezier combination, I generate its range of values and compare it with the range of values generated by Penner’s equation. To compare the ranges, the differences are squared individually and then summed; across combinations, only the coordinates which give the least sum of squared differences are kept. I ran this with node.js, and subdivisions of 10-20 ran pretty quickly. Subdivisions of 25 (giving a precision of 0.04) started to give pretty accurate approximations, although the process gets slightly slower. For subdivisions of 50 (0.02 precision), it took a couple of minutes to finish. In case you’re interested, the related node.js script (which is just a couple hundred lines) is here.
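
As a concrete sketch of that search (a compressed restatement, not the actual script linked above; this version only walks the clipped 0..1 grid):

// Brute-force fit: walk a grid of candidate control points, score each
// candidate against a Penner function, keep the best. Sampling the
// bezier at parameter t gives a point (x(t), y(t)); comparing y(t)
// against penner(x(t)) sidesteps re-parameterizing x to t.
function bez(t, a, b) { // one coordinate, endpoints fixed at 0 and 1
  var u = 1 - t;
  return 3 * u * u * t * a + 3 * u * t * t * b + t * t * t;
}

function score(c, penner) { // c = [x1, y1, x2, y2]
  var sum = 0;
  for (var i = 0; i <= 100; i++) {
    var t = i / 100;
    var d = bez(t, c[1], c[3]) - penner(bez(t, c[0], c[2]));
    sum += d * d; // least sum of squared differences wins
  }
  return sum;
}

function bruteForce(penner, steps) {
  var best = Infinity, win = null;
  for (var a = 0; a <= steps; a++)
    for (var b = 0; b <= steps; b++)
      for (var c = 0; c <= steps; c++)
        for (var d = 0; d <= steps; d++) {
          var cand = [a / steps, b / steps, c / steps, d / steps];
          var err = score(cand, penner);
          if (err < best) { best = err; win = cand; }
        }
  return win;
}

bruteForce(function (k) { return k * k; }, 25); // fit QuadIn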

So here are the brute-force results, which I think are pretty satisfactory:

QuadIn: [ 0.26, 0, 0.6, 0.2 ]
QuadOut: [ 0.4, 0.8, 0.74, 1 ]
QuadInOut: [ 0.48, 0.04, 0.52, 0.96 ]
CubicIn: [ 0.32, 0, 0.66, -0.02 ]
CubicOut: [ 0.34, 1.02, 0.68, 1 ]
CubicInOut: [ 0.62, -0.04, 0.38, 1.04 ]
QuartIn: [ 0.46, 0, 0.74, -0.04 ]
QuartOut: [ 0.26, 1.04, 0.54, 1 ]
QuartInOut: [ 0.7, -0.1, 0.3, 1.1 ]
QuintIn: [ 0.52, 0, 0.78, -0.1 ]
QuintOut: [ 0.22, 1.1, 0.48, 1 ]
QuintInOut: [ 0.76, -0.14, 0.24, 1.14 ]
SineIn: [ 0.32, 0, 0.6, 0.36 ]
SineOut: [ 0.4, 0.64, 0.68, 1 ]
SineInOut: [ 0.36, 0, 0.64, 1 ]
ExpoIn: [ 0.62, 0.02, 0.84, -0.08 ]
ExpoOut: [ 0.16, 1.08, 0.38, 0.98 ]
ExpoInOut: [ 0.84, -0.12, 0.16, 1.12 ]
CircIn: [ 0.54, 0, 1, 0.44 ]
CircOut: [ 0, 0.56, 0.46, 1 ]
CircInOut: [ 0.88, 0.14, 0.12, 0.86 ]

If you prefer to keep values clipped to 0..1, here are the parameters:

QuadIn: [ 0.26, 0, 0.6, 0.2 ]
QuadOut: [ 0.4, 0.8, 0.74, 1 ]
QuadInOut: [ 0.48, 0.04, 0.52, 0.96 ]
CubicIn: [ 0.4, 0, 0.68, 0.06 ]
CubicOut: [ 0.32, 0.94, 0.6, 1 ]
CubicInOut: [ 0.66, 0, 0.34, 1 ]
QuartIn: [ 0.52, 0, 0.74, 0 ]
QuartOut: [ 0.26, 1, 0.48, 1 ]
QuartInOut: [ 0.76, 0, 0.24, 1 ]
QuintIn: [ 0.64, 0, 0.78, 0 ]
QuintOut: [ 0.22, 1, 0.36, 1 ]
QuintInOut: [ 0.84, 0, 0.16, 1 ]
SineIn: [ 0.32, 0, 0.6, 0.36 ]
SineOut: [ 0.4, 0.64, 0.68, 1 ]
SineInOut: [ 0.36, 0, 0.64, 1 ]
ExpoIn: [ 0.66, 0, 0.86, 0 ]
ExpoOut: [ 0.14, 1, 0.34, 1 ]
ExpoInOut: [ 0.9, 0, 0.1, 1 ]
CircIn: [ 0.54, 0, 1, 0.44 ]
CircOut: [ 0, 0.56, 0.46, 1 ]
CircInOut: [ 0.88, 0.14, 0.12, 0.86 ]


Observations

  • ease-in and ease-out typically fit pretty well
  • It is more difficult to fit the higher-powered ease-in-outs, e.g. QuintInOut. Sometimes the best fit creates a little bounce at the edges, in which case it's better to use the clipped parameters or to adjust the control points a little. There are also certain easing equations, e.g. bounce, which are not directly portable; for those it might be better to chain multiple cubic-bezier easing functions.
  • getYforX() for a cubic bezier function is not simply P0 * ( 1 - u )^3 + P1 * 3 * u * ( 1 - u )^2 + P2 * 3 * u^2 * ( 1 - u ) + P3 * u^3. X needs to be re-parameterized to t before getting Y (see the sketch after this list). A couple of npm modules are available if you're too lazy to implement this.
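
Here's a rough illustration of that re-parameterization (my own sketch, not one of those npm modules); bisection works because x(t) is monotonic whenever both control x-values lie in 0..1:

// Solve x(t) = x by bisection, then evaluate y at that t.
// bez() is the single-coordinate cubic from the brute-force sketch above.
function getYforX(x, x1, y1, x2, y2) {
  var lo = 0, hi = 1, t = x;
  for (var i = 0; i < 40; i++) { // 40 halvings: ~1e-12 precision
    t = (lo + hi) / 2;
    if (bez(t, x1, x2) < x) lo = t; else hi = t;
  }
  return bez(t, y1, y2);
}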

Possible Improvements

  1. smarter brute force – if we were to create more subdivisions for more accurate results, the entire brute-force process would take exponentially longer. To reduce this time, we could a) run it on multiple processors, b) run it on the GPU, e.g. https://github.com/andyhall/glsl-projectron, or c) be a little smarter about which coordinates to run the brute force on
  2. smarter fitting – perhaps there are better curve-fitting algorithms that would make this brute-force approach less relevant, e.g. this

While that might be an academic exercise, I don't think it's really necessary right now. It's probably more important to think about what we can do with this, and where to take it going forward.

Conclusion & Going Forward
So in this post I've written a new cubic bezier tool, suggested a new set of cubic bezier approximations for Robert Penner's easing equations, and shown a little of that process.

What I've done here might be simply an improvement on Ceaser's and KinkumaDesign's parameters, or you could think of it as a "purer" mapping of Penner's equations onto cubic beziers. I started thinking about all this because I wanted to use Penner's equations in my animation tool Timeliner while keeping them easily editable as curves. There might be other curves and splines to consider, but mapping Penner's equations to cubic bezier curves is a start. There is also a thread on better integrating editable and non-editable easing in Blender which is worth considering.

Hopefully, someone finds this useful. The code is on github. Sorry if it looks like a mess; it was hacked together overnight.

Merry Christmas and have a happy new year!

by Zz85 at December 26, 2014 03:35 PM

December 22, 2014

Bret Victor

Biosimilarity

Enzymatic interaction

It is traditional for me, around this time of year, to provide holiday (brain) candy for folks in my technical community. This year's offering comes from a conversation with Marius Buliga about chemlambda in which he referred to β-reduction in the λ-calculus as an enzyme. This got me to thinking about explicitly modeling the COMM rule in the π-calculus as an enzyme. It turns out you can do this quite easily and it has a lot of applications!

For simplicity, i'll use the reflective, higher-order π-calculus, but extend the basic calculus with a primitive process, COMM:

P, Q ::= 0 | COMM | x?( y )P | x!( P ) | *x | P|Q
x, y ::= @P

(Note that i've adopted a slightly friendlier syntax since i first developed the reflective π-calculus. Instead of writing « » and » «, i now write @P and *x. This echoes the C and C++ programming language notation for acquiring a reference and dereferencing a pointer, respectively -- which is the key insight of the reflective calculus. We can get a kind of pointer arithmetic for channel names if we treat them as quoted process terms.)

Structural equivalence includes alpha-equivalence and makes ( P, | , 0 ) a commutative monoid. Of interest to us is

COMM | P = P | COMM

So, COMM mixes into a parallel composition.

Notice that COMM cannot penetrate an abstraction barrier, i.e. 


x?( y )P | COMM != x?( y )( P | COMM )
Enzymatic reduction in reflective π-calculus

There is an interesting design choice as to whether

x!( Q ) | COMM = x!( Q | COMM ) 

Treating COMM as enzymatic, the COMM-rule

x?( y )P | COMM | x!( Q ) → P{ @Q/y } | COMM

results in the standard dynamics, if there is just one COMM process in the expression. If there are more COMM processes, then we get a spectrum of so-called true concurrency semantics, with the usual true concurrency semantics identified with the limiting equation

COMM = COMM | COMM

which allows us to saturate a parallel composition with COMM in every place where an interaction could take place.

Writing COMM^i for COMM | ... | COMM ( i times ), another set of dynamics arises from rules of the form

x?( y )P | COMM^n | x!( Q ) → P{ @Q/y } | COMM^m

with m < n

This bears resemblance to the Ethereum platform's idea of "gas" for a smart contract's calculation. Mike Stay points out that associating a cost with communication makes it look like an action in physics. Notice that you can get some very interesting dynamics with COMM-consuming interaction rules in the presence of COMM-releasing abstractions.

One example interpretation that i think works well and provides significant motivation is to treat COMM as a compute node. A finite number of COMMs represents a finite number of nodes on which processes can run. Likewise, it is possible to model acquisition and release of nodes with cost-based COMM-rules and certain code factorings. For example, call a process P inert if P | COMM cannot reduce, and purely inert if P is inert and P != Q | COMM for any Q. Now, suppose

COMM: x?( y )P | COMM^n | x!( Q ) → P{ @( Q | COMM )/y } | COMM^(n-1)

and only allow purely inert processes in output position, i.e. x!( Q ) with Q inert. Then interaction acquires a node, and dereference, *x, releases one. For example,

x?( y )( *y ) | COMM | x!( Q )
→
*@( COMM | Q )
→
COMM | Q
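
As a toy executable reading of that node-acquiring rule (a sketch in javascript, entirely outside the formal calculus): a parallel composition is an array of terms, a reduction requires a COMM token in the mix and consumes it, and the payload arrives carrying a fresh COMM, mirroring P{ @( Q | COMM )/y } | COMM^(n-1).

var COMM = 'COMM';
function input(chan, body)     { return { chan: chan, body: body }; }       // x?( y )P
function output(chan, payload) { return { chan: chan, payload: payload }; } // x!( Q )

function step(terms) {
  if (terms.indexOf(COMM) < 0) return null;  // no enzyme, no reduction
  for (var i = 0; i < terms.length; i++) {
    var recv = terms[i];
    if (typeof recv.body !== 'function') continue;
    var send = terms.find(function (t) {
      return t !== recv && t.chan === recv.chan && t.payload !== undefined;
    });
    if (!send) continue;
    var rest = terms.filter(function (t) { return t !== recv && t !== send; });
    rest.splice(rest.indexOf(COMM), 1);      // interaction consumes a COMM...
    return rest.concat(recv.body(send.payload.concat(COMM))); // ...and @( Q | COMM ) releases one
  }
  return null;
}

// x?( y )( *y ) | COMM | x!( Q ) reduces to [ 'Q', 'COMM' ], i.e. Q | COMM
step([ input('x', function (q) { return q; }), COMM, output('x', ['Q']) ]);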

In terms of possible higher category theory models, the reification of the 2-cell-like COMM-rule as a 1-cell-like COMM process ought to provide a fairly simple way to make one of Mike Stay's recent higher category models work without suffering too many reductions, which has been the principal stumbling block of late. Notice that in this setting we can treat x?( y )( COMM | P ) as an explicit annotation by the programmer that interaction under an abstraction is admitted. If they want to freeze abstraction contexts, they just don't put COMM under an abstraction where an interaction could take place. In more detail

x?( y )( u?( v )P | COMM | u!( Q ) )

constitutes a programmer-supplied annotation that it's okay to reduce under the ( y )-abstraction, while

x?( y )( u?( v )P | u!( Q ) )

is an indication that the term under this abstraction is frozen, until it is released by communication on x.


Compositional interpretations of physical processes


This very simple insight of Marius', that interaction is enzymatically enabled, creates a wide variety of dynamics with a broad range of applications! Merry Christmas to all and a Happy New Year!

by leithaus (noreply@blogger.com) at December 22, 2014 11:53 AM

December 17, 2014

Giles Bowkett

RobotsConf 2014: ZOMG, The Swag

At Empire Node, the swag bag included a portable USB charger. It kind of boggled my mind, because it was almost the only truly useful thing I'd ever received in a swag bag. That was a few months ago.

Since then, my concept of conference swag has kind of exploded. The swag bag at RobotsConf was insane. In fact, the RobotsConf freebies were already crazy delicious before the conf even began.

About a month before the conference, RobotsConf sent a sort of "care package" with a Spark Core, a Spark-branded Moleskine notebook (or possibly just a Spark-branded Moleskine-style notebook?), and a RobotsConf sticker.

At the conference, the swag bag contained a Pebble watch, an Electric Imp, an ARDX kit (Arduino, breadboard, speaker, dials, buttons, wires, resistors, and LEDs), a SumoBot kit (wheels, servos, body, etc.), a little black toolbox, a Ziplock bag with several AA batteries, and a RobotsConf beanie. There were a ton of stickers, of course, and you could also pick up a bright yellow Johnny Five shirt.



Many people embedded LEDs in the eyes of their RobotsConf hats, but I wasn't willing to risk it. I live in an area with actual snow these days, so I plan to get a lot of practical usefulness out of this hat.

Spark handed out additional free Spark Cores at the conference, so I actually came home with two Spark Cores. This means, in a sense, that I got five new computers out of this: the Pebble, the Arduino in the ARDX kit, the Electric Imp, and both Spark Cores. Really just microcontrollers, but still exciting. And of these five devices, the Pebble can connect to Bluetooth, while the Imp and Spark boxes can connect to WiFi.

Technical Machines didn't include Tessel microcontrollers in the swag bag, but they did set up a table and loan out a bunch of microcontrollers and modules to play with. I saw one developer code up a point-and-shoot camera in CoffeeScript, in about a minute. (The Tessel runs JavaScript, including most Node code.)

Likewise, although you couldn't take them home on the plane with you, there were a bunch of 3D printers you could experiment with. All in all, an amazing geeky playground. The only downside is that it presented a tough act to follow for Santa Claus (and/or Hanukkah Harry).

by Giles Bowkett (noreply@blogger.com) at December 17, 2014 05:03 PM

December 12, 2014

Giles Bowkett

Nodevember: Make Art, Not Apps

I nearly went to this conference, but I'm aiming for a max of one conf per every two months (and even this schedule will likely let up after a while). Even kicked around the idea of sponsoring it, because it looked cool. Turns out, it was cool.

by Giles Bowkett (noreply@blogger.com) at December 12, 2014 04:24 PM

December 10, 2014

Terry Jones

I go shopping for a compass, then my Sonos decides it needs one too

Last night I spent some time online looking to buy a compass. I looked at many of the Suunto models. Also yesterday, I installed Little Snitch after noticing that an unknown gamed process wanted to establish a TCP/IP connection.

Anyway… a few minutes ago, 10 or 11 hours after I eventually bought a compass, a message pops up from Little Snitch telling me that the Mac OS X desktop Sonos app was trying to open a connection to ns.suunto.com. See image on right (click for the full-sized version).

WTF?!?!

Can someone explain how that works? Either Little Snitch is mightily confused or… or what? Has the Sonos app been digging around in my Chrome browser history or cache or cookie jar? Is Chrome somehow complicit locally? Or is it something with cookies (but that would require Sonos to be accessing and sending cookies stored by Chrome)? Or… what?

And, why ns.suunto.com? There’s an HTTP server there, but its / resource is not very informative:

$ curl -S -v http://ns.suunto.com
* Rebuilt URL to: http://ns.suunto.com/
* Hostname was NOT found in DNS cache
*   Trying 23.63.99.202...
* Connected to ns.suunto.com (23.63.99.202) port 80 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.37.1
> Host: ns.suunto.com
> Accept: */*
>
< HTTP/1.1 404 Not Found
* Server Apache is not blacklisted
< Server: Apache
< Content-Type: text/html; charset=iso-8859-1
< Date: Wed, 10 Dec 2014 16:02:47 GMT
< Content-Length: 16
< Connection: keep-alive
<
* Connection #0 to host ns.suunto.com left intact
File not found."⏎

Unfortunately, Little Snitch doesn't tell me the full URL that the Sonos app was trying to access.

Anyone care to speculate what's going on here?

by terry at December 10, 2014 04:12 PM

December 09, 2014

Giles Bowkett

RobotsConf 2014: Simple Hack with Leap Motion and Parrot Drone

RobotsConf was great fun, and oddly enough, it reminded me of Burning Man in two ways.

First, it was overwhelming, in a good way. Second, there was so much to do that the smart way to approach it is probably the same as the smart way to approach Burning Man: go without a plan the first year, make all kinds of crazy plans and projects every subsequent year.

The conf was split between a massive hackerspace, a roughly-as-big lecture space, and a small drone space. You could hop between any or all of these rooms, or sit at various tables outside them. The table space was like half hallway, half catering zone. Outside, there were more tables, and people also set up a rocket, a small servo-driven flamethrower (consisting of a Zippo and a can of hairspray), and a kiddie pool for robot boats.

I arrived with no specific plans, and spent most of the first day in lectures, learning the basics of electronics, robotics, and wearable tech. But I also took the time, that first day, to link up a Parrot AR drone with a Leap Motion controller.


Sorry, Rubyists: the code on this one was all Node.js. Artoo has support for both the Leap Motion and the Parrot AR, but Node has embraced hardware hacking, where Ruby (except for Artoo) kinda remains stuck in 2010.

I started with this code, from the node-ar-drone README:
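
(Roughly, from memory; see the node-ar-drone README for the canonical version.)

var arDrone = require('ar-drone');
var client  = arDrone.createClient();

client.takeoff();

client
  .after(5000, function () { this.clockwise(0.5); })          // wander for a bit
  .after(3000, function () { this.animate('flipLeft', 15); }) // the "sideflip"
  .after(1000, function () { this.stop(); this.land(); });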


With this, I had the drone taking off into the air, wandering around for a bit, doing a backflip (or I think, more accurately, a sideflip), and then returning to land. Then I plugged in the Leap Motion, and, using Leap's Node library, it was very easy to establish basic control over the drone. The control was literally manual, in the classic sense of the term - I was controlling a flying robot with my hand.

Here's the code:
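
(A sketch of the described behavior, using the leapjs and ar-drone npm modules; the altitude thresholds are made up, and the original gist may differ.)

var Leap    = require('leapjs');
var arDrone = require('ar-drone');

var client = arDrone.createClient();
var flying = false;

Leap.loop({ enableGestures: true }, function (frame) {
  if (frame.hands.length === 0) {            // hand removed: land
    if (flying) { client.land(); flying = false; }
    return;
  }
  if (!flying) { client.takeoff(); flying = true; }

  var palmHeight = frame.hands[0].palmPosition[1]; // y axis, in millimeters
  if (palmHeight > 250)      client.up(0.3);       // hand high above the Leap
  else if (palmHeight < 150) client.down(0.3);     // hand low
  else                       client.stop();        // hover

  frame.gestures.forEach(function (gesture) {      // never produced anything useful
    console.log('gesture:', gesture.type);
  });
});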


As you can see, it's very straightforward.

With this code, when you first put your hand over the Leap Motion, the drone takes off. If you hold your hand high above the Leap, the drone ascends; if you lower it, the drone descends. If you take your hand away completely, the drone lands.

The frame.gestures.forEach bit was frankly just a failure: I got no useful results from the gesture recognition whatsoever. I want to be fair to Leap, though, so I'll just point out that I hacked this whole thing together in about twenty minutes. (Another caveat, though: those twenty minutes came after about an hour of utter puzzlement, which ended when I enabled "track web apps" in the Leap's settings; I also got a bunch of help on this from Andrew Stewart of the Hybrid Group.)

Anyway, I had nothing but easy sailing when it came to the stuff which tracks pointables and obtains their X, Y, and Z co-ordinates. I ran off to a lecture after I got this far, but it would be very easy to add turning based on X position, or forward movement based on Z position. If you read the code, you can probably even imagine how that would look. Also, if I'd had a bit more time, I think I probably could have synced flip animations to gestures.

In fact, I've lost the link, but I believe I saw that a pre-teen girl named Super Awesome Maker Sylvia did some of these things at RobotsConf last year, and a GitHub project already exists for controlling Parrot drones with Leap Motion controllers (it's a Node library, of course). There was a small but clearly thrilled contingent of young kids at RobotsConf, by the way, and it was pretty amazing to see all the stuff they were doing.

by Giles Bowkett (noreply@blogger.com) at December 09, 2014 11:29 AM

Decyphering Glyph

Docker Dev to Prod in Just A Few Easy Steps

It seems that docker is all the rage these days. Docker has popularized a powerful paradigm for repeatable, isolated deployments of pretty much any application you can run on Linux. There are numerous highly sophisticated orchestration systems which can leverage Docker to deploy applications at massive scale. At the other end of the spectrum, there are quick ways to get started with automated deployment or orchestrated multi-container development environments.

When you're just getting started, this dazzling array of tools can be as bewildering as it is impressive.

A big part of the promise of docker is that you can build your app in a standard format on any computer, anywhere, and then run it. As docker.com puts it:

“... run the same app, unchanged, on laptops, data center VMs, and any cloud ...”

So when I started approaching docker, my first thought was: before I mess around with any of this deployment automation stuff, how do I just get an arbitrary docker container that I've built and tested on my laptop shipped into the cloud?

There are a few documented options that I came across, but they all had drawbacks, and didn't really make the ideal tradeoff for just starting out:

  1. I could push my image up to the public registry and then pull it down. While this works for me on open source projects, it doesn't really generalize.
  2. I could run my own registry on a server, and push it there. I can either run it plain-text and risk the unfortunate security implications that implies, deal with the administrative hassle of running my own certificate authority and propagating trust out to my deployment node, or spend money on a real TLS certificate. Since I'm just starting out, I don't want to deal with any of these hassles right away.
  3. I could re-run the build on every host where I intend to run the application. This is easy and repeatable, but unfortunately it means that I'm missing part of that great promise of docker - I'm running potentially subtly different images in development, test, and production.

I think I have figured out a fourth option that is super fast to get started with, as well as being reasonably secure.

What I have done is:

  1. run a local registry
  2. build an image locally - testing it until it works as desired
  3. push the image to that registry
  4. use SSH port forwarding to "pull" that image onto a cloud server, from my laptop

Before running the registry, you should set aside a persistent location for the registry's storage. Since I'm using boot2docker, I stuck this in my home directory, like so:

me@laptop$ mkdir -p ~/Documents/Docker/Registry

To run the registry, you need to do this:

me@laptop$ docker pull registry
...
Status: Image is up to date for registry:latest
me@laptop$ docker run --name registry --rm=true -p 5000:5000 \
    -e GUNICORN_OPTS=[--preload] \
    -e STORAGE_PATH=/registry \
    -v "$HOME/Documents/Docker/Registry:/registry" \
    registry

To briefly explain each of these arguments: --name is just there so I can quickly identify this as my registry container in docker ps and the like; --rm=true is there so that I don't create detritus from subsequent runs of this container; -p 5000:5000 exposes the registry to the docker host; -e GUNICORN_OPTS=[--preload] is a workaround for a small bug; STORAGE_PATH=/registry tells the registry to look in /registry for its images; and the -v option points /registry at the directory we previously created above.

It's important to understand that this registry container only needs to be running for the duration of the commands below. Spin it up, push and pull your images, and then you can just shut it down.

Next, you want to build your image, tagging it with localhost.localdomain.

me@laptop$ cd MyDockerApp
me@laptop$ docker build -t localhost.localdomain:5000/mydockerapp .

Assuming the image builds without incident, the next step is to send the image to your registry.

me@laptop$ docker push localhost.localdomain:5000/mydockerapp

Once that has completed, it's time to "pull" the image on your cloud machine, which - again, if you're using boot2docker, like me, can be done like so:

me@laptop$ ssh -t -R 127.0.0.1:5000:"$(boot2docker ip 2>/dev/null)":5000 \
    mycloudserver.example.com \
    'docker pull localhost.localdomain:5000/mydockerapp'

If you're on Linux and simply running Docker on a local host, then you don't need the "boot2docker" command:

me@laptop$ ssh -t -R 127.0.0.1:5000:127.0.0.1:5000 \
    mycloudserver.example.com \
    'docker pull localhost.localdomain:5000/mydockerapp'

Finally, you can now run this image on your cloud server. You will of course need to decide on appropriate configuration options for your applications such as -p, -v, and -e:

me@laptop$ ssh mycloudserver.example.com \
    'docker run -d --restart=always --name=mydockerapp \
        -p ... -v ... -e ... \
        localhost.localdomain:5000/mydockerapp'

To avoid network round trips, you can even run the previous two steps as a single command:

me@laptop$ ssh -t -R 127.0.0.1:5000:"$(boot2docker ip 2>/dev/null)":5000 \
    mycloudserver.example.com \
    'docker pull localhost.localdomain:5000/mydockerapp && \
     docker run -d --restart=always --name=mydockerapp \
        -p ... -v ... -e ... \
        localhost.localdomain:5000/mydockerapp'

I would not recommend setting up any intense production workloads this way; those orchestration tools I mentioned at the beginning of this article exist for a reason, and if you need to manage a cluster of servers you should probably take the time to learn how to set up and manage one of them.

However, as far as I know, there's also nothing wrong with putting your application into production this way. If you have a simple single-container application, then this is a reasonably robust way to run it: the docker daemon will take care of restarting it if your machine crashes, and running this command again (with a docker rm -f mydockerapp before docker run) will re-deploy it in a clean, reproducible way.

So if you're getting started exploring docker and you're not sure how to get a couple of apps up and running just to give it a spin, hopefully this can set you on your way quickly!

(Many thanks to my employer, Rackspace, for sponsoring the time for me to write this post. Thanks also to Jean-Paul Calderone, Alex Gaynor, and Julian Berman for their thoughtful review. Any errors are surely my own.)

by Glyph at December 09, 2014 02:14 AM

December 04, 2014

Giles Bowkett

Questions Worth Asking

What is programming?



How do you choose an open source project?



What are the consequences of open source?


by Giles Bowkett (noreply@blogger.com) at December 04, 2014 07:20 PM

December 03, 2014

Greg Linden

More quick links

More of what caught my attention lately:
  • "Make infinite computing resources available internally ... Give teams the resources they need to experiment ... All employees should be limited only by their ability rather than an absence of resources or an inability to argue convincingly for more." ([1] [2])

  • "Accept that failures will always happen and guard ... [against] cascading failures by purposefully causing failures" ([1] [2])

  • "The importance of Netflix’s recommendation engine is actually underestimated" ([1] [2])

  • Courts are getting more skeptical about software patents ([1])

  • Nice way of putting it: "The prevailing business culture in the banking industry weakens and undermines the honesty norm" ([1] [2])

  • "[On] the overcrowded, overstuffed, slow-loading web, you are bound to see a carnival of pop-ups and interstitials — interim ad pages served up before or after your desired content — and scammy come-ons daring you to click. Is it any wonder, really, that this place is dying?" ([1])

  • A very effective social engineering attack "compromised the accounts of C-level executives, legal counsel, regulatory and compliance personnel, scientists, and advisors of more than 100 [major] companies" ([1])

  • An 11 hour Microsoft Azure cloud service outage that impacted just about everyone using it worldwide, including internal users like MSN.com and Xbox Live ([1])

  • Stack traces at arbitrary break points in Google's cloud services running live with near zero overhead ([1] [2])

  • Free SSL certificates (for HTTPS) from a non-profit out of EFF, Mozilla, Cisco, and Akamai ([1])

  • The journal Nature makes its papers free for everyone to read ([1] [2])

  • Combining neural networks like components yields new breakthroughs ([1] [2])

  • Robotics guru Rodney Brooks says, "Relax. Chill ... [The press has a] misunderstanding of how far we really are from having volitional or intentional artificially intelligent beings." ([1])

  • Undersea drones are enabling new feats: "The first time ... the black sea devil anglerfish ... has been filmed alive and in its natural habitat" ([1])

  • Bats jam the sonar of other bats when they're both trying to catch the same insect. It's like a dogfight up there. ([1])

  • Great tutorial on CSS and HTML just launched by Khan Academy and jQuery's John Resig ([1])

  • Fun visualization of the periodic table by how common the elements are in the earth's crust, ocean, human body, and sun ([1])

  • Hilarious parody of the Amazon Echo promotional video ([1])

  • South Park has a surprisingly good (and funny) criticism of freemium games that gets all the issues correct around preying on people with a tendency toward compulsive gambling ([1] [2])

  • Great Dilbert comic on how engineers think of marketing ([1])

  • Good Xkcd comic on over-optimization ([1])

  • Loved this SMBC comic: "He said I wasn't very good at math" ([1]) 

by Greg Linden (noreply@blogger.com) at December 03, 2014 05:50 PM

December 02, 2014

Alarming Development

New Subtext screencast

We’ve published the final videos from the Future Programming Workshop. We will also be publishing a final report about our experiences and lessons from the workshop.

Included in the videos is my latest screencast about Subtext: Two-way Dataflow. The abstract:

Subtext is an experiment to radically simplify application programming. The goal is to combine the power of frameworks like Rails and iOS with the simplicity of a spreadsheet. The standard MVC architecture of such frameworks makes execution order hard to understand, a problem colloquially called callback hell. I propose a new approach called two-way dataflow, which breaks the program into cyclic output and input phases. Output is handled with traditional one-way dataflow, which is realized here as a form of pure lazy functional programming. Input is governed by a new semantics called one-way action which is a highly restricted form of event-driven imperative programming. These restrictions statically order event execution to avoid callback hell. Two-way dataflow has been designed not only to simplify the semantics of application programming but also to support a presentation that, like a spreadsheet, provides a fully WYSIWYG programming experience.

Comments welcome.

by Jonathan Edwards at December 02, 2014 03:51 PM

Giles Bowkett

Two Recent Animations

I've been studying animation recently. As we're approaching the end of the semester, I've had to turn in my final projects. Here they are.

For this one, I created the video in Cinema 4D and Adobe After Effects, and I made the music with Ableton Live. Caveat: it looks better on my machine. YouTube's compression has not been as kind as I would have hoped.



The assignment for this one was to create an intro credits sequence for an existing film, and I chose Scott Pilgrim vs. The World. The film's based on a series of graphic novels, which I own, so I scanned images from the comics, tweaked them in Photoshop, and added color by animating brush strokes in After Effects.

The soundtrack's a song called "Scott Pilgrim," by the Canadian band Plumtree. The author of the comics named the character after the song.



I figured I had aced the basic skill of coloring in a black-and-white image back when I was five, with crayons, but it was actually an arduous process. If you count brush stroke effects as layers, two comps in this animation had over 300 layers.

Ironically, I picked this approach because I had limited time. I had to turn in both projects a little early. On the last day of class, I'll be on an airplane back from RobotsConf. I have to give a big shout-out to my employer, Panda Strike, not just for sending me to this awesome conference, which I'm very excited about, but also for being the kind of company which believes in flexible scheduling and remote work. Without flexible scheduling and remote work, I would have a much harder time studying animation.

by Giles Bowkett (noreply@blogger.com) at December 02, 2014 10:56 AM

November 30, 2014

Giles Bowkett

Rest In Peace Ezra Zygmuntowicz

Ezra Zygmuntowicz, creator of Merb and co-founder of Engine Yard, has passed away.

Some notes from Hacker News:

"He was funny, patient, and most of all kind."

"We [left Engine Yard for] our own colo but Ezra helped us at every step of the way, long after it was clear we weren't coming back."

From @antirez: "Ezra was the first to start making Redis popular..."

"Ezra was a innovator in the glass pipe world. A world class artist that reinvented lampworking."

"He used to fly little radio controlled helicopters all over our office at Engine Yard."

I had a conference call with Ezra in 2006 as part of a project and was a total fanboy about it. His work on the Yakima Herald, before he founded Engine Yard, was one of the main things that made me start taking Rails seriously back in those days, back before the hype train really even began. One time we shared a car to the airport after a conference, and of course I saw him at a ton of conferences beyond that as well. He was a very cool guy, and he'll be missed.

by Giles Bowkett (noreply@blogger.com) at November 30, 2014 11:39 AM

November 29, 2014

Toolness

On Gaming And Media Narratives

On December 13, 2013, I sent the following email to several of my friends who play videogames:

Hey, if you’re receiving this it’s because you’re on my Steam friends list. I don’t send spam out often but right now I am frustrated with the collective hatred of the internet and this is the only way I can think of fighting back.

Earlier this year, I played a web-based game called Depression Quest. It’s not particularly “fun”, because it’s about depression, but it is very good at building awareness about, and empathy for, a serious mental condition.

The creator happens to be a woman and has been harassed by the internet. The game, while free, is trying to get on Steam and a bunch of internet assholes are down-voting the game because misogyny.

So, if you either like the premise of the game or despise misogyny (or both!), I encourage you to vote for the game on Steam Greenlight using the link below:

http://steamcommunity.com/sharedfiles/filedetails/?id=200770535

That is all. Thanks for reading this, and apologies if this is spam to you.

The above email was the only “mass email” I’ve sent in at least the past two years. I was pretty frustrated at the time.

What’s odd, though, is that I didn’t do a whole lot of research before writing that email: I followed Depression Quest’s author, Zoe Quinn, on Twitter, and saw her complaining about being harassed over the phone, and I saw a post or two from a few game journalism sites about it, which convinced me that a mob of angry misogynists were harassing her.

The truth, as far as I can tell, is extremely murky. There’s a YouTube video from Some Asian Guy (literally his username) who paints an unflattering picture of an extremely manipulative Zoe Quinn taking 2 angry posts from a website for depressed, suicidal male virgins, framing them as harassment, entirely fabricating claims of phone calls and “raids” on her, and then getting her game journalist friends to write about it as a mechanism to garner sympathy and publicity for herself and her game on Steam Greenlight. The whole story is also curiously documented in a series of very large images with colorful text on imgur.

What’s interesting, though, is that this isn’t an isolated incident. Others have taken place over the past year; many in the gaming community seem to constantly be accusing Quinn of using bullying tactics to sabotage competitors’ projects, claiming harassment when none has occurred, and taking advantage of the media’s sympathy towards women in gaming for personal gain.

Rhetorically, it’s difficult to question and investigate Quinn’s claims of harassment because doing so is often interpreted as victim blaming. But if this were mere sexist victim blaming, it would make sense for all women who claim harassment in the video game industry to be constantly doubted, not just Quinn. And yet this doesn’t appear to be the case: for example, 2012’s #1reasonwhy hashtag, which was used to document the rampant sexism women in the game industry face, didn’t receive significant push-back from the gaming community. Indeed, if what is said about Quinn is true, in some ways #1reasonwhy set the stage for Quinn to pull it off, since her claims of harassment fit perfectly with the media narrative the hashtag established.

And that’s what concerns me about all this: the dominant media narrative here is that when gamers meet women, misogyny happens, and that’s it. Every major gaming outlet refused to acknowledge claims against Quinn, and indeed always jumped to her defense, framing the story in the dominant narrative.

This narrative is so pervasive that even my favorite podcast that analyzes journalistic coverage, On The Media, is unable to see through it. Further evidence implicating Quinn in affairs with game journalists was released in August and ignited a movement called GamerGate, but coverage was yet again framed in the dominant narrative: righteous outrage shaming gamers for being misogynistic.

To be clear, GamerGate is about much more than Zoe Quinn; it’s about the gaping chasm that has grown between the gaming community and game journalism over the past few years, which Erik Kain at Forbes does an incredible job at covering. Sadly, though, he’s one of a small handful; On The Media only interviewed Chris Grant, the editor-in-chief of Polygon, who is not part of said handful. In fact, he is a member of an exclusive mailing list that many in GamerGate feel is partly responsible for the widening of the chasm.

Because the causes of the chasm are vast and varied, so too are the people and perspectives that comprise GamerGate. And yes, some of them are people who fit the media narrative perfectly: disgruntled men who dislike women in gaming, don’t like games like Depression Quest and will bully people about it.

However, GamerGate is also comprised of thoughtful, compassionate people. People who are alarmed by the one-sided, uncritical nature of the dominant media narrative and want to see it changed. Parents of autistic children who feel the gaming press unfairly vilifies social awkwardness. Individuals who don’t believe it’s ethical for game journalists to be wined-and-dined by the people who made the games they’re reviewing. People who have been harassed and doxxed by anti-GamerGaters without provocation. People who hate harassment and un-dox victims on both sides.

I have no idea what the truth is. But I’m certain it isn’t as simple as a giant mob of angry misogynists harassing women in gaming, and I wish a few more media outlets than Forbes would start acknowledging that.

by Atul at November 29, 2014 12:44 PM

November 28, 2014

Decyphering Glyph

Public or Private?

If I am creating a new feature in library code, I have two choices with the implementation details: I can make them public - that is, exposed to application code - or I can make them private - that is, for use only within the library.

https://www.flickr.com/photos/skyrim/6518329775/

If I make them public, then the structure of my library is very clear to its clients. Testing is easy, because the public structures may be manipulated and replaced as the tests dictate. Inspection is easy, because all the data is exposed for clients to manipulate. Developers are happy when they can manipulate and test things easily. If I select "public" as the general rule, then developers using my library will be happy, because they'll be able to inspect and test everything quite easily whether I specifically designed in support for that or not.

However, now that they're public, I have to support them in their current form into the foreseeable future. Since I tend to maintain the libraries I work on, and maintenance necessarily means change, a large public API surface means a lot of ongoing changes to exposed functionality, which means a constant stream of deprecation warnings and deprecated feature removals. Without private implementation details, there's no axis on which I can change my software without deprecating older versions. Developers hate keeping up with deprecation warnings or having their applications break when a new version of a library comes out, so if I adopt "public" as the general rule, developers will be unhappy.

https://www.flickr.com/photos/lisasaunders18/14061397865

If I make them private, then the structure of my library is a lot easier to understand by developers, because the API surface is much smaller, and exposes only the minimum necessary to accomplish their tasks. Because the implementation details are private, when I maintain the library, I can add functionality "for free" and make internal changes without requiring any additional work from developers. Developers like it when you don't waste their time with trivia and make it easier to get to what they want right away, and they love getting stuff "for free", so if I adopt "private" as the general rule, developers will be happy.

However, now that they're private, there's potentially no way to access that functionality for unforeseen use-cases, and testing and inspection may be difficult unless the functionality in question was designed with an above-average level of care. Since most functionality is, by definition, designed with an average level of care, that means that there will inevitably be gaps in these meta-level tools until the functionality has already been in use for a while, which means that developers will need to report bugs and wait for new releases. Developers don't like waiting for the next release cycle to get access to functionality that they need to get work done right now, so if I adopt "private" as the general rule, developers will be unhappy.

Hmm.

by Glyph at November 28, 2014 11:25 PM

November 26, 2014

Lambda the Ultimate

John C Reynolds Doctoral Dissertation Award nominations for 2014

Presented annually to the author of the outstanding doctoral dissertation in the area of Programming Languages. The award includes a prize of $1,000. The winner can choose to receive the award at ICFP, OOPSLA, POPL, or PLDI.

I guess it is fairly obvious why professors should nominate their students (the deadline is January 4th, 2015). Newly minted PhDs should, for similar reasons, make sure their professors are reminded of these reasons. I can tell you that the competition is going to be tough this year; but hey, you didn't go into programming language theory thinking it was going to be easy, did you?

November 26, 2014 10:05 PM

November 22, 2014

Lambda the Ultimate

Zélus : A Synchronous Language with ODEs

Zélus : A Synchronous Language with ODEs
Timothy Bourke, Marc Pouzet
2013

Zélus is a new programming language for modeling systems that mix discrete logical time and continuous time behaviors. From a user's perspective, its main originality is to extend an existing Lustre-like synchronous language with Ordinary Differential Equations (ODEs). The extension is conservative: any synchronous program expressed as data-flow equations and hierarchical automata can be composed arbitrarily with ODEs in the same source code.

A dedicated type system and causality analysis ensure that all discrete changes are aligned with zero-crossing events so that no side effects or discontinuities occur during integration. Programs are statically scheduled and translated into sequential code that, by construction, runs in bounded time and space. Compilation is effected by source-to-source translation into a small synchronous subset which is processed by a standard synchronous compiler architecture. The resultant code is paired with an off-the-shelf numeric solver.

We show that it is possible to build a modeler for explicit hybrid systems à la Simulink/Stateflow on top of an existing synchronous language, using it both as a semantic basis and as a target for code generation.

Synchronous programming languages (à la Lucid Synchrone) are language designs for reactive systems with discrete time. Zélus extends them gracefully to hybrid discrete/continuous systems, to interact with the physical world, or simulate it -- while preserving their strong semantic qualities.

The paper is short (6 pages) and centered around examples rather than the theory -- I enjoyed it. Not being familiar with the domain, I was unsure what the "zero-crossings" mentioned in the introduction were, but there is a good explanation further down in the paper:

The standard way to detect events in a numeric solver is via zero-crossings where a solver monitors expressions for changes in sign and then, if they are detected, searches for a more precise instant of crossing.

The Zélus website has a 'publications' page with more advanced material, and an 'examples' page with case studies.

November 22, 2014 06:31 PM

November 19, 2014

Giles Bowkett

Text Animators

Another quick animation in After Effects, with original music by me.

by Giles Bowkett (noreply@blogger.com) at November 19, 2014 12:13 PM

November 15, 2014

ZZ85

Resizing, Moving, Snapping Windows with JS & CSS

Imagine you have a widget written in CSS: how would you add some code to give it the ability to resize itself? The behaviour is so ingrained in our present window managers and GUIs that it's quite easily taken for granted. It's quite possible that there are plugins or frameworks which do this, but the challenge I gave myself was to do it in vanilla javascript, and to handle the resizing without adding more divs to the DOM. (I think adding additional divs to use as draggable bars is pretty common.)

Past work
Which reminds me: I wanted similar behaviour for ThreeInspector, and while hacking on the idea, I went with the approach of using the CSS3 resize property for the widget. The unfortunate thing was that min-width and min-height were broken for a really long time in WebKit (the bug was filed long ago, in 2011, and I'm not entirely sure what its status is now). Having been bitten by that bug, I become hesitant every time I think of the CSS3 resize approach.

JS Resizing
So, for my own challenge, I started with a single purple div and added a bit of JS.

Done within 100 lines of code, this turned out not to be difficult. The trick is adding a mousemove handler to document (not document.body, which fails in Firefox) and calculating when the mouse is within a margin of the div's edge. Another reason to always add handlers to document instead of a target div is that you need mouse events even when the cursor moves out of the defined boundary. This is useful for dragging and resizing behaviours; in resizing especially, you don't want to waste time hunting bugs because the events and the div's resizing are out of sync.

For the first time, I also made extensive use of the document's event.clientX and event.clientY together with div.getBoundingClientRect(). That combination gets me almost everything I need for handling positions, sizes and events, although getBoundingClientRect() is possibly not as performant as reading offsets.
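
The edge hit-test itself can be tiny. Here's a minimal sketch of the idea (made-up names and margin value, not the actual demo code):

var MARGIN = 8; // pixels from an edge that count as a resize handle (assumed value)
var widget = document.querySelector('.widget'); // the purple div (assumed selector)

function hitTest(div, e) {
  var b = div.getBoundingClientRect();
  var x = e.clientX, y = e.clientY;
  return {
    left:   Math.abs(x - b.left)   < MARGIN,
    right:  Math.abs(x - b.right)  < MARGIN,
    top:    Math.abs(y - b.top)    < MARGIN,
    bottom: Math.abs(y - b.bottom) < MARGIN
  };
}

// Listen on document (not document.body) so Firefox behaves and events keep
// firing even when the cursor leaves the widget mid-gesture.
document.addEventListener('mousemove', function (e) {
  var hit = hitTest(widget, e);
  widget.style.cursor = (hit.left || hit.right) ? 'ew-resize'
                      : (hit.top || hit.bottom) ? 'ns-resize'
                      : 'default'; // corner cursors omitted for brevity
});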

What’s nice about using JS vs a pure CSS3-resize is that you get to decide which sides of the div you wish to allow resizing. I went for the 4 sides and 4 corners, and the fun just started, so next I started implementing moving.

Moving
Handling basic moving/dragging just needs a few more lines of code. Pseudocode: on mousedown, check that the cursor isn't on an edge (those are reserved for resizing), and store where the cursor and the bounds of the box are. On mousemove, update the box's position.
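
In code, that works out to roughly this (a sketch assuming the widget is position: fixed; a real version would also run the edge hit-test on mousedown and bail if the cursor is on an edge):

var widget = document.querySelector('.widget'); // assumed selector, as before
var drag = null; // { startX, startY, origLeft, origTop } while a drag is active

widget.addEventListener('mousedown', function (e) {
  var b = widget.getBoundingClientRect();
  drag = { startX: e.clientX, startY: e.clientY, origLeft: b.left, origTop: b.top };
});

document.addEventListener('mousemove', function (e) {
  if (!drag) return;
  widget.style.left = (drag.origLeft + e.clientX - drag.startX) + 'px';
  widget.style.top  = (drag.origTop  + e.clientY - drag.startY) + 'px';
});

document.addEventListener('mouseup', function () { drag = null; });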

Still simple, so let’s try the next challenge of snapping the box to the edges.

Snapping
Despite the bad things Mac might say to PC, one thing Windows has done well since Windows 7 is its Snap feature. On my Mac I use Spectacle as a replacement for Windows' window-docking management. I took inspiration from this feature and implemented it with JS and CSS.

One sweet detail in Snap is the way a translucent window shows where the window would dock or snap into place before you release your mouse. So in my implementation, I used an additional, slightly transparent div sitting one z-index lower than the div I'm dragging. The CSS transition property was used for a more organic experience.
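
A sketch of that preview div (assumed colors and z-indexes; the real code is in the CodePen below):

var preview = document.createElement('div');
preview.style.position = 'fixed';
preview.style.background = 'rgba(100, 160, 255, 0.35)'; // slight transparency
preview.style.zIndex = 99; // one below the dragged widget, assumed to sit at 100
preview.style.transition = 'left 0.15s, top 0.15s, width 0.15s, height 0.15s';
preview.style.display = 'none';
document.body.appendChild(preview);

// edge is 'left', 'right', 'top' or 'bottom'; each maps to half the viewport.
function showSnapPreview(edge) {
  var w = window.innerWidth, h = window.innerHeight;
  var r = {
    left:   { x: 0,     y: 0,     w: w / 2, h: h     },
    right:  { x: w / 2, y: 0,     w: w / 2, h: h     },
    top:    { x: 0,     y: 0,     w: w,     h: h / 2 },
    bottom: { x: 0,     y: h / 2, w: w,     h: h / 2 }
  }[edge];
  preview.style.left = r.x + 'px';
  preview.style.top = r.y + 'px';
  preview.style.width = r.w + 'px';
  preview.style.height = r.h + 'px';
  preview.style.display = 'block';
}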

There’s slight deviations to the actual Aero’s experience that Windows users may notice. In Windows, dragging a window to the top snaps the window full screen, while dragging the window to the bottom of the screen has no effect. In my implementation, the window can be docked to the upper half or lower half, or the fullscreen if the window get dragged further beyond the edge of the screen. In Windows, a vertical half is only possible with the keyboard shortcut.

Another difference is that Windows snaps when the cursor touches the edge of the screen; my implementation snaps when the div's edge touches the browser window's edge. I thought this might be better, because users typically make smaller movements for non-operating-system gestures. One last difference is that Windows sends tiny ripples out from the point where the cursor touches the screen. Ripples are nice (I've noticed they're used frequently in Material Design), but I'll leave them as an exercise for another time.

As afterthoughts, I added touch support and limited mousemove updates to requestAnimationFrame. Here's the demo; feel free to try it and check out the code on CodePen.

See the Pen Resize, Drag, Snap by zz85 (@zz85) on CodePen.
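
The requestAnimationFrame throttling mentioned above usually looks something like this (a sketch; updateWidget is a hypothetical stand-in for the real resize/move/snap handler):

var lastEvent = null;
var scheduled = false;

function updateWidget(e) { /* the real resize/move/snap work happens here */ }

document.addEventListener('mousemove', function (e) {
  lastEvent = e; // keep only the most recent event
  if (scheduled) return;
  scheduled = true;
  requestAnimationFrame(function () {
    scheduled = false;
    updateWidget(lastEvent); // runs at most once per frame (~60fps)
  });
});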

by Zz85 at November 15, 2014 01:47 AM

November 05, 2014

Giles Bowkett

GlideRoom: Hangouts Without The Hangups

Many years ago, a friend of mine took a picture of me with a dildo stuck to my face.



The worst thing about this picture is that I didn't have it on my hard drive. I found it via Google Images. But the best part is it took me at least 3 minutes to find it, so, by modern standards, it's pretty obscure. Or at least it was, until I put it here on my blog again.

(Actually, the worst thing about this image is that I just found out it's now apparently being used to advertise porn sites, without my knowledge, consent, or participation.)

Anyway, back in the day, this picture went on Myspace, because of course it did. And eventually the friend who took this picture became a kindergarten teacher, while I became a ridiculously overrated blogger. That's not just an opinion, it's a matter of fact, because Google over-emphasizes the importance of programmer content, relative to literally any other kind of content, when it computes its search rankings. And so, through the "magic" of Google, the first search result for my former photographer's name - and she was by this point a kindergarten teacher - was this picture on my Myspace page.

She emailed me like, "Hi! It's been a while. Can you take that picture down?"



And of course, the answer was no, because I hadn't used Myspace in years, and I didn't have any idea what my password was, and I didn't have the same email address any more, and I didn't even have the computer I had back when Myspace existed. Except it turned out that Myspace was still existing for some reason, and was maybe causing some headaches for my friend as well. I have to tell you, if you're worried that you might have accidentally fucked up your friend's career in a serious way, all because you thought it would be funny to strap a dildo to your face, it doesn't feel awesome.

(And by the way, I'm pretty sure she's a great teacher. You shouldn't have to worry that some silly thing you did as a young adult, or in your late teens, would still haunt you five to fifteen years later, but that's the Internet we built by accident.)

So I went hunting on Myspace for how to take a picture down for an account you forgot you had, and Myspace was like, "Dude, no problem! Just tell us where you lived when you had that account, and what your email address was, and what made-up bullshit answers you gave us for our security questions, since nobody in their right minds would ever provide accurate answers to those questions if they understood anything at all about the Internet!"



So that didn't go so well, either. I didn't know the answers to any of those questions. I didn't have the email address any more, and I had no idea what my old physical address was. I would have a hard time figuring out what my current address is. Probably, if I needed to know that, I might be able to find it in Gmail. That's certainly where I would turn first, because Google has eaten my ability to remember things and left me a semi-brainless husk, as most of you know, because it's done the same thing to you, and your friends, and your family.

Speak of the devil - around this time, Google started pressuring everybody in the fucking universe to sign up for Google Plus, Larry Page's desperate bid to turn Google into Facebook, because who on earth would ever be content to be one of the richest people in the history of creation, if Valleywag stopped paying attention to you for five whole minutes?

My reaction when Google's constantly like, "Hey Giles, you should join Google Plus!"



Since then, my photographer/teacher friend fortunately figured out a different way to get the image off Myspace, and I made it a rule to avoid Google Plus. Having had such a negative experience with Myspace, I took the position that any social network you join creates presence debt, like the technical debt incurred by legacy code - the nasty, counterproductive residue of a previous identity. So I was like, fuck Google Plus. I lasted for years without joining that horrible thing, but I finally capitulated this summer. I joined a company called Panda Strike, and a lot of us work remote (myself included), so we periodically gather via Google Hangouts to chat and convene as a group.

But just because I had consented to use Hangouts, that didn't mean I was going down without a fight.

When I "joined" Google Plus, I first opened up an Incognito window in Chrome. Then I made up a fake person with fake biographical attributes and joined as that person. Thereafter, whenever I saw a Google Hangouts link in IRC or email, I would first open up an Incognito window, then log into Google Plus "in disguise," and then copy/paste the Hangouts URL into the Incognito window's location textfield, and then - and only then - enter the actual Hangout.

This is, of course, too much fucking work. But at least it's work I've created for myself. Plenty of people who are willing to go along with Google's bullying approach to selling Google Plus still get nothing but trouble when they try to use Hangouts.









Protip: don't even tolerate this bullshit.

Imagine how amazing it would be if all you needed to join a live, ongoing video chat was a URL. No username, no password, no second-rate social network you've been strong-armed into joining (or pretending to join). Just a link. You click it, you're in the chat room, you're done.

Panda Strike has built this site. It's called GlideRoom, and it's Google Hangouts without the hangups, or the hassle, or indeed the shiny, happy dystopia.



Clicking "Get A Room" takes you to a chat room, whose URL is a unique hash. All you do to invite people to your chat room is send them the URL. You don't need to authorize them, authenticate them, invite them to a social network which has no other appealing features (and plenty of unappealing ones), or jump through any other ridiculous hoops.



We built this, of course, to scratch our own itch. We built this because URLs are an incredibly valuable form of user interface. And yes, we built it because Google Plus is so utterly bloody awful that we truly expect its absence to be a big plus for our product.

So check out GlideRoom, and tweet at me or the team to let us know how you like it.

by Giles Bowkett (noreply@blogger.com) at November 05, 2014 03:39 PM

November 03, 2014

Greg Linden

Quick links

What has caught my attention recently:
  • Netflix says the value of its recommendations algorithms is $500M/year ([1])

  • Details on the internals of LinkedIn's recommender system ([1])

  • Fantastic list of some hard and interesting big data problems at Facebook ([1] [2])

  • Google Glass may target "'superhero vision', like seeing in the dark, or magnifying subtle motion or changes" ([1] [2])

  • A claim that Amazon's cloud revenue is $4.7B this year, supposedly 30x Microsoft's ($156M) and 70x Google's ($66M) ([1])

  • "We have a 10 petabyte data warehouse on S3" ([1])

  • Google's Eric Schmidt says, "Our biggest search competitor is Amazon" ([1])

  • Apple was and still is almost entirely an iPhone company ([1])

  • Tablet sales are projected to be flat now, and the growth boom for tablets appears to be done ([1])

  • But, it's interesting that specialized, expensive, and often poorly done custom hardware is getting replaced with a cheap touchscreen tablet ([1])

  • So far, it doesn't look like Windows 10 is going to fix what was wrong with Windows 8 ([1])

  • What? "Microsoft loves Linux" ([1] [2])

  • Delivery startups are back: "Silicon Valley wants to save you from ever having to leave your couch. Will it work this time around?" ([1])

  • Despite the difficulty older adults have with tiny mobile keyboards, older adults and seniors don't use voice search much ([1])

  • Speculation that hardware to enable gesture control on mobile phones will be widespread on new phones next year ([1])

  • A claim that "solar will soon reach price parity with conventional electricity in well over half the nation: 36 states" ([1])

  • "HP’s Multi Jet Fusion printer can crank out objects 10 times faster than any machine that’s on the market today ... 3D print heads that can operate 10,000 nozzles at once, while tracking designs to a five-micron precision." ([1] [2])

  • Is biology about to be transformed by the use of many drones to gather lots of data? ([1] [2])

  • More evidence that some of the best innovations come from combining ideas from two very separate fields ([1])

  • "Every success in AI redefines it. But we haven't just been redefining what we mean by AI - we've been redefining what it means to be human [and intelligent]." ([1])

  • "China is merely regaining a title that it has held for much of recorded history" ([1])

  • Funny Dilbert comic on multitasking and checking e-mail too often ([1])

  • The Onion: "This already vanishing glimmer of pleasure is exactly what we've come to expect from Apple" ([1])

  • Great SMBC comic: "The humans aren't doing what the math says. The humans must be broken." ([1])

by Greg Linden (noreply@blogger.com) at November 03, 2014 07:52 PM

October 25, 2014

Greg Linden

At what point is an over-the-air TV antenna too long to be legal?

You can get over-the-air HDTV signals using an antenna. The antenna gets a better, stronger signal with less interference if it has a direct line of sight to the broadcast towers and is as near to them as possible. So, you might want an antenna that is up high or even some distance away to get the best signal.

But if you try to do this, you immediately run into a question: at what point does that antenna become too long to be legal, or at what point is its signal transmitted in a way that is no longer legal?

Let's say I put an antenna behind my TV hooked up with a wire. That's obviously legal and what many people currently do.

Let's say I put an antenna outside on top of a tree or my garage and run a wire inside. Still seems obviously legal.

Let's say I put an antenna on top of my roof. Still clearly fine.

Let's say I put it on my neighbor's roof and run a wire to my TV. Still ok?

Let's say I put the antenna on my neighbor's roof, but have the antenna connect to my WiFi network and transmit the signal using my local area network instead of using a direct wired cable connection. Still ok?

Let's say I put the antenna on my neighbor's roof, but have the antenna connect to my neighbor's WiFi network and transmit the signal over their WiFi, over the internet, then to my WiFi, instead of using a direct wired cable connection. Still ok?

Let's say I put my antenna on my neighbor's roof, but my neighbor won't do this for free. I have to pay a small amount of rent to my neighbor for the space on his roof used by my antenna. I also have the antenna connect to my neighbor's WiFi network and transmit its signal over their WiFi, over the internet, then to my WiFi, instead of using a direct wired cable connection. Still ok?

Let's say, like before, I put my antenna on my neighbor's roof, pay the neighbor rent for the space on his roof, use the internet to transmit the antenna's signal. But, this time, I buy the antenna from my neighbor at the beginning (and, like before, I own it now). Is that okay?

Let's say I put my antenna on my neighbor's roof, pay the neighbor rent for the space on his roof, use the internet to transmit the antenna's signal, but now I rent or lease the antenna from my neighbor. Still ok? If this is not ok, which part is not ok? Is it suddenly ok if I replace the internet connection with a direct microwave relay or hardwired connection?

Let's say I do all of the last one, but use a neighbor's roof three houses away. Still ok?

Let's say I do all of the last one, but use a roof on a building five blocks away. Still ok?

Let's say I rent an antenna on top of a skyscraper in downtown Seattle and have the signal sent to me over the internet. Not ok?

The Supreme Court recently ruled Aereo is illegal. Aereo put small antennas in a building and rented them to people. The only thing they did beyond the last scenario above is time-shifting: they would not necessarily send the signal from the antenna immediately, but instead store it and only transmit it on demand.

You might think it's the time-shifting that's the problem, but that didn't seem to be what the Supreme Court said. Rather, they said the intent of the 1976 amendments to US copyright law prohibits community antennas (one antenna that sends its signal to multiple homes), labelling those a "public performance". They said Aereo's system was similar in function to a community antenna, despite actually having multiple antennas, and violated the intent of the 1976 law.

So, the question is, where is the line? At what point does my antenna become too distant, transmit using the wrong methods, or involve so many payments to third parties that it becomes illegal? Can it not be longer than X meters? Not transmit its signal in particular ways? Not require rent for the equipment or the space on which the antenna sits? Not store the signal at the antenna and transmit it only on demand? What is the line?

I think this question is interesting for two reasons. First, as an individual, I would love to have a personal-use over-the-air HDTV antenna that gets a much better reception than the obstructed and inefficient placement behind my TV, but I don't know at what point it becomes illegal for me to place an antenna far away from the TV. Second, I suspect many others would like a better signal from their HDTV antenna too, and I'd love to see a startup (or any group) that helped people set up these antennas, but it is very unclear what it might be legal for a startup to do.

Thoughts?

by Greg Linden (noreply@blogger.com) at October 25, 2014 08:40 AM

October 24, 2014

Greg Linden

Why can't I buy a solar panel somewhere else in the US and get a credit for the electricity from it?

Seattle City Light has a clever project where, instead of installing solar panels on your house where they might be obscured by trees or buildings, you can buy into a solar panel installation on top of a building in a more efficient location and get a credit for the electricity generated on your electric bill.

Why stop there? Why can't I buy a solar panel in a very different location and get the electricity from it?

Phoenix, Arizona has about twice the solar energy efficiency of Seattle. Why can't I buy a solar panel and enjoy the electricity credit from that solar panel when it is installed in a nice sunny spot in the Southwest?

This doesn't require shipping the actual electricity to your home. Instead, you fund an installation of solar panels on top of a building in an area of the US with high solar energy efficiency, then get a credit for that electricity on your monthly electricity bill.

I suppose, at some boring financing level, this starts to resemble a corporate bond, with an initial payment yielding a stream of payments over time, but people wouldn't see it that way. The attraction would be installing solar panels and getting a credit on your energy bill without installing solar panels on your own home. Perhaps the firm arranging the installations and working out the deals with local utilities would be treating the entire thing as the equivalent of marketing bonds to people who like solar energy, but the attraction to people is the visceral appeal of a near-$0 electricity bill they see every month from solar panels they feel like they own and installed.

Even with the overhead pulled out by the company selling this and arranging deals with local utilities so it all appears on your local electricity bill, the credit on your bill should still be much higher than you could possibly get by installing panels on your own home with all its obstructions and cloudy weather. Solar generation in an ideal location in the US can easily produce twice as much power as what is available locally, on your rooftop.

So, why hasn't someone done this? Why can't I buy solar panels and have them installed not on my own home, but in some much better spot?

by Greg Linden (noreply@blogger.com) at October 24, 2014 07:48 AM

October 16, 2014

Blue Sky on Mars

Inertia

Traffic jams.

They're hilarious, really. Humans tucked neatly single-file in their giant aluminum boxes, waiting patiently for three lights to alternate colors long enough to transcend into the heaven that is the next block closer to their destination.

Ever hop in a cab or a bus and get stuck in a traffic jam? How many times have you decided fuck it, I'm just going to get out and walk — even if the walk is going to take you longer than the wait in traffic? And yet you do it anyway... because you're going places, dammit. You’re important. Your hair looks fantastic today. You’re not the type to tepidly stand still in one place.

Inertia is a powerful concept. When you're moving, you're moving. When you're stuck, well, you're stuck. It requires more energy to get the ball rolling.

Shipping is contagious

Startups have inertia. If all your coworkers are building great things, you feel a compulsion to push forward so you can be proud about shipping your piece of the puzzle, too. This is why when things are good at a company, they tend to be great. Sure, you’re facing some problems, but it feels like you’re all in this river together and you’re flying downstream at a pace that can’t ever stop.

When things are bad at a company, it feels horrific. You’re swimming against the current instead of with it. Even trivial stumbling blocks can loom as insurmountable obstacles when you don’t have that flow going.

It’s because of inertia: it’s hard to get that ball rolling again if you’re stopped.

Inertia is gravitational

A huge part of this is pace. There’s a lot to be said about moving fast and breaking nothing, specifically around testing hypotheses and evaluating the market, but the most important part of the philosophy is that you’re continually making visible progress.

As usual, if it's worth saying, @rands has probably already said it:

I think of boredom as a clock. Every second that someone on my team is bored, a second passes on this clock. After some aggregated amount of seconds that varies for every person, they look at the time, throw up their arms, and quit.

— Michael Lopp, Bored People Quit

Good people gravitate towards this progress. It keeps the gig interesting, since everything’s constantly evolving and you can see the effects of that. There’s certainly a company in which your people find that inertia... whether it ends up being your company or not is the real question.

Swinging for the fences

As much as constant incremental improvement is important, so is taking the big risks. To borrow a baseball analogy: walks will best improve your team’s chances of winning, but homers will keep the fans coming back for more. You need a balance of both.

People just don’t get excited about the small stuff. I mean, they’re good, but they don’t rally your base. It doesn’t feel like you’re adding inertia… it just feels like you’re maintaining.

Good people don’t stay for maintenance.

Taking on new risk keeps things fresh. It lets your people experiment with new approaches, and with different and complex problems. You can even fail, and that's great: you can learn from that failure and try again. Taking on big risk is just a way to prime the pump to get the current flowing again.

Bored people quit, unremarkable companies maintain, and good companies keep their inertia.

October 16, 2014 12:00 AM

October 14, 2014

Giles Bowkett

How Much Of "Software Engineering" Is Engineering?

When you build some new piece of technology, you're (arguably) doing engineering. But once you release it into the big wide world, its path of adoption is organic, and sometimes full of surprises.

Quoting Kevin Kelly's simultaneously awesome and awful book What Technology Wants, which I reviewed a couple days ago:

Thomas Edison believed his phonograph would be used primarily to record the last-minute bequests of the dying. The radio was funded by early backers who believed it would be the ideal device for delivering sermons to rural farmers. Viagra was clinically tested as a drug for heart disease. The internet was invented as a disaster-proof communications backup...technologies don't know what they want to be when they grow up.

When a new technology migrates from its intended use case, and thrives instead on an unintended use case, you have something like the runaway successes of invasive species.

In programming, whether you say "best tool for the job" or advocate your favorite One True Language™, you have an astounding number of different languages and frameworks available to build any given application, and their distribution is not uniform. Some solutions spread like wildfire, while others occupy smaller niches within smaller ecosystems.

In this way, evaluating the merits of different tools is a bit like being an exobiologist on a strange planet made of code. Why did the Ruby strain of Smalltalk proliferate, while the IBM strain died out? Oh, because the Ruby strain could thrive in the Unix ecosystem, while the IBM strain was isolated and confined within a much smaller habitat.

However, sometimes understanding technology is much more a combination of archaeology and linguistics.

Go into your shell and type man 7 re_format.

DESCRIPTION
Regular expressions (``REs''), as defined in IEEE Std 1003.2 (``POSIX.2''), come in two forms: modern REs (roughly those of egrep(1); 1003.2 calls these ``extended'' REs) and obsolete REs (roughly those of ed(1); 1003.2 ``basic'' REs). Obsolete REs mostly exist for backward compatibility in some old programs; they will be discussed at the end.


This man page, found on every OS X machine, every modern Linux server, and probably every iOS or Android device, describes the "modern" regular expressions format, standardized in 1988 and first introduced in 1979. "Modern" regular expressions are not modern at all. Similarly, "obsolete" regular expressions are not obsolete, either; staggering numbers of people use them every day in the context of the find and grep commands, for instance.

To truly use regular expressions well, you should understand this; understand how these regular expressions formats evolved into sed and awk; understand how Perl was developed to replace sed and awk but instead became a very popular web programming language in the late 1990s; and further understand that because nearly every programming language creator acquired Perl experience during that time, nearly every genuinely modern regular expressions format today is based on the format from Perl 5.

Human languages change over time, adapting to new usages and stylings with comparative grace. Computer languages can only change through formal processes, making their specifics oddly immortal (and that includes their specific mistakes). But the evolution of regular expressions formats looks a great deal like the evolution which starts with Latin and ends with languages like Italian, Romanian, and Spanish - if you have the patience to dig up the evidence.

So far, I have software engineering including the following surprising skills:
  • Exobiology
  • Archaeology
  • Linguistics
There's more. There's so much more. For example, you need to extract so much information from the social graph - who uses what technologies, and what tribes a language's "community" breaks down into - that it would be easy to add anthropology to the list. You can find great insights on this in presentations from Francis Hwang and Sarah Mei.

by Giles Bowkett (noreply@blogger.com) at October 14, 2014 03:00 PM

October 12, 2014

Giles Bowkett

Kevin Kelly "What Technology Wants"

Kevin Kelly's What Technology Wants advocates the idea that technology is an adjunct to evolution, and an extension of it, so much so that you can consider it a kingdom of life, in the sense that biologists use the term. Mr. Kelly draws fascinating parallels between convergent evolution and multiple discovery, and brings a ton of very interesting background material to support his argument. However, I don't believe he understands all the background material, and I almost feel as if he's persuading me despite his argument, rather than persuading me by making his argument.

So I recommend this book, but with a hefty stack of caveats. Mr. Kelly veers back and forth between revolutionary truths and "not even wrong" status so rapidly and constantly that you might as well consider him to be a kind of oscillator, producing some sort of waveform defined by his trajectory between these two extremes. The tone of this oscillator is messianic, prophetic, frequently delusional, but also frequently right. The insights are brilliant but the logic is often terrible. It's a combination which can make your head spin.

The author seems either to consider substantiating his arguments beneath him, or perhaps is simply not familiar with the idea of substantiating an argument in the first place. There are plenty of places where the entire argument hinges on things like "somebody says XYZ, and it might be true," with no investigation of what it might mean instead if the person in question were mistaken. This is a book which will show you a graph with a line that wobbles so much it looks like a sine wave, and literally refer to that wobbling line as an "unwavering" trend.

He also refers to "the optimism of our age," in a book written in 2010, two years after the start of the worst economic crisis since the Great Depression. The big weakness in my oscillator metaphor, earlier, is that it is an enormous understatement to call the author tone-deaf.



Then again, perhaps he means the last fifty years, or the last hundred, or the last five hundred. He doesn't really clarify which age he's referring to, or in what sense it's optimistic. Or maybe when he says "our age," the implied "us" is not "humanity" or "Americans," but "Californians who work in technology." Mr. Kelly's very much part of the California tech world. He helped found Wired, and I actually pitched him on writing a brief bit of commentary in 1995, which Wired published, and that was easily the coolest thing that happened to me in 1995.

Maybe because of that, I'm enjoying this book despite its flaws. It makes a terrific backdrop to Charles Stross's Accelerando. It's full of amazing stuff which is arguably true, very important if true, and certainly worth thinking about, either way. I loved Out Of Control, a book Mr. Kelly wrote twenty years ago about a similar topic, although of course I'm now wondering whether I was less discerning in those days, or if Mr. Kelly's writing went downhill. Take it with a grain of salt, but What Technology Wants is still worth reading.

Returning again to the oscillator metaphor, if a person's writing about big ideas, but they oscillate between revolutionary truths and "not even wrong" status whenever they get down to the nitty-gritty details, then the big ideas they describe probably overlap the truth about half the time. The question is which half of this book ultimately turns out to be correct, and it's a very interesting question.

by Giles Bowkett (noreply@blogger.com) at October 12, 2014 08:04 PM

Shell Scripting: Also Essential For Animators

I'm taking classes in the motion graphics and animation software Adobe After Effects. It needs a cache, and I've put its cache on an external hard drive, to avoid wasting laptop drive space. But I sometimes forget to plug that hard drive in, with the very annoying result that After Effects "helpfully" informs me that it's using a new cache location. I then immediately quit After Effects, plug in the hard drive, re-launch the software, and re-supply the correct cache location in the application's preferences.

Obviously, the solution was to remove After Effects from the OS X Dock, which is a crime against user experience anyway, and replace the dock's launcher icon with a shell script. The shell script only launches After Effects if the relevant hard drive is present and accounted for.

("Vanniman Time Machine" is the name of the hard drive, because reasons.)

by Giles Bowkett (noreply@blogger.com) at October 12, 2014 02:10 PM

October 07, 2014

Decyphering Glyph

Thank You Lennart

I (along with about 6 million other people, according to the little statistics widget alongside it) just saw this rather heartbreaking post from Lennart Poettering.

I have not had much occasion to interact with Lennart personally, and (like many people) I have experienced bugs in the software he has written. I have been frustrated by those bugs. I may not have always been charitable in my descriptions of his software. I have, however, always assumed good faith on his part and been happy that he continues making valuable contributions to the free software ecosystem. I haven’t felt the need to make that clear in the past because I thought it was understood.

Apparently, not only is it not understood, there is active hostility directed against his participation. There are constant, aggressive, bad-faith attempts to get him to stop working on the important problems he is working on.

So, Lennart,

Thank you for your work on GNOME, for working on the problem of getting free software into the hands of normal people.

Thank you for furthering the cause of free software by creating PulseAudio, so that we can at least attempt to allow users to play sound on their Linux computers from multiple applications simultaneously without writing tedious configuration files.

Thank you for your work on SystemD, attempting to bring modern system-startup and service-management practices to the widest possible free software audience.

Thank you, Lennart, for putting up with all these vile personal attacks while you have done all of these things. I know you could have walked away; I’m sure that at times, you wanted to. Thank you for staying anyway and continuing to do the good work that you’ve done.

While the abuse is what prompted me to write this, I should emphasize that my appreciation is real. As a long-time user of Linux both on the desktop and in the cloud, I know that my life has been made materially better by Lennart’s work.

This shouldn’t be read as an endorsement of any specific technical position that Mr. Poettering holds. The point is that it doesn’t have to be: this isn’t about whether he’s right or not, it’s about whether we can have the discussion about whether he’s right in a calm, civil, technical manner. In fact I don’t agree with all of his technical choices, but I’m not going to opine about that here, because he’s putting in the work and I’m not, and he’s fighting many battles for software freedom (most of them against our so-called “allies”) that I haven’t been involved in.

The fact that he felt the need to write an article on the hideous state of the free software community is as sad as it is predictable. As a guest on a podcast recently, I praised the Linux community’s technical achievements while critiquing its poisonous culture. Now I wonder if “critiquing” is strong enough; I wonder if I should have given any praise at all. We should all condemn this kind of bilious ad-hominem persecution.

Today I am saying “thank you” to Lennart because the toxicity in our communities is not a vague, impersonal force that we can discuss academically. It is directed at specific individuals, in an attempt to curtail their participation. We need to show those targeted people, regardless of how high-profile they are, or whether they’re paid for their work or not, that they are not alone, that they have our gratitude. It is bullying, pure and simple, and we should not allow it to stand.

Software is made out of feelings. If we intend to have any more free software, we need to respect and honor those feelings, and, frankly speaking, stop trying to make the people who give us that software feel like shit.

by Glyph at October 07, 2014 08:32 AM

October 05, 2014

Giles Bowkett

Backstory For An Anime Series

Many think there is only one Kanye. They are mistaken. There is a Kanye East. There are Kanyes North and South. On the day which the prophets have spoken of, the world will be ready, and the lost Kanyes of legend will return. A great evil will threaten the realm, and the four Kanyes will merge as one to form a Kanye Voltron, and fight fiercely and with great valor for the future of all humanity.

by Giles Bowkett (noreply@blogger.com) at October 05, 2014 12:23 PM

October 03, 2014

Giles Bowkett

One Way To Understand Programming Languages

I'm learning to play the drums, and I got a good DVD from Amazon. It starts off with a rant about drum technique.

The instructor mentions the old rule of thumb that you're best to avoid conversations about religion and politics, and says that he thinks drum technique should be added to the list. He says that during the DVD, he'll tell you that certain moves are the wrong moves to make, but that any time he says that, it really means that the given move is the wrong move to make in the context of the technique he's teaching.

He then goes on to give credit to drummers who play using techniques that are different from his, and to say that it's your job as a drummer to take every technique with a grain of salt and disavow the whole idea of regarding any particular move as wrong. Yet it's also your job as a student of any particular technique to interpret that technique strictly and exactly, if you want to learn it well enough to use it. So when you're a drummer, the word "wrong" should be meaningless, yet when you're a student, it should be very important.

Programming has this tension also. If you're a good programmer, you have to be capable of understanding both One True Way fanaticism and "right tool for the job" indifference. And you have to be able to use any particular right tool for a job in that particular tool's One True Way (or choose wisely between the options it offers you).

by Giles Bowkett (noreply@blogger.com) at October 03, 2014 10:12 AM

Blue Sky on Mars

Move Fast and Break Nothing

I gave the closing keynote at the Future of Web Apps in London in October 2014. This is that talk.

The slides and the full video of the talk are directly below. If you're interested in a text accompaniment, read on for the high-level overview.

moving fast and breaking things

Let's start with the classic Facebook quote, Move fast and break things. Facebook used that for years: it's a philosophy of trying out new ideas quickly so you can see if they survive in the marketplace. If they do, refine them; if they don't, throw them away without blowing a lot of time on development.

Breaking existing functionality is acceptable... it's a sign that you're pushing the boundaries. Product comes first.

Facebook was known for this motto, but in early 2014 they changed it to Move fast with stability, among other variations on the theme. They caught a lot of flak from the tech industry for this: something something "they're running away from their true hacker roots" something something. I think that's horseshit. Companies need to change and evolve. The challenges Facebook's facing today aren't the same challenges they were facing ten years ago. A company that's not changing is probably as innovative as tepid bathwater.

Around the time I started thinking about this talk, my friend sent me an IM:

Do you know why kittens and puppies are so cute?

It's so we don't fucking eat them.

Maybe it was the wine I was drinking or the mass quantity of Cheetos® Puffs™ I was consuming, but what she said both amused me and made me think about designing unintended effects inside of a company. A bit of an oxymoron, perhaps, but I think the best way to get things done in a company isn't to bash them over your employees' heads every few hours, but to instead build an environment that helps foster those effects. Kittens don't wear signs that explicitly exclaim "DON'T EAT ME PLS", but perhaps their cuteness helps lead us toward being kitten-carnivorous-averse. Likewise, telling your employees "DON'T BREAK SHIT" might not be the only approach to take.

I work at GitHub, so I'm not privy to what the culture is like at Facebook, but I can take a pretty obvious guess as to the external manifestations of their new motto: it means they break fewer APIs on their platform. But the motto is certainly more inward-facing than outward-facing. What type of culture does that make for? Can we still move quickly? Are there parts of the product we can still break? Are there things we absolutely can't break? Can we build product in a safer manner?

This talk explores those questions. Specifically I break my talk into three parts: code, internal process in your development team and company, and the talk, discussion, and communication surrounding your process.

code

I think move fast and break things is fine for many features. But the first step is identifying what you cannot break. These are things like billing code (as much as I'd like to, I probably shouldn't accidentally charge you a million dollars and then email you later with an "oops, sorry!"), upgrades (hardware or software upgrades can always be really dicey to perform), and data migrations (it's usually much harder to roll back data changes).

For the last two years we've been upgrading GitHub's permissions code to be faster, safer, cleaner, and generally better. It's a scary process, though. This is an absolute, 100% can't-ever-break use case. The private repository you pay us for can't suddenly be flipped open to the entire internet because of a bug in our deployed code. A 0.02% failure rate isn't an option; 0% failures needs to be mandatory.

But we like to move fast. We love to deploy new code incrementally hundreds of times a day. And there's good reason for that: it's safer overall. Incremental deploys are easier to understand and fix than one gigantic deploy once a year. But it lends itself to those small bugs, which, in this permissions case, is unacceptable.

So tests are good to have. This is unsurprising to say in this day and age; everyone generally understands now that testing and continuous integration are absolutely critical to software development. But that's not what's at stake here. You can have the best, most comprehensive test suite in the world, but tests are still different from production.

There are a lot of reasons for this. One is data: you may have flipped some bit (accidentally or intentionally) for some tables for two weeks back in December of 2010, and you've all but forgotten about that today. Or your cognitive model of the system may be idealized. We noticed that while doing our permissions overhaul. We'd have a nice, neat table of all the permissions of users, organizations, teams, public and private repositories, and forks, but we'd notice that the neat table would fall down on very arcane edge cases once we looked at production data.

And that's the rub: you need your tests to pass, of course, but you also need to verify that you don't change production behavior. Think of it as another test suite: for better or worse, the behavior deployed now is the state of the system from your users' perspective. You can then either fix the behavior or update your tests; just make sure you don't break the user experience.

Parallel code paths

One of the approaches we've taken is through the use of parallel code paths.

What happens is this: a request will come in as usual and run the existing (old) code. At the same time (or just right after it executes), we'll also run the new code that we think will be better/faster/harder/stronger (pick one). Once all that's done, return whatever the existing (old) code returns. So, from the user's perspective, nothing has changed. They don't see the effects of the new code at all.

There are some caveats, of course. In this case, we're typically performing read-only operations. If we're doing writes, it takes a bit more smarts: either write your code so it can run both branches safely, or roll back the effects of the new code, or make the new code a no-op that otherwise goes to a different place entirely. Twitter, for example, has a very service-oriented architecture, so when they spin up a new service they redirect traffic and dual-write to the new service so they can measure performance and accuracy, catch bugs, and then throw away the redundant data until they're ready to switch over all traffic for real.

We wrote a Ruby library named Science to help us out with this. You can check it out and run it yourself in the github/dat-science repository. The general idea would be to run it like this:

  science "my-cool-new-change" do |e|
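    # control: the existing code path; its return value is what gets used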
    e.control   { user.existing_slow_method }
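    # candidate: the new code, run and measured, but its result is discarded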
    e.candidate { user.new_code_we_think_is_great }
  end

It's just like when you Did Science™ in the lab back in school growing up: you have a control, which is your existing code, and a candidate, which is the new code you want to introduce. The science block makes sure both are run appropriately. The real power happens with what you can do after the code runs, though.

We use Graphite literally all over the company. If you haven't seen Coda Hale's Metrics, Metrics Everywhere talk, do yourself a favor and give it a watch. Graphing behavior of your application gives you a ton of insight into your entire system.

Science (and its sister library, github/dat-analysis) can generate a graph of the number of times the code was run (the top blue bar to the left) and compare it to the number of mismatches between the control and the candidate (in red, on the bottom). In this case you see a downward trend: the developer saw that their initial deploy might have missed a couple use cases, and over subsequent deploys and fixes the mismatches decreased to near-zero, meaning that the new code is matching production's behavior in almost all cases.

What's more, we can analyze performance, too. We can look at the average duration of the two code blocks and confirm whether the new code we're running is faster, but we can also break down requests by percentile. In the slide to the right, we're looking at the 75th and 99th percentiles, i.e., the slowest requests. In this particular case, our candidate code is actually quite a bit slower than the control: perhaps this is acceptable given the base case, or maybe this should set off huge red sirens that the code's not ready to deploy to everyone yet... it depends on the code.

All of this gives you evidence to prove the safety of your code before you deploy it to your entire userbase. Sometimes we'll run these experiments for weeks or months as we whittle down all the — sometimes tricky — edge cases. All the while, we can deploy quickly and iteratively with a pace we've grown accustomed to, even on dicey code. It's a really nice balance of speed and safety.

build into your existing process

Something else I've been thinking a lot about lately is how your approach to building product is structured.

Typically, process is added to a company vertically. For example: say your team's been having some problems with code quality. Too many bugs have been slipping into production. What a bummer. One way to address that is to add more layers of process. Maybe you want your lead developers to review every line of code before it gets merged. Maybe you want to add a layer of human testing before deploying to production. Maybe you want a code style audit to give you some assurance of new code maintainability.

These are all fine approaches, in some sense. It's not problematic to want to achieve the goal of clean code; far from it, in fact. But I think this vertical layering of process is really what can get aggravating or just straight-up confusing if you have to deal with it day in, day out.

I think there's something to be said for scaling the breadth of your process. It's an important distinction. By limiting the number of layers of process, it becomes simpler to explain and conceptually understand (particularly for new employees). "Just check continuous integration" is easier to remember than "push your code, ping the lead developers, ping the human testing team, and kick off a code standards audit".

We've been doing more of this lateral process scaling at GitHub informally, but I think there's more to it than even we initially noticed. Since continuous integration is so critical for us, people have been adding more tests that aren't necessarily tests in the classic sense of the word. Instead of "will this code break the application", our tests more and more measure "will this code be maintainable and more resilient to errors in the future".

For example, here are a few tests we've added that don't necessarily have user-facing impact but are considered breaking the build if they go red:

  • Removing a CSS declaration without removing the associated class attribute in the HTML
  • ...and vice versa: removing a class attribute without cleaning up the CSS
  • Adding an <img> tag that's not on our CDN, for performance, security, and scaling reasons
  • Invalid SCSS or CoffeeScript (we use SCSS-Lint and CoffeeLint)

None of these are world-ending problems: an unspecified HTML class doesn't really hurt you or your users. But from a code quality and maintainability perspective, yeah, it's a big deal in the long term. Instead of having everyone focus on spotting these during code review, why not just shove it in CI and let computers handle the hard stuff? It frees our coworkers up from gruntwork and lets them focus on what really matters.
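
As a rough illustration, the CSS/markup cross-checks above can be a short Node script that CI runs, turning the build red on a nonzero exit. Here's a hypothetical sketch (not GitHub's actual tooling) for the "CSS class with no remaining markup" direction:

var fs = require('fs');

// Usage: node check-css.js styles.css page1.html page2.html ...
var args = process.argv.slice(2);
var css = fs.readFileSync(args[0], 'utf8');
var html = args.slice(1).map(function (f) {
  return fs.readFileSync(f, 'utf8');
}).join('\n');

// Naive extraction of class selectors like ".foo" (a real linter would parse).
var declared = (css.match(/\.[a-zA-Z][\w-]*/g) || []).map(function (s) {
  return s.slice(1);
});

var unused = declared.filter(function (cls) {
  return html.indexOf(cls) === -1;
});

if (unused.length > 0) {
  console.error('CSS classes with no matching markup: ' + unused.join(', '));
  process.exit(1); // non-zero exit turns the build red
}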

Incidentally, some of these are super helpful during refactoring. Yesterday I shipped some new dashboards on github.com, so today I removed the thousands of lines of code behind the old dashboards. I could remove the code in bulk, see which tests failed, and then go in and pretty carelessly remove the now-unused CSS. That made the job much, much quicker because I didn't have to worry about the gruntwork.

And that's what you want. You want your coworkers to think less about bullshit that doesn't matter and spend more consideration on things that do. Think about consolidating your process. Instead of layers, ask if you can merge them into one meeting. Or one code review. Or automate the need away entirely. The layers of process are what get you.

process

In bigger organizations, the number of people that need to be involved in a product launch grows dramatically. From the designers and developers who actually build it, to the marketing team that tells people about it, to the ops team that scales it, to the lawyers that legalize it™... there are a lot of chefs in the kitchen. If you're releasing anything that a lot of people will see, there's a lot you need to do.

Coordinating that can be tricky.

Apple's an interesting company to take a look at. Over time, a few interesting tidbits have spilled out of Cupertino. The Apple New Product Process (ANPP) is, at its core, a simple checklist. It goes into great detail about the process of releasing a product, from beginning to end, from who's responsible to who needs to be looped into the process before it goes live.

The ANPP tends to be at the very high-level of the company (think Tim Cook-level of things), but this type of approach sinks deeper down into individual small teams. Even before a team starts working on something, they might make a checklist to prep for it: do they have appropriate access to development and staging servers, do they have the correct people on the team, and so on. And even though they manage these processes in custom-built software, what it is at its core is simple: it's a checklist. When you're done with something, you check it off the list. It's easy to collaborate on, and it's easy to understand.

Think back to every single sci-fi movie you've ever watched. When they're about to launch the rocket into space, there's a lot of "Flip MAIN SERIAL BUS A to on". And then the dutiful response: "Roger, MAIN SERIAL BUS A is on". You don't see many responses of, "uh, Houston, I think I'm more happier when MAIN SERIAL BUS A is at like, 43% because SECONDARY SERIAL BUS B is kind of a jerk sometimes and I don't trust goddamn serial busses what the hell is a serial bus anyway yo Houston hook a brother up with a serial limo instead".

And there's a reason for that: checklists remove ambiguity. All the debate happens before something gets added to the checklist... not at the end. That means when you're about to launch your product — or go into space — you should be worrying less about the implementation and rely upon the process more instead. Launches are stressful enough as-is.

ownership

Something else that becomes increasingly important as your organization grows is that of code ownership. If the goal is to have clean, relatively bug-free code, then your process should help foster an environment of responsibility and ownership of your piece of the codebase.

If you break it, you should fix it.

At GitHub, we try to make that connection pretty explicit. If you're deploying your code and your code generates a lot of errors, our open source chatroom robot — Hubot — will notice and will mention you in chat with a friendly message of "hey, you were the last person to deploy and something is breaking; can you take a look at it?". This reiterates the idea that you're responsible for the code that you put out. That's good because, as it turns out, the people who wrote the code are typically the people who can most easily fix it. Beyond that, forcing your coworkers to always clean up your mess is going to really suck over time (for them).

There are plenty of ways to keep people responsible. Google, for example, uses OWNERS files in Chrome. This is a way of making the ownership of a file, or of entire directories of the project, explicit. The format of an actual OWNERS file can be really simple — shout out to simple systems like flat files and checklists — but OWNERS files serve two really great purposes (there's a sample file right after this list):

  • They enforce quality. If you're an owner of an area of code, any new contribution to your code requires your signoff. Since you are in a somewhat elevated position of responsibility, it's on you to fight to not allow potentially buggy code into your area.
  • It encourages mentorship. Particularly in open source projects like Chromium, it can be intimidating to get started with your first contribution. OWNERS files make it explicit about who you might want to ask about your code or even about the high-level discussion before you get started.
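
For reference, a Chromium-style OWNERS file really is just plain text, one directive per line. A hypothetical one (all names invented) might read:

    # Changes under this directory need a signoff from one of these people.
    alice@example.com
    bob@example.com
    # Hand review of a specific slice of files to someone else.
    per-file *_unittest.cc=carol@example.com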

You can tie your own systems together more tightly, too. In Haystack, our internal error-tracking service at GitHub, we have pretty deep hooks into our code itself. In a controller, for example, we might have code that looks like this:

class BranchesController
  # Tags everything in this controller as owned by the @github/git team.
  areas_of_responsibility :git
end

This marks this particular file as the responsibility of the @github/git team, the team that handles Git-related data and infrastructure. So when we see a graph in Haystack like the one to the right, showing that something is breaking in a particular page, we can quickly tell which teams are responsible for the breaking code, since Haystack knows to look at the file with the error and bubble up these areas of responsibility. From there, it's a one-click operation to open an issue on our bug tracker, mentioning the responsible team so they can fix it.

Look: bugs do happen. Even if you move fast and break nothing, well, you're still bound to break something at some point. Having a culture of responsibility around your code helps you address those bugs quickly, in an organized manner.

talking & communicating

I've given a lot of talks and written a lot of blog posts about software development and teams and organizations. Probably one way to sum them all up is: more communication. I think companies function better by being more transparent, and if you build your environment correctly, you can end up with better code, a good remote work culture, and happier employees.

But god, more communication means a ton of shit. Emails. Notifications. Text messages. IMs. Videos. Meetings.

If everyone is involved with everything... does anyone really have enough time to actually do anything?

Having more communication is good. Improving your communication is even better.

be mindful of your coworker's time

It's easy to feel like you deserve the time of your coworkers. In some sense, you do: you're trying to improve some aspect of the company, and if your coworker can help out with that, then the whole company is better off. But every interaction comes with a cost: your coworker's time. This matters dramatically more in creative and problem-solving fields like computer science, where being in the zone can mean the difference between a really productive day and a day where every line of code is a struggle to write. Getting pulled out of the zone is jarring, and getting back into that mindset can take a frustratingly long time.

This goes double for companies with remote workers. It's easy to notify a coworker through chat or text message or IM that you need their help with something. Maybe a server went down, or you're having a tough time with a bug in code you're unfamiliar with. If you're a global company, timezones become a factor, too. I was talking to a coworker about this, and after enough days of being on-call, she came up with a hilarious idea that I love:

  1. You find you need help with something.
  2. You page someone on your team for help.
  3. They're sleeping. Or out with their kids. Or any level of "enjoying their life".
  4. They check their message and, in doing so, their phone takes a selfie of them and pastes it into the chat room.
  5. You suddenly feel worse.

We haven't implemented this yet (and who knows if we will), but it's a pretty rad thought experiment. If you could see the impact of your actions on your coworker's life, would it change your behavior? Can you build something into your process or your tools that might help with this? It's interesting to think about.

I think this is part of a greater discussion on empathy. And empathy comes in part from seeing real pain. This is why many suggest that developers handle some support threads. A dedicated support team is great, but until you're actually faced with problems up-close, it's easy to avoid these pain points.

institutional teaching

We have a responsibility to be teachers — that this should be a central part of [our] jobs... it's just logic that some day we won't be here.

— Ed Catmull, co-founder of Pixar, Creativity, Inc.

I really like this quote for a couple reasons. For one, this can be meant literally: we're all going to fucking die. Bummer, right? Them's the breaks, kid.

But it also means that people move around. Sometimes they quit or get fired; sometimes they just move to a different part of the company. The common denominator is that our presence is merely temporary, which means we're obligated, in part, to spread the knowledge we have across the company. This is great for your bottom line, of course, but it's also just a good thing to do. Teaching people around you how to progress in their careers and being a resource for their growth is a very privileged position to be in, and one we shouldn't take lightly.

So how do we share knowledge without being lame? I'm not going to lie: part of the reason I'm working now is because I don't have to go to school anymore. Classes are so dulllllllllll. So the last thing I want to have to deal with is some formal, stuffy process that ultimately doesn't even serve as a good foundation to teach anyone anything.

Something that's grown out of how we work is a concept we call "ChatOps". @jnewland has a really great talk about the nitty-gritty of ChatOps at GitHub, but in short: it's a way of handling devops and systems-level work at your company in front of others so that problems can be solved and improved upon collaboratively.

If something breaks at a tech company, a traditional process might look something like this:

  1. Something breaks.
  2. Whoever's on-call gets paged.
  3. They SSH into... something.
  4. They fix it... somehow.

There's not a lot of transparency. Even if you discuss it after the fact, the process that gets relayed to you might not be comprehensive enough for you to really understand what's going on. Instead, GitHub and other companies have a flow more like this:

  1. Something breaks.
  2. Whoever's on-call gets paged.
  3. They gather information in a chat room.
  4. They fix it through shared tooling, in that chat room, in front of (or leveraging the help of) other employees.

This brings us a number of benefits. For one, you can learn by osmosis. I'm not on the Ops team, but occasionally I'll stick my head into their chat room and see how they tackle a problem, even if it's a problem I won't face in my normal day-to-day work. I gain context around how they approach problems.

What's more, if others are watching how you tackle a problem in real-time, the process lends itself to improvement. How many times have you sat down to pair program with someone and been blown away by the one or two keystrokes they use to do something that takes you three minutes? If you code in a vacuum, you don't have the opportunity to make those quick improvements. If I'm watching you run the same three commands to run diagnostics on a server, it's easier as a bystander to think, hey, why don't we wrap those commands up in one command that does it all for us? Those insights can be incredibly valuable and, in time, lead to massive, massive productivity and quality improvements.

This requires some work on tooling, of course. We use Hubot to act as a sort of shared collection of shell scripts that allow us to quickly address problems in our infrastructure. Some of those scripts include hooks into PagerDuty to trigger pages, code that leverages APIs into AWS or our own datacenter, and, of course, scripts that let us file issues and work with our repositories and teams on GitHub. We have hundreds or thousands of commands now, all gradually built up and hardened over time. The result is an incredible amount of tooling around automating our response to potential downtime.
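
A chat command is ultimately just a shared, hardened shell script with a trigger in front of it. A toy dispatcher makes the shape obvious (Ruby again; the trigger names are invented, and the real versions would call out to PagerDuty, AWS, or your own datacenter APIs):

    # Map chat triggers to the diagnostics they wrap, so everyone in the
    # room runs the same vetted commands instead of ad-hoc SSH sessions.
    COMMANDS = {
      # `uptime` runs locally in this sketch; the real thing would reach
      # out to the named host instead.
      "uptime" => ->(host) { `uptime` },
      "oncall" => ->(_arg) { "the on-call schedule lives in PagerDuty" },
    }

    def handle(line)
      trigger, arg = line.split(" ", 2)
      handler = COMMANDS[trigger]
      handler ? handler.call(arg) : "unknown command: #{trigger}"
    end

    puts handle("uptime web-01")  # every invocation is visible to the room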

This isn't limited to just working in chatrooms, though. Recently the wifi broke at our office on a particular floor. We took the same approach to fix it, except in a GitHub issue instead of real-time chat. Our engineers working on the problem pasted in screenshots of the status of our routers and heatmaps of the dead zones stemming from the downtime, and eventually traced cables through switches until we found a faulty one, taking photos each step of the way and adding them to the issue so we had a paper trail of which cabinets and components were affected. It's amazing how much you can learn from such a non-invasive process. If I'm not interested in the nitty-gritty details, I can skip the thread. But if I do want to learn about it... it's all right there, waiting for me.

feedback

The Blue Angels are a United States Navy demonstration flight squadron. They fly in air shows around the world, maneuvering in their tight six-fighter formations 18 inches apart from one another. The talent they exhibit is mind-boggling.

Earlier this year I stumbled on a documentary on their squadron from years back. There's a specific 45-second section in it that really made me think. It describes the process the Blue Angels go through in order to give each other feedback. (That small section of the video is embedded to the left. Watch the whole thing if you want, though, who am I to stop you.)

So first of all, they're obviously and patently completely nuts. The idea that you can give brutally honest feedback without worrying about interpersonal relationships is, well, not really relevant to the real world. They're superhuman. It's not every day you can tell your boss that she fucked up and skip out of the meeting humming your favorite tune without fear of repercussions. So they're nuts. But it does make sense: a mistake at their speeds and altitude is almost certainly fatal. A mistake for us, while writing software that helps identify which of your friends liked that status update about squirrels, is decidedly less fatal.

But it's still a really interesting ideal to look up to. They do feedback and retrospectives that take twice as long as the actual event itself. And they take their job of giving and receiving feedback seriously. How can we translate this idealized version of feedback into our admittedly less stressful gigs?

Part of this is just getting better at receiving feedback. I'm fucking horrible at this. You do have to have a bit of a thicker skin. And it sucks! No one wants to spend a few hours — or days, or months — working on something, only to inevitably get the drive-by commenter who finds a flaw in it (either real or imagined). It's sometimes difficult to not take that feedback personally. That you failed. That you're not good enough to get it perfect on the first or second tries. It's funny how quickly we forget how iterative software development is, and that computers are basically stacking the deck against us to never get anything correct on the first try.

Taking that into account, though, it becomes clear how important giving good feedback is. And sometimes this is just as hard to do. I mean, someone just pushed bad code! To your project! To your code! I mean, you're even in the damn OWNERS file! The only option is to rain fire and brimstone and hate and loathing on this poor sod, the depths of which will cause him to surely think twice about committing such horrible code and, if you're lucky, he'll quit programming altogether and become a dairy farmer instead. Fuck him!

Of course, this isn't a good approach to take. Almost without fail, if someone's changing code, they have a reason for it. It may not be a good reason, or the implementation might be suspect, but it's a reason nonetheless. And being cognizant of that can go a long way towards pointing them in the right direction. How you piece your words together is terribly important.

And this is something you should at least think about, if not explicitly codify across your whole development team. What do you consider good feedback? How can you promote understanding and positive approaches in your criticism of the code? How can you help the submitter learn and grow from this scenario? Unfortunately these questions don't get asked enough, which creates a self-perpetuating cycle of cynicism and aggressive discussion.

That sucks. Do better.

move fast with a degree of caution

Building software is hard. Because yeah, moving quickly means you can do more for people. It means you can see what works and what doesn't work. It means your company sees real progress quicker.

But sometimes it's just as important to know what not to break. And when you work on those things, you change how you operate so that you can try to get the best of both worlds.

Also, try flying fighter jets sometimes. It can't be that hard.

October 03, 2014 12:00 AM

October 01, 2014

Greg Linden

Quick links

What caught my attention lately:
  • 12% of Harvard is enrolled in CS 50: "In pretty much every area of study, computational methods and computational thinking are going to be important to the future" ([1])

  • Excellent "What If?" nicely shows the value of back-of-the-envelope calculations and re-thinking what exactly it is you want to do ([1])

  • The US has almost no competition, only local monopolies, for high speed internet ([1] [2])

  • You can't take two large, dysfunctional, underperforming organizations, mash them together, and somehow make diamonds. When you take two big messes and put them together, you just get a bigger mess. ([1])

  • "Yahoo was started nearly 20 years ago as a directory of websites ... At the end of 2014, we will retire the Yahoo Directory." ([1] [2])

  • Investors think that Yahoo is essentially worthless ([1])

  • "At a moment when excitement about the future of robotics seems to have reached an all-time high (just ask Google and Amazon), Microsoft has given up on robots" ([1])

  • "Firing a bunch of tremendously smart and creative people seems misguided. But hey—at least they own Minecraft!" ([1])

  • "Macs still work basically the same way they did a decade ago, but iPhones and iPads have an interface that's specifically designed for multi-touch screens" ([1] [2])

  • On the difficulty of doing startups ([1] [2])

  • "Be glad some other sucker is fueling the venture capital fire" ([1])

  • "Just how antiquated the U.S. payments system has become" ([1])

  • Is everyone grabbing money from online donations to charities? Visa's fee for charities is only 1.35%, but the cheapest online payment system for charities charges 2.2%, and most charge much more than that. ([1])

  • "For most people, the risk of data loss is greater than the risk of data theft" ([1])

  • Password recovery "security questions should go away altogether. They're so dangerous that many security experts recommend filling in random gibberish instead of real answers" ([1])

  • Brilliantly done, free, open source, web-based puzzle game with wonderfully dark humor about ubiquitous surveillance ([1])

  • How Udacity does those cool transparent hands in its videos ([1])

  • There's just a bit of interference when you move your hand above the phone, just enough interference to detect gestures without using any additional power or sensors ([1] [2])

  • Small, low power wireless devices powered by very small fluctuations in temperature ([1] [2])

  • Cute intuitive interface for transferring data between PC and mobile ([1] [2])

  • "Federal funding for biomedical research [down 20%] ... forcing some people out of science altogether" ([1])

  • Another fun example of virtual tourism ([1])

  • Ig Nobel Prizes: "Dogs prefer to align themselves to the Earth's north-south magnetic field while urinating and defecating" ([1])

  • Xkcd: "In CS, it can be hard to explain the difference between the easy and the virtually impossible" ([1] [2])

  • Dilbert: "That process sounds like a steaming pile of stupidity that will beat itself to death in a few years" ([1])

  • Dilbert on one way to do job interviews ([1])

  • The Onion: "Startup Very Casual About Dress Code, Benefits" ([1])

  • Hilarious South Park episode, "Go Fund Yourself", makes fun of startups ([1])

by Greg Linden (noreply@blogger.com) at October 01, 2014 07:08 PM

September 22, 2014

Giles Bowkett

A Pair of Quick Animations

Five seconds or less, done in Adobe After Effects.

I made the music for this one. No special effects, just shapes, luma masks, and blending modes.



This one is mostly special effects.


by Giles Bowkett (noreply@blogger.com) at September 22, 2014 10:40 AM

September 20, 2014

Giles Bowkett

Why Scrum Should Basically Just Die In A Fire

Conversations with Panda Strike CEO Dan Yoder inspired this blog post.

Scrum, the Agile methodology allegedly favored by Google and Spotify, is a mess.

Consider story points. If you're not familiar with Scrum, here's how they work: you play a game called "Planning Poker," where somebody calls out a task, and then counts down from three to one. On one, the engineers hold up a card with the number of "story points" which represents the relative cost they estimate for performing the task.

So, for example, a project manager might say "integrating our login system with OpenAuth and Bitcoin," and you might put up the number 20, because it's the maximum allowable value.



Wikipedia describes the goal of this game:

The reason to use Planning Poker is to avoid the influence of the other participants. If a number is spoken, it can sound like a suggestion and influence the other participants' sizing. Planning Poker should force people to think independently and propose their numbers simultaneously. This is accomplished by requiring that all participants show their card at the same time.

I have literally never seen Planning Poker performed in a way which fails to undermine this goal. Literally always, as soon as every engineer has put up a particular number, a different, informal game begins. If it had a name, this informal game would be called something like "the person with the highest status tells everybody else what the number is going to be." If you're lucky, you get a variant called "the person with the highest status on the dev team tells everybody else what the number is going to be," but that's as good as it gets.

Wikipedia gives the alleged structure of this process:
  • Everyone calls their cards simultaneously by turning them over.
  • People with high estimates and low estimates are given a soap box to offer their justification for their estimate and then discussion continues.
  • Repeat the estimation process until a consensus is reached. The developer who was likely to own the deliverable has a large portion of the "consensus vote", although the Moderator can negotiate the consensus.
  • To ensure that discussion is structured; the Moderator or the Project Manager may at any point turn over the egg timer and when it runs out all discussion must cease and another round of poker is played. The structure in the conversation is re-introduced by the soap boxes.
In practice, this "soap box" usually consists of nothing more than questions like "20? Really?". And I've never seen the whole "rinse and repeat" aspect of Planning Poker actually happen; usually, the person with lower status simply agrees to whatever the person with higher status wants the number to be.

In fairness to everybody who's tried this process and seen it fail, how could it not devolve? A nontechnical participant has, at any point, the option to pull out an egg timer and tell technical participants "thirty seconds or shut the fuck up." This is not a process designed to facilitate technical conversation; it's so clearly designed to limit such conversation that it almost seems to assume that any technical conversation is inherently dysfunctional.

It's ironic to see conversation-limiting devices built into Agile development methodologies, when one of the core principles of the Agile Manifesto is the idea that "the most efficient and effective method of conveying information to and within a development team is face-to-face conversation," but I'll get to that a little later on.

For now, I want to point out that Planning Poker isn't the only aspect of Scrum which, in my experience, seems to consistently devolve into something less useful. Another core piece of Scrum is the standup meeting.



You probably know this, but just in case, the idea is that the team for a particular project gathers daily, for a quick, 15-minute meeting. This includes devs, QA, project manager(s), designers, and anyone else who will be working to make the project succeed, or who needs to stay up-to-date with the project's progress. The standup's designed to counter an even older tradition of long, stultifying, mandatory meetings, where a few people talk, and everybody else loses their whole day to no benefit whatsoever. Certainly, if you've got that going on in your company, anything which gets rid of it is an improvement.

However, as with Planning Poker, the 15-minute standup decays very easily. I've twice seen the "15-minute standup" devolve into half-hour or hour-long meetings where everybody stands, except for management.

At one company, a ponderous, overcomplicated web app formed the centerpiece of the company's Scrum implementation. Somebody had to sit to operate this behemoth, and since that was an informal privilege, it usually went to whoever could command it. In other words: management.

At another company, the Scrum decay took a different route. As with the egg timer in Planning Poker, Scrum standups offer an escape clause. In standups, you can defer discussion of involved topics to a "parking lot," which is where an issue lands if it's too complex to fit within the meeting's normal 15-minute parameters (which also include some constraints on what you can discuss, to prevent talkative or unfocused people from over-lengthening the meeting).

At this second company, virtually everything landed in the parking lot, and it became normal for the 15-minute standup to be a 15-minute prelude to a much longer meeting. We'd just set the agenda during the standup, and the parking lot would be the actual meeting. These standups typically took place in a particular person's office. Since arriving at the parking lot meant the standup was over, that person, whose office we were in, would feel OK about sitting down in their own, personal chair. But the office wasn't big enough to bring any new chairs into, so everyone else had to stand. The person whose office we were always in? A manager.

Scrum's standups are designed to counteract an old tradition of overly long, onerous, dull meetings. However, at both these companies, they replaced that ancient tradition with a new tradition of overly long, onerous, dull meetings where management got to sit down, and everybody else had to stand. Scrum's attempt at creating a more egalitarian process backfired, twice, in each case creating something more authoritarian instead.

To be fair to Scrum, it's not intended to work that way, and there's an entire subgenre of "Agile coaching" consultants whose job is to repair broken Scrum implementations at various companies. This is pure opinion, but my guess is that's a very lucrative market, because as far as I can tell, Scrum implementations often break.


I recommend just skimming the first few seconds of this.

Scrum's ready devolution springs from major conceptual flaws.

Scrum's an Agile development methodology, and one of its major goals is sustainable development. However, it works by time-boxing efforts into iterations of a week or two in length, and refers to these iterations as "sprints." Time-boxed iterations are very useful, but there's a fundamental cognitive dissonance between "sprints" and "sustainable development," because there is no such thing as a sustainable sprint.


This man's pace is probably not optimized for sustainability.

Likewise, your overall list of goals, features, and work to accomplish is referred to as the "backlog." This is true even on a greenfield project. On day 1, you have a backlog.

Another core idea of the Agile Manifesto, the allegedly defining document for Agile development methodologies: "working software is the primary measure of progress." Scrum disregards this idea in favor of a measure of progress called "velocity." Basically, velocity is the number of "story points" successfully accomplished divided by the amount of time it took to accomplish them.
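
Spelled out (with invented numbers), the arithmetic that gets treated as a productivity oracle is just this:

    # Velocity: estimates attached to "done" stories, per iteration.
    points_completed = [3, 5, 8, 2]  # off-the-cuff guesses, now load-bearing
    sprints_elapsed  = 1             # one two-week "sprint"
    velocity = points_completed.sum.to_f / sprints_elapsed
    puts velocity                    # => 18.0 "points per sprint"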

As I mentioned at the top of the post, a lot of this thinking comes from conversations with my new boss, Panda Strike CEO Dan Yoder. Dan told me he's literally been in meetings where non-technical management said things like, "well, you got through [some number] story points last week, and you only got through [some smaller number] this week, and coincidentally, I noticed that [some developer's name] left early yesterday, so it looks pretty easy who to blame."

Of course, musing, considering, mulling things over, and coming to realizations all constitute a significant amount of the actual work in programming. It is impossible to track whether these realizations occur in the office or in the shower. Anecdotally, it's usually the shower. Story points, meanwhile, are completely made-up numbers designed to capture off-the-cuff estimates of relative difficulty. Developers are explicitly encouraged to think of story points as non-binding numbers, yet velocity turns those non-binding estimates into a number they can be held accountable for, and which managers often treat as a synonym for productivity. "Agile" software exists to track velocity, as if it were a meaningful metric, and to compare the relative velocity of different teams within the same organization.

This is an actual thing which sober adults do, on purpose, for a living.

"Velocity" is really too stupid to examine in much further detail, because it very obviously disregards this whole notion of "working software as a measure of progress" in favor of completely unreliable numbers based on almost nothing. (I'm not proud to admit that I've been on a team where we spent an entire month to build an only mostly-functional shopping cart, but I suppose it's some consolation that our velocity was acceptable at the time.)

But, just to be clear, one of velocity's many flaws is that different teams are likely to make different off-the-cuff estimates, as are different members of the same team. Because of this, you can only really garner anything approaching meaningful insight from these numbers if you compare the ratio of estimated story points to accomplished story points on a per-team, per-week basis. Or, indeed, a per-individual, per-week one. And even then, you're more likely to learn something about a team's or individual's ability to make ballpark estimates than their actual productivity.

Joel Spolsky has an old but interesting blog post about a per-individual, velocity-like metric based on actually using math like a person who understands it, not a person who regards it as some kind of incomprehensible yet infallible magic. However, if there's anything worth keeping in the Agile Manifesto, it's the idea that working software is the primary measure of progress. Indeed, that's the huge, hilarious irony at the center of this bizarre system of faux accountability: with the exception of a few Heisenbugs, engineering work is already inherently more accountable than almost any other kind of work. If you ask for a feature, your team will either deliver it, or fail to deliver it, and you will know fairly rapidly.

If you're tracking velocity, your best-case scenario will be that management realizes it means nothing, even though they're tracking it anyway, which means spending money and time on it. This useless expense is what Andy Hunt and Dave Thomas termed a broken window in their classic book The Pragmatic Programmer - a sign of careless indifference, which encourages more of the same. That's not what you want to have in your workplace.



Sacrificing "working software as a measure of progress" to meaningless numbers that your MBAs can track for no good reason is a pretty serious flaw in Scrum. It implies that Scrum's loyalty is not to the Agile Manifesto, nor to working software, nor high-quality software, nor even the success of the overall team or organization. Scrum's loyalty, at least as it pertains to this design decision, is to MBAs who want to point at numbers on a chart, whether those numbers mean anything or not.

I've met very nice MBAs, and I hope everyone out there with an MBA gets to have a great life and stay employed. However, building an entire software development methodology around that goal is, in my opinion, a silly mistake.

The only situation I can think of where a methodology like Scrum could have genuine usefulness is on a rescue engagement, where you're called in as a consultant to save a failing project. In a situation like this, you can track velocity on a team basis to show your CEO client that development's speeding up. Meanwhile, you work on the real question, which is who to fire, because that's what nearly every rescue project comes down to.

In other words, in its best-case scenario, Scrum's a dog-and-pony show. But that best-case scenario is rare. In the much more common case, Scrum covers up the inability to recruit (or even recognize) engineering talent, which is currently one of the most valuable things in the world, with a process for managing engineers as if they were cogs in a machine, all of equal value.

And one of the most interesting things about Scrum is that it tries to enhance the accountability of a field of work where both failure and success are obvious to the naked eye - yet I've never encountered any similarly elaborate system of rituals whose major purpose is to enhance the accountability of fields which have actual accountability problems.



Although marketing is becoming a very data-driven field, and although this sea change began long before the Web existed at all - Dan Kennedy's been writing about data-driven marketing since at least the 1980s - it's still a fact that many marketers do totally unaccountable work that depends entirely on public perception, mood, and a variety of other factors that are inherently impossible to measure. The oldest joke in marketing: "only half my advertising works, but I don't know which half."

And you never will.


YouTube ads have tried to sell me a service to erase the criminal record I don't have. They've reminded me to use condoms during the gay sex that I don't have either. They've also tried to get me to buy American trucks and country music, neither of which will ever happen. No disrespect to the gay ex-convicts out there who do like American trucks and country music, assuming for the sake of argument that this demographic even exists; it's just not my style. Similarly, Facebook's "targeted" ads usually come from politicians I dislike, and Google's state-of-the-art, futuristic, probabilistic, "best-of-breed" ads are worse. The only time they try to sell me anything I even remotely want is when I've researched something expensive but decided not to buy it yet. Then the ad follows me around every web site I visit for the next month.


Please buy it. Please. You looked at it once.

Even in 2014, marketing involves an element of randomness, and probably always will, until the end of time.

Anyway, Scrum gives you demeaning rituals to dumb down your work so that people who will never understand it can pretend to understand it. Meanwhile, work which is genuinely difficult to track doesn't have to deal with this shit.

Why?

I don't think highly of Scrum, but the problem here goes deeper. The Agile Manifesto is flawed too. Consider this core principle of Agile development: "business people and developers must work together."

Why are we supposed to think developers are not business people?

If you join (or start) a startup, you may have to do marketing before your company can hire a marketing person. The same is true for accounting, for sales, for human resources, and for just about anything that any reasonable person would call business. You're in a similar situation if you freelance or do consulting. You're definitely in a better position for any of these things if you hire someone who knows what they're doing, of course, but there's a large number of developers who are also business people.

Perhaps more importantly, if you join or start a startup, you can knock the engineering out of the park and still end up flat fucking broke if the marketing people don't do a good job. But you're probably not going to demand that your accountants or your marketing people jump through bizarre, condescending hoops every day. You're just going to trust them to do their jobs.

This is a reasonable way to treat engineers as well.

By the way, despite that little Dilbert strip a few paragraphs above, my job title at Panda Strike is Minister of Propaganda. I'm basically the Director of Marketing, except that to call yourself a Director of Marketing is itself very bad marketing when you want to communicate with developers, who traditionally mistrust marketing for various reasons (many of them quite legitimate). This is the same reason the term "growth hacker" exists, but as a job title, that phrase just reeks of dishonesty. So I went with Minister of Propaganda to acknowledge the vested interest I have in saying things which benefit my company.

However, despite having marketing responsibilities, my first act upon joining Panda Strike was to write code which evaluates code. I tweaked my git analysis scripts to produce detailed profiles of the history of many of the company's projects, both open source and internal products, so that I could get a very specific picture of how development works at Panda Strike, and how our projects have been built, and who built them, and when, and with which technologies, and so on.
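
The scripts themselves aren't shown anywhere, but the heart of this kind of profiling is a thin wrapper around git. A minimal sketch in Ruby (run inside a repository; the '*.js' pathspec is just one example of slicing by technology):

    # `git shortlog -s -n` prints "<count>\t<author>", busiest first.
    log = `git shortlog -s -n --all -- '*.js'`
    log.lines.first(10).each do |line|
      count, author = line.strip.split("\t", 2)
      puts format("%5s commits  %s", count, author)
    end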

As an aside, I first developed this technique on a Rails rescue project. It was the first thing I did on the project, but the CTO, having an arrogant and aloof attitude, had no idea. So on my first day, after I did this work, he introduced me to the rest of the team, telling me their names, but nothing else about them. But I recognized the names from my analysis of the git log. I noticed that the number one JavaScript committer had a cynical and sarcastic expression, that most of the team had three commits or less, and that the number one Ruby committer wasn't anywhere in the building.

This CTO who had told me nothing then said to me, "OK, dazzle me." As you can imagine, I did not dazzle him. I fired him. (Or, more accurately, I and my colleagues persuaded his CEO to fire him.)


Anyway, the whole point of this is simple: there's absolutely no reason to assume that a developer is not a business person. It's a ridiculous assumption, and the world is full of incredibly successful counterexamples.

The Agile Manifesto might also be to blame for the Scrum standup. It states that "the most efficient and effective method of conveying information to and within a development team is face-to-face conversation." In fairness to the manifesto's authors, it was written in 2001, and at that time git log did not yet exist. However, in light of today's toolset for distributed collaboration, it's another completely implausible assertion, and even back in 2001 you had to kind of pretend you'd never heard of Linux if you really wanted it to make sense.

Well-written text very often trumps face-to-face communication. You can refer to well-written text later, instead of relying on your memory. You can't produce well-written text unless you think carefully. Also, technically speaking, you can literally never produce good code in the first place unless you produce well-written text. There are several great presentations from GitHub on the value of asynchronous communication, and they're basically required viewing for anybody who wants to work as a programmer, or with programmers.

In fact, GitHub itself was built without face-to-face communication. Basecamp was built without face-to-face communication as well. I'm not saying these people never met each other, but most of the work was done remote. Industrial designer Marc Newson works remote for Apple, so his work on the Apple Watch may also have happened without face-to-face communication. And face-to-face communication plays a minimal role in the majority of open source projects, which usually outperform commercial projects in terms of software quality.

In addition to defying logic and available evidence, both these Agile Manifesto principles encourage a kind of babysitting mentality. I've never seen Scrum-like frameworks for transmuting the work of designers, marketers, or accountants into cartoonish oversimplifications like story points. People are happy to treat these workers as adults and trust them to do their jobs.

I don't know why this same trust does not prevail in the culture of managing programmers. That's a question for another blog post. I suspect that the reasons are historical, and fundamentally irrelevant, because it really doesn't matter. If you're not doing well at hiring engineers, the answer is not a deeply flawed methodology which collapses under the weight of its own contradictions on a regular basis. The answer is to get better at hiring engineers, and ultimately to get great at it.

I may do a future blog post on this, because it's one of the most valuable skills in the world.



Credit where credit's due: the Agile Manifesto helped usher in a vital paradigm shift, in its day.

by Giles Bowkett (noreply@blogger.com) at September 20, 2014 06:56 PM

Decyphering Glyph

Ungineering

I am not an engineer.

I am a computer programmer. I am a software developer. I am a software author. I am a coder.

I program computers. I develop software. I write software. I code.

I’d prefer that you not refer to me as an engineer, but this is not an essay about how I’m going to heap scorn upon you if you do so. Sometimes, I myself slip and use the word “engineering” to refer to this activity that I perform. Sometimes I use the word “engineer” to refer to myself or my peers. It is, sadly, fairly conventional to refer to us as “engineers”, and avoiding this term in a context where it’s what everyone else uses is a constant challenge.

Nevertheless, I do not “engineer” software. Neither do you, because nobody has ever known enough about the process of creating software to “engineer” it.

According to dictionary.com, “engineering” is:

“the art or science of making practical application of the knowledge of pure sciences, as physics or chemistry, as in the construction of engines, bridges, buildings, mines, ships, and chemical plants.”

When writing software, we typically do not apply “knowledge of pure sciences”. Very little science is germane to the practical creation of software, and the places where it is relevant (firmware for hard disks, for example, or analytics for physical sensors) are highly rarified. The one thing that we might sometimes use called “science”, i.e. computer science, is a subdiscipline of mathematics, and not a science at all. Even computer science, though, is hardly ever brought to bear - if you’re a working programmer, what was the last project where you had to submit formal algorithmic analysis for any component of your system?

Wikipedia has a heaping helping of criticism of the terminology behind software engineering, but rather than focusing on that, let's see where Wikipedia tells us software engineering comes from in the first place:

The discipline of software engineering was created to address poor quality of software, get projects exceeding time and budget under control, and ensure that software is built systematically, rigorously, measurably, on time, on budget, and within specification. Engineering already addresses all these issues, hence the same principles used in engineering can be applied to software.

Most software projects fail; as of 2009, 44% are late, over budget, or out of specification, and an additional 24% are cancelled entirely. Only a third of projects succeed according to those criteria: on time, on budget, and within specification.

What would that look like if another engineering discipline had that sort of hit rate? Consider civil engineering. Would you want to live in a city where almost a quarter of all the buildings were simply abandoned half-constructed, or fell down during construction? Where almost half of the buildings were missing floors, had rents in the millions of dollars, or both?

My point is not that the software industry is awful. It certainly can be, at times, but it’s not nearly as grim as the metaphor of civil engineering might suggest. Consider this: despite the statistics above, is using a computer today really like wandering through a crumbling city where a collapsing building might kill you at any moment? No! The social and economic costs of these “failures” are far lower than most process consultants would have you believe. In fact, the cause of many such “failures” is a clumsy, ham-fisted attempt to apply engineering-style budgetary and schedule constraints to a process that looks nothing whatsoever like engineering. I have to use scare quotes around “failure” because many of these projects classified as failed have actually delivered significant value. If the initial specification for a project is overambitious due to lack of information about the difficulty of the tasks involved, for example – an extremely common problem at the beginning of a software project – that would still be a failure according to the metric of “within specification”, but it’s a problem with the specification and not the software.

Certain missteps notwithstanding, most of the progress in software development process improvement in the last couple of decades has been in acknowledging that it can’t really be planned very far in advance. Software vendors now have to constantly present works in progress to their customers, because the longer they go without doing that, the greater the risk that the software will not meet the somewhat arbitrary goals for being “finished”, and may never be presented to customers at all.

The idea that we should not call ourselves “engineers” is not a new one. It is a minority view, but I’m in good company in that minority. Edsger W. Dijkstra points out that software presents what he calls “radical novelty” - it is too different from all the other types of things that have come before to try to construct it by analogy to those things.

One of the ways in which writing software is different from engineering is the matter of raw materials. Skyscrapers and bridges are made of steel and concrete, but software is made out of feelings. Physical construction projects can be made predictable because the part where creative people are creating the designs - the part of that process most analogous to software - is a small fraction of the time required to create the artifact itself.

Therefore, in order to create software you have to have an “engineering” process that puts its focus primarily upon the psychological issue of making your raw materials - the brains inside the human beings you have acquired for the purpose of software manufacturing - happy, so that they may be efficiently utilized. This is not a common feature of other engineering disciplines.

The process of managing the author’s feelings is a lot more like what an editor does when “constructing” a novel than what a foreperson does when constructing a bridge. In my mind, that is what we should be studying, and modeling, when trying to construct large and complex software systems.

Consequently, not only am I not an engineer, I do not aspire to be an engineer, either. I do not think that it is worthwhile to aspire to the standards of another entirely disparate profession.

This doesn’t mean we shouldn’t measure things, or have quality standards, or try to agree on best practices. We should, by all means, have these things, but we authors of software should construct them in ways that make sense for the specific details of the software development process.

While we are on the subject of things that we are not, I’m also not a maker. I don’t make things. We don’t talk about “building” novels, or “constructing” music, nor should we talk about “building” and “assembling” software. I like software specifically because of all the ways in which it is not like “making” stuff. Making stuff is messy, and hard, and involves making lots of mistakes.

I love how software is ethereal, and mistakes are cheap and reversible, and I don’t have any desire to make it more physical and permanent. When I hear other developers use this language to talk about software, it makes me think that they envy something about physical stuff, and wish that they were doing some kind of construction or factory-design project instead of making an application.

The way we use language affects the way we think. When we use terms like “engineer” and “builder” to describe ourselves as creators, developers, maintainers, and writers of software, we are defining our role by analogy and in reference to other, dissimilar fields.

Right now, I think I prefer the term “developer”, since the verb develop captures both the incremental creation and ongoing maintenance of software, which is so much a part of any long-term work in the field. The only disadvantage of this term seems to be that people occasionally think I do something with apartment buildings, so I am careful to always put the word “software” first.

If you work on software, whichever particular phrasing you prefer, pick one that really calls to mind what software means to you, and don’t get stuck in a tedious metaphor about building bridges or cars or factories or whatever.

To paraphrase a wise man:

I am developer, and so can you.

by Glyph at September 20, 2014 05:00 AM

September 18, 2014

Simplest Thing (Bill Seitz)

Start a Movement! Lead a Tribe! – But Only as a Last Resort | OnTheSpiral

Start a Movement! Lead a Tribe! – But Only as a Last Resort | OnTheSpiral:

Gregory Rader on the danger of jumping into starting a movement/tribe without actually having a solution to spread…

September 18, 2014 02:59 PM

September 17, 2014

Decyphering Glyph

The Most Important Thing You Will Read On This Blog All Year

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

I have two PGP keys.

One, 16F13480, is signed by many people in the open source community.  It is a
4096-bit RSA key.

The other, 0FBC4A07, is superficially worse.  It doesn't have any signatures on
it.  It is only a 3072-bit RSA key.

However, I would prefer that you all use 0FBC4A07.

16F13480 lives encrypted on disk, and occasionally resident in memory on my
personal laptop.  I have had no compromises that I'm aware of, so I'm not
revoking the key - I don't want to lose all the wonderful trust I build up.  In
order to avoid compromising it in the future, I would really prefer to avoid
decrypting it any more often than necessary.

By contrast, aside from backups which I have not yet once had occasion to
access, 0FBC4A07 exists only on an OpenPGP smart card, it requires a PIN, it is
never memory resident on a general purpose computer, and is only plugged in
when I'm actively Doing Important Crypto Stuff.  Its likelyhood of future
compromise is *extremely* low.

If said smart card had supported 4096-bit keys I probably would have just put
the old key on the more secure hardware and called it a day.  Sadly, that is
not the world we live in.

Here's what I'd like you to do, if you wish to interact with me via GnuPG:

    $ gpg --recv-keys 0FBC4A07 16F13480
    gpg: requesting key 0FBC4A07 from hkp server keys.gnupg.net
    gpg: requesting key 16F13480 from hkp server keys.gnupg.net
    gpg: key 0FBC4A07: "Matthew "Glyph" Lefkowitz (OpenPGP Smart Card) <glyph@twistedmatrix.com>" 1 new signature
    gpg: key 16F13480: "Matthew Lefkowitz (Glyph) <glyph@twistedmatrix.com>" not changed
    gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
    gpg: depth: 0  valid:   2  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 2u
    gpg: next trustdb check due at 2015-08-18
    gpg: Total number processed: 2
    gpg:              unchanged: 1
    gpg:         new signatures: 1
    $ gpg --edit-key 16F13480
    gpg (GnuPG/MacGPG2) 2.0.22; Copyright (C) 2013 Free Software Foundation, Inc.
    This is free software: you are free to change and redistribute it.
    There is NO WARRANTY, to the extent permitted by law.


    gpg: checking the trustdb
    gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
    gpg: depth: 0  valid:   2  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 2u
    gpg: next trustdb check due at 2015-08-18
    pub  4096R/16F13480  created: 2012-11-16  expires: 2016-04-12  usage: SC
                         trust: unknown       validity: unknown
    sub  4096R/0F3F064E  created: 2012-11-16  expires: 2016-04-12  usage: E
    [ unknown] (1). Matthew Lefkowitz (Glyph) <glyph@twistedmatrix.com>

    gpg> disable

    gpg> save
    Key not changed so no update needed.
    $

If you're using keybase, "keybase encrypt glyph" should be pointed at the
correct key.

Thanks for reading,

- -glyph
-----BEGIN PGP SIGNATURE-----
Version: GnuPG/MacGPG2 v2.0.22 (Darwin)

iQGcBAEBCgAGBQJUGRfNAAoJEH7CgSUPvEoHwg8L/0MHoG4FLzr1U3Ulu45sX/QO
VDUC4wJp4dpUKW2Yvjyw3LBYtFvsJfqUhM2oBURDPPgVfC5aOz7qevuBndlOYPB+
8dK//lPLZvYMAx2AlTGhz0wQokl0Cdlo+vK5E+Ex5oDJYhaPI9YPsSDbvynb6yhI
DK+EXRBtra7ev4hHDiucLGvqlSQnV+eOSijZfHgm6aBImfMUM7SM3UGFtE8oEJDE
XhTwW93L/c2epZOEFkfSLzQLlIcV5Ll2B6KOQLsdMuvgSkVX3NN+efuLFy4diD0U
HvL8nxxBpM98Jj+0PAucLbw4JwyAwEF6viEwXWiwngFTKeU60kUjUoSMFecMQuEz
IFqR7e9J7OaK9pMLrimwYtfCHVCx5WIXRJShuvcHhRCjwJb1N6rAffGOiwzkY+3w
mvpfEwQoC8F1wu2SOUpgQlvHUxxYbdibNTUvlaO74nyzE9fQZSXbZcGH2Skf9tHi
DjPhd2kLnyOzOy+BAOGcTKJ+ldOpbdmsnpcFTDA/MA==
=k0u9
-----END PGP SIGNATURE-----

by Glyph at September 17, 2014 05:14 AM

September 15, 2014

Decyphering Glyph

The Horizon

Sometimes, sea sickness is caused by a sort of confusion. Your inner ear can feel the motion of the world around you, but if your eyes can't see the outside world, your brain can't reconcile its visual input with its proprioceptive input, and the result is nausea.

This is why it helps to see the horizon. If you can see the horizon, your eyes will talk to your inner ears by way of your brain, they will realize that everything is where it should be, and that everything will be OK.

As a result, you’ll stop feeling sick.

photo credit: https://secure.flickr.com/people/reallyterriblephotographer/

I have a sort of motion sickness too, but it’s not seasickness. Luckily, I do not experience nausea due to moving through space. Instead, I have a sort of temporal motion sickness. I feel ill, and I can’t get anything done, when I can’t see the time horizon.

I think I’m going to need to explain that a bit, since I don’t mean the end of the current fiscal quarter. I realize this doesn’t make sense yet. I hope it will, after a bit of an explanation. Please bear with me.

Time management gurus often advise that it is “good” to break up large, daunting tasks into small, achievable chunks. Similarly, one has to schedule one’s day into dedicated portions of time where one can concentrate on specific tasks. This appears to be common sense. The most common enemy of productivity is procrastination. Procrastination happens when you are working on the wrong thing instead of the right thing. If you consciously and explicitly allocate time to the right thing, then chances are you will work on the right thing. Problem solved, right?

Except, that’s not quite how work, especially creative work, works.

ceci n’est pas un task

I try to be “good”. I try to classify all of my tasks. I put time on my calendar to get them done. I live inside little boxes of time that tell me what I need to do next. Sometimes, it even works. But more often than not, the little box on my calendar feels like a little cage. I am inexplicably, involuntarily, haunted by disruptive visions of the future that happen when that box ends.

Let me give you an example.

Let’s say it’s 9AM Monday morning and I have just arrived at work. I can see that at 2:30PM, I have a brief tax-related appointment. The hypothetical person doing my hypothetical taxes has an office that is a 25 minute hypothetical walk away, so I will need to leave work at 2PM sharp in order to get there on time. The appointment will last only 15 minutes, since I just need to sign some papers, and then I will return to work. With a 25 minute return trip, I should be back in the office well before 4, leaving me plenty of time to deal with any peripheral tasks before I need to leave at 5:30. Aside from an hour break at noon for lunch, I anticipate no other distractions during the day, so I have a solid 3 hour chunk to focus on my current project in the morning, an hour from 1 to 2, and an hour and a half from 4 to 5:30. Not an ideal day, certainly, but I have plenty of time to get work done.

The problem is, as I sit down in front of my nice, clean, empty text editor to sketch out my excellent programming ideas with that 3-hour chunk of time, I will immediately start thinking about how annoyed I am that I’m going to get interrupted in 5 and a half hours. It consumes my thoughts. It annoys me. I unconsciously attempt to soothe myself by checking email and getting to a nice, refreshing inbox zero. Now it’s 9:45. Well, at least my email is done. Time to really get to work. But now I only have 2 hours and 15 minutes, which is not as nice of an uninterrupted chunk of time for a deep coding task. Now I’m even more annoyed. I glare at the empty window on my screen. It glares back. I spend 20 useless minutes doing this, then take a 10-minute coffee break to try to re-set and focus on the problem, and not this silly tax meeting. Why couldn’t they just mail me the documents? Now it’s 10:15, and I still haven’t gotten anything done.

By 10:45, I manage to crank out a couple of lines of code, but the fact that I’m going to be wasting a whole hour with all that walking there and walking back just gnaws at me, and I’m slogging through individual test-cases, mechanically filling docstrings for the new API and for the tests, and not really able to synthesize a coherent, holistic solution to the overall problem I’m working on. Oh well. It feels like progress, albeit slow, and some days you just have to play through the pain. I struggle until 11:30 at which point I notice that since I haven’t been able to really think about the big picture, most of the test cases I’ve written are going to be invalidated by an API change I need to make, so almost all of the morning’s work is useless. Damn it, it’s 2014, I should be able to just fill out the forms online or something, having to physically carry an envelope with paper in it ten blocks is just ridiculous. Maybe I could get my falcon to deliver it for me.

It’s 11:45 now, so I’m not going to get anything useful done before lunch. I listlessly click on stuff on my screen and try to relax by thinking about my totally rad falcon until it’s time to go. As I get up, I glance at my phone and see the reminder for the tax appointment.

Wait a second.

The appointment has today’s date, but the subject says “2013”. This was just some mistaken data-entry in my calendar from last year! I don’t have an appointment today! I have nowhere to be all afternoon.

For pointless anxiety over this fake chore which never even actually happened, a real morning was ruined. Well, a hypothetical real morning; I have never actually needed to interrupt a work day to walk anywhere to sign tax paperwork. But you get the idea.

To a lesser extent, upcoming events later in the week, month, or even year bother me. But the worst is when I know that I have only 45 minutes to get into a task, and I have another task booked right up against it. All this trying to get organized, all this carving out uninterrupted time on my calendar, all this trying to manage my creative energies and marshal them at specific times for specific tasks, annihilates itself when I start thinking about how I am eventually going to have to stop working on the seemingly endless, sprawling problem set before me.

The horizon I need to see is the infinite time available to do all the thinking required to solve whatever problem has been set before me. If I want to write a paragraph of an essay, I need to see enough time to write the whole thing.

Sometimes - maybe even, if I’m lucky, increasingly frequently - I manage to fool myself. I hide my calendar, close my eyes, and imagine an undisturbed millennium in front of my text editor ... during which I may address some nonsense problem with malformed UTF-7 in MIME headers.

... during which I can complete a long and detailed email about process enhancements in open source.

... during which I can write a lengthy blog post about my productivity-related neuroses.

I imagine that I can see all the way to the distant horizon at the end of time, and that there is nothing between me and it except dedicated, meditative concentration.

That is on a good day. On a bad day, trying to hide from this anxiety manifests itself in peculiar and not particularly healthy ways. For one thing, I avoid sleep. One way I can always extend the current block of time allocated to my current activity is by just staying awake a little longer. I know this is basically the wrong way to go about it. I know that it’s bad for me, and that it is almost immediately counterproductive. I know that ... but it’s almost 1AM and I’m still typing. If I weren’t still typing right now, instead of sleeping, this post would never get finished, because I’ve spent far too many evenings looking at the unfinished, incoherent draft of it and saying to myself, “Sure, I’d love to work on it, but I have a dentist’s appointment in six months and that is going to be super distracting; I might as well not get started.”

Much has been written about the deleterious effects of interrupting creative thinking. But what interrupts me isn’t an interruption; what distracts me isn’t even a distraction. The idea of a distraction is distracting; the specter of a future interruption interrupts me.

This is the part of the article where I wow you with my life hack, right? Where I reveal the one weird trick that will make productivity gurus hate you? Where you won’t believe what happens next?

Believe it: the surprise here is that this is not a set-up for some choice productivity wisdom, or a sales pitch for my new book. I have no idea how to solve this problem. The best I can do is that thing I said above about closing my eyes, pretending, and concentrating. Honestly, I have no idea whether anyone else even suffers from this, or if it’s a neurosis unique to me. If any readers would be interested in letting me know about their own experiences, I might update this article to share some ideas; for now, though, it is mostly about sharing my own vulnerability, not about any particular solution.

I can share one lesson, though. The one thing that this peculiar anxiety has taught me is that productivity “rules” are not revealed divine truth. They are ideas, and those ideas have to be evaluated exclusively on the basis of their efficacy, which is to say, on the basis of how much of the stuff you want to get done they actually help you get done.

For now, what I’m trying to do is un-clench the fearful, spasmodic fist in my mind that grips the need to schedule everything into these small boxes and allocate only the time that I “need” to get something specific done.

Maybe the only way I am going to achieve anything of significance is with opaque, 8-hour slabs of time with no more specific goal than “write some words, maybe a blog post, maybe some fiction, who knows” and “do something work-related”. As someone constantly struggling to force my own fickle mind to accomplish any small part of my ridiculously ambitious creative agenda, I find it really hard to relax and let go of anything that might help, anything that might get me a little further a little faster.

Maybe I should be trying to schedule my time into tiny little blocks. Maybe I’m just doing it wrong somehow and need to be harder on myself, madder at myself, and I really can get blood out of this particular stone.

Maybe it doesn’t matter all that much how I schedule my own time, because there’s always some upcoming distraction that I can’t control, and I just need to get better at meditating and somehow putting it out of my mind without really changing what goes on my calendar.

Maybe this is just as productive as I get, and I’ll still be fighting this particular fight with myself when I’m 80.

Regardless, I think it’s time to let go of that fear and try something a little different.

by Glyph at September 15, 2014 08:27 AM