I'm not young enough to know everything
Paul Graham has alluded to the difficulty of doing a startup when you're, well, not young. He puts the upper bound at thirty-eight:
So who should start a startup? Someone who is a good hacker, between about 23 and 38... I don't think many people have the physical stamina much past that age. I used to work till 2:00 or 3:00 AM every night, seven days a week. I don't know if I could do that now. Also, startups are a big risk financially. If you try something that blows up and leaves you broke at 26, big deal; a lot of 26 year olds are broke. By 38 you can't take so many risks-- especially if you have kids.
While what he says is true, lately I've become aware of an even bigger threat to starting a startup.
Experience can be a handicap.
Let's start with a digression. What makes weblog posts popular? Most people say things like "insightful" posts become popular. Or "well-written" posts become popular. Adjectives like that are tautological: if someone likes a piece of writing, they generally will find nice adjectives to apply to it.
One model for popular writing is that it panders to the reader's prejudices. Plain and simple. People like writing that validates them and especially their ideas. I'm no different, and as a result I tend to focus my research on things that I already think I know.
When you're twenty-two this isn't much of a problem because you know you don't know. You're "consciously incompetent." So you're far more likely to find something unfamiliar and try to understand it, to change your way of thinking to match what you learn rather than applying a "bozo filter" to it in advance.
But at forty-two (or three!), it's easy to think you know things. You're at incredible risk of thinking you know things when you've achieved some measure of success, no matter how modest. You become "unconsciously incompetent." You don't know, but you don't know you don't know.
I was personally reminded of this at startup school. As Chris Sacca pointed out:
The glow of screens (from a refreshingly Powerbook-dominated audience) revealed an array of real-time collaborative note-taking, virtually assembling the room's minds in a concurrent recording and discussion of the event.
Sitting in that room, I was hyper-aware that I was no longer in Kansas. It struck me that if I didn't want the next generation of hackers to wipe me out like a Tsunami, I needed to stop paddling, climb on my surfboard, and accept the risk of being dashed on the rocks.
I immediately made a commitment to myself to let go of things I used to think I knew.
Matz, Jonathan Ive, and A Narrow Road to a Far Province
One commitment was to try Ruby on Rails instead of Lisp. Every time I've looked at Ruby, I've thought "nice, but it doesn't do anything I couldn't do in Smalltalk back in 1980."
I've been the most pompous hypocrite. I mean, I often poke my Windows apologist friends and tell them that efficiency and user interface are not measured with check boxes ("mouse? check. icons? check. well, they must do the same thing."). I've extolled the virtues of design, of the interaction between the parts, of the frictionless user experience of a Macintosh.
Do not seek to follow in the footsteps of the wise. Seek what they sought.
Matsuo Basho
OK, Ruby's blocks and classes and closures are the same things that Lisp has given us since 1959 and Smalltalk has given us since 1980. But maybe... Maybe... Maybe Ruby on Rails is easier to use than Lisp in the very same sense that OS X is easier to use than Windows.
As for the Rails thing, it's awfully popular and I have been uneasy about anything so hype-ridden. But how is that different from any of a thousand funny quotes deriding the new new thing? I'm tempted to say that there's a world market for maybe five Rails applications. But maybe there's more to it.
Maybe I should find out.
Thomas Bayes, Joel Spolsky, and Richard Feynman
Joel Spolsky dropped an incredibly provocative anecdote into one of his well-written and insightful posts:
A very senior Microsoft developer who moved to Google told me that Google works and thinks at a higher level of abstraction than Microsoft. "Google uses Bayesian filtering the way Microsoft uses the if statement," he said. That's true. Google also uses full-text-search-of-the-entire-Internet the way Microsoft uses little tables that list what error IDs correspond to which help text. Look at how Google does spell checking: it's not based on dictionaries; it's based on word usage statistics of the entire Internet, which is why Google knows how to correct my name, misspelled, and Microsoft Word doesn't.
If Microsoft doesn't shed this habit of "thinking in if statements" they're only going to fall further behind.
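To make "word usage statistics" concrete, here is a minimal sketch of that style of spell checking, in the spirit of Peter Norvig's toy spelling corrector. The corpus.txt file is an invented stand-in for the entire Internet; there is no dictionary and no hand-written rule, just "pick the most frequently seen word within one edit of what was typed."

```python
from collections import Counter
import re

# Hypothetical corpus file: in Google's case this would be the text of the
# entire Internet; here it is whatever text you happen to have lying around.
corpus = open("corpus.txt").read().lower()
WORD_COUNTS = Counter(re.findall(r"[a-z]+", corpus))

def edits1(word):
    """All strings one deletion, transposition, substitution, or insertion away."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [l + r[1:] for l, r in splits if r]
    transposes = [l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1]
    replaces = [l + c + r[1:] for l, r in splits if r for c in letters]
    inserts = [l + c + r for l, r in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def correct(word):
    """Return the most frequently observed candidate, or the word itself."""
    if word in WORD_COUNTS:          # we've seen it before: leave it alone
        return word
    candidates = [w for w in edits1(word) if w in WORD_COUNTS]
    return max(candidates, key=WORD_COUNTS.get) if candidates else word
```

If "spolsky" shows up in the corpus often enough, "spolski" gets corrected to it; no one ever has to click "add to dictionary."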
I have an entire career built on top of experience building applications that manage ontologies, that are built out of objects and classes and tables and relations and all sorts of things that boil down to if statements. Or at least, they boil down to classifying things in advance rather than building systems that learn over time.
I've known about Bayesian classification for years. And I've always thought of it as a specialized tool. It's incredibly disruptive to think of it as an every-day tool, as a general-purpose tool, as something that can replace the if statement.
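To see what treating it as an every-day tool might look like (my sketch, not Joel's or Google's, and the triage data is invented), compare hard-coding a rule like "if the report mentions a crash, it's high priority" with a naive Bayes classifier that learns the rule from issues someone has already triaged:

```python
import math
from collections import Counter, defaultdict

class NaiveBayes:
    """A tiny naive Bayes text classifier with add-one smoothing."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)   # label -> word frequencies
        self.label_counts = Counter()              # label -> number of examples
        self.vocabulary = set()

    def train(self, text, label):
        words = text.lower().split()
        self.word_counts[label].update(words)
        self.label_counts[label] += 1
        self.vocabulary.update(words)

    def classify(self, text):
        words = text.lower().split()
        total_docs = sum(self.label_counts.values())
        best_label, best_score = None, float("-inf")
        for label in self.label_counts:
            # log P(label) + sum of log P(word | label), with Laplace smoothing
            score = math.log(self.label_counts[label] / total_docs)
            total_words = sum(self.word_counts[label].values())
            for word in words:
                count = self.word_counts[label][word] + 1
                score += math.log(count / (total_words + len(self.vocabulary)))
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Invented training data: reports a human has already triaged.
triage = NaiveBayes()
triage.train("app crashes on startup and loses data", "high")
triage.train("crash when saving corrupts the database", "high")
triage.train("typo in the about dialog", "low")
triage.train("button colour slightly off in preferences", "low")

print(triage.classify("random crash while saving"))   # most likely "high"
```

Nothing in the classifier knows what a crash is. It only knows what the team has historically called high priority, which is exactly the inversion being described: the rule is observed, not written.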
"Your old-fashioned ideas are no damn good!"
Richard Feynman
Yet when I step out of my comfort zone, I realize that I've seen this before (experience can be handy at times). Many of you young-uns have never known a programming world where there was no polymorphism (although judging by some of the code that has caused me to say "WTF?", not as many as I would like). Before messages, virtual functions, and method calls there were the switches, cases, and if statements.
There was an "aha!" moment for me when I suddenly grokked polymorphism. When I understood that switch statements were junk. Maybe Bayesian inferences can change programming the same way that polymorphism changed programming.
I need to find out.
And now I'd like to quote the other Steve, the one who isn't a hacker and wasn't presenting at startup school (Psst! Steve! Do you want to sell coloured MP3 players for the rest of your life or do you want to change the world again?)
One more thing
For a very long time I've been carrying a conjecture around. Paul Graham supplied me with the framework for thinking about the conjecture:
Treating a startup idea as a question changes what you're looking for. If an idea is a blueprint, it has to be right. But if it's a question, it can be wrong, so long as it's wrong in a way that leads to more ideas.
Paul Graham, Ideas for Startups
The "Graham Question" is:
Can we predict the future of a software development project with objective observation?
Let's take a simple example, one that ran through my head this morning. I've worked for several companies that used issue tracking systems. These systems have had a little widget/enumeration for declaring the priority of an issue. I've also worked with Scrum-like teams that maintained priority with a master list or backlog.
You want to know which issues will be fixed/implemented/done by a certain date. What is the significance of the priority?
Well, this is management 101. Start with the number of hours available for development, then take the highest priority issues and estimate how much time is required to do them. Your prediction is that the team will do as many as possible of the highest priority items in the time available.
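In code, that management-101 prediction is nothing more than a greedy loop over the backlog (a sketch with an invented backlog and invented estimates):

```python
# Invented backlog: (issue, priority, estimated_hours); lower number = higher priority.
backlog = [
    ("login page crash", 1, 16),
    ("export to CSV", 2, 24),
    ("dark mode", 3, 40),
    ("rename a button", 4, 2),
]

def predict_done(backlog, hours_available):
    """Predict what ships: take the highest-priority items that fit in the hours."""
    done = []
    for issue, priority, estimate in sorted(backlog, key=lambda item: item[1]):
        if estimate <= hours_available:
            done.append(issue)
            hours_available -= estimate
    return done

print(predict_done(backlog, 60))
# ['login page crash', 'export to CSV', 'rename a button']
```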
All well and good, but in reality "Spolsky" isn't in the dictionary and neither is "Braithwaite." For that matter, neither is "p.s.", and why should I have to click "add to dictionary" after the program has watched me type this thing and not correct it hundreds of times?
And lo, if we watch an actual software project we see that over time priorities change and new issues are added to the mix and sometimes low priority items are done first for some reason, and humans just can't seem to follow the damn plan, but software emerges from the other end anyways.
It's easy to say, "your old-fashioned priority is no damn good." But maybe we are not young enough to know everything. Maybe we should ask a question: "what good is the priority if you can't construct a nice if statement around it?"
Maybe this is like Spolsky and Braithwaite and Error IDs and Help Text. Maybe there is no formula up front but we can watch what people do hundreds of times.
Maybe Thomas Bayes knows the significance of the priority.
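Here is a hedged sketch of what asking Thomas Bayes might mean (the project history is invented): instead of assuming a formula up front, count what actually happened to past issues at each priority and report the observed odds.

```python
from collections import Counter, defaultdict

# Invented history of past issues: (priority, whether it actually shipped by its date).
history = [
    (1, True), (1, True), (1, False), (1, True),
    (2, True), (2, False), (2, False),
    (3, False), (3, True), (3, False), (3, False),
]

def estimate_completion(history):
    """P(done by the date | priority), estimated from what the team actually did."""
    done = defaultdict(int)
    total = Counter()
    for priority, shipped in history:
        total[priority] += 1
        done[priority] += shipped
    return {priority: done[priority] / total[priority] for priority in total}

for priority, probability in sorted(estimate_completion(history).items()):
    print(f"priority {priority}: {probability:.0%} chance of being done")
# priority 1: 75%, priority 2: 33%, priority 3: 25%
```

With real history the estimate would of course be conditioned on much more than a single number, but even this toy version answers the question by watching what people do rather than by constructing a nice if statement.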