I can probably use this like Twitter and relieve myself of the last social media I am addicted to.
Honestly social media is really just centralized blogs with RSS and a share button.
"If a design is taking too long then it's the wrong design."
– Elon Musk, Everyday Astronaut interview
This is by far the topic I get the most pushback on from engineers, even more than "Software Engineering is Writing." They always want to go slower, more carefully. The idea of doing what they're doing now but faster seems crazy. It is crazy. You can't go faster doing what you're doing now. Instead, I want us to change what we're doing so that we go faster.
Why should we be trying to go faster? In short, because we’re professionals.
| Activity | Professional | Hobbyist |
| --- | --- | --- |
| Driving | 200 MPH | 65 MPH |
| Football QB decision to pass or run | 3 sec | 10 sec |
| Cleaning a 1BR apartment | 4 hr | 1 day |
| Baseball fastball | 90 MPH | 50 MPH |
| Programming: implementing a single screen of a prototype UI | 1 month | 2–3 days |
An almost universal difference between a professional and a hobbyist in every job is speed. Professionals are faster than amateurs.
Most of the above data comes from the book "Wait: The Art and Science of Delay" by Frank Partnoy. The title is an interesting choice, as the main argument I take from the book is: focus on speed. Speed allows you to wait longer before taking action, and waiting longer allows you to gather more information, which increases the likelihood of a better outcome. It reminds me of the Special Forces phrase: "Slow is smooth and smooth is fast."
The last line in the table comes from a friend at Google. They told me this is how long it takes them to make a single prototype screen for a UI. I nearly spat out my coffee. I chose this example because the majority of programmers I talk with are Googlephiliacs who think everything Google does is "the way" and should be emulated always, everywhere.
If you want a metric, the one I’m currently using is: Working software, in the hands of customers, as quickly as possible with as low a defect rate as possible.
Another valid approach might be something like: minimizing median time to feedback.
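As a concrete illustration, here is a minimal C# sketch of computing that metric. The `Shipment` record and its field names are invented for this example, not taken from any real system:

```csharp
using System;
using System.Linq;

// Hypothetical shape: when a change reached customers, and when the
// first piece of customer feedback about it arrived.
record Shipment(DateTime ShippedAt, DateTime FirstFeedbackAt);

static class FeedbackMetrics
{
    // Median hours from "in customers' hands" to "we learned something".
    // This is the number the post argues we should be driving down.
    public static double MedianHoursToFeedback(Shipment[] shipments)
    {
        if (shipments.Length == 0)
            throw new ArgumentException("need at least one shipment");

        var hours = shipments
            .Select(s => (s.FirstFeedbackAt - s.ShippedAt).TotalHours)
            .OrderBy(h => h)
            .ToArray();

        int mid = hours.Length / 2;
        return hours.Length % 2 == 1
            ? hours[mid]
            : (hours[mid - 1] + hours[mid]) / 2.0;
    }
}
```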
Won't going faster hurt quality? No, I don't believe so.
It might at first, as people who are used to driving their car straight until they hit a guardrail realize they might actually have to learn how to drive. But on the whole, I believe the quality of your programming skills increases faster with the quantity of finished products than through slow, reasoned debate. If that's true, then the faster you build products, the faster your skills develop.
The following story illustrates the impact of speed on quality:
The ceramics teacher announced on opening day that he was dividing the class into two groups. All those on the left side of the studio, he said, would be graded solely on the quantity of work they produced, all those on the right solely on its quality. His procedure was simple: on the final day of class he would bring in his bathroom scales and weigh the work of the “quantity” group: fifty pounds of pots rated an “A”, forty pounds a “B”, and so on. Those being graded on “quality”, however, needed to produce only one pot – albeit a perfect one – to get an “A”. Well, came grading time and a curious fact emerged: the works of highest quality were all produced by the group being graded for quantity. It seems that while the “quantity” group was busily churning out piles of work – and learning from their mistakes – the “quality” group had sat theorizing about perfection, and in the end had little more to show for their efforts than grandiose theories and a pile of dead clay.
“Art & Fear” by David Bayles & Ted Orland
John Carmack approves in principle:
Similar to above but more science:
“Ok, I’m not gonna click on any of that, but how would you propose we accomplish this?” I hear you say. Well, I have some thoughts…
Stop trying to predict the future ("future-proofing" systems) and just solve the problem you're trying to solve with the minimal amount of code necessary. The current approach hurts us in the following ways:
*gasp*
I use the Rule of 3: I will copy and paste the same code twice. Once there is a third concrete need for that same code, I then feel like I have enough information that the likelihood of creating the appropriate abstraction is high.
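Here's a tiny, entirely hypothetical C# sketch of the rule in action (the email-validation domain is invented for illustration):

```csharp
static class RuleOfThree
{
    // First need: orders. Just write the code.
    public static bool ValidOrderEmail(string e) =>
        e.Contains('@') && e.Length <= 254;

    // Second need: signups. Copy and paste. Two call sites are not yet
    // enough information to know what the right abstraction looks like.
    public static bool ValidSignupEmail(string e) =>
        e.Contains('@') && e.Length <= 254;

    // Third concrete need (say, invoices): three real call sites now tell
    // us what the shared code actually has to do, so extract it and point
    // all three callers here.
    public static bool IsValidEmail(string e) =>
        e.Contains('@') && e.Length <= 254;
}
```

The point is the timing: the duplication is left alone until the third caller shows what the abstraction actually needs to be.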
Two years ago I quit my job and decided to strike out on my own. Since I was programming for myself, I thought there was no harm in using a programming language I love and had used to great success in production elsewhere, so I wrote everything in F#. I truly believe that F# is the best language on the .NET platform, with the superior programming model for eliminating entire categories of bugs (no null, for example). I believed the strong type checker would help me move more quickly than in C#.
I was wrong, but not for the reason I originally thought. For the last 6 months I've been back on C# for one reason only: tooling. All of the tooling around C# really increases your productivity with the language, despite some of the issues around null, verbosity around types, etc. In F# I had to write simple database migrations myself, or otherwise jump through hoops to keep a single C# project linked to my F# project so that EF Core would do its magic. The same thing is true for writing mobile apps with Xamarin. The tooling for C# "just works," while for F# there are a lot of gotchas and edge cases you need to figure out.
The time it takes to go from idea to working code, whether in the browser or on my mobile device, is shorter with C# because the tooling helps me write correct, robust code.
So this has changed my language stack-rank: I've found it's preferable to use the languages with the best tools. Modern IDEs, debuggers, profilers, and static analysis tools all help immensely to speed up the time it takes to write correct, robust code. If you're not using them, you don't even know what you're missing. I like Vim as much as the next person, but it doesn't hold a candle to Visual Studio or IntelliJ in terms of productivity.
Yet in almost every large project I've worked on, no matter the language, someone has broken the tooling. Usually the culprit is the build: they've put things in non-standard directories and then written their own build script to re-assemble everything in the correct place for CI. I wish they had just taken the time to get the build working for everyday developers in the standard tool.
I've heard arguments from the TDD crowd that although testing requires more upfront work, it actually makes you go faster. This is not borne out by the evidence in "Making Software: What Really Works, and Why We Believe It" by Andy Oram and Greg Wilson, which is as close as I've found to someone using the scientific method to measure software engineering processes. You can quibble about the methodologies of the underlying papers, but it's still peer-reviewed, published data, which is better than any blog post, including this one.
Testing absolutely helps when a project is so large you can’t keep it all in your head, but it’s not an unalloyed good. Testing makes permanent. Maybe that’s what you want, but maybe it’s not. How can we tell the difference? A maturity model of software is useful here.
I think it’s obvious on its face that not all software is created equal. Software that crashes is probably fine for a prototype of an Instagram clone – but not so much for avionics or automobiles. But the thing people seem to struggle with is the idea that different code within the same organization can be at different levels of maturity at the same time.
It doesn't make much sense to enforce all of the quality gates that business-critical software uses on prototypes or experiments. The first iteration of a new feature should be shipped to customers at prototype quality until such time as it is proven valuable to them. It is proven when people are actually using it, or better, paying for it. Labels like "Beta" are helpful for signalling to customers that this is an experiment, allowing them to decide if they only want the stable stuff.
This minimizes mean time to feedback, and if you've built the wrong thing, you haven't wasted 6 months making it rock solid just to have it fail in the marketplace; you've learned it wasn't viable far more cheaply. I've learned the hard way in my career that Field of Dreams development ("If you build it, they will come") is NOT true.
The following three sections outline example properties of software at different maturity stages, from least mature to most mature.
Key properties include:
This software is exactly the same as the prototype software upon first release, except that it is made available to a select few external customers. The software then incrementally matures towards critical software maturity as it is more and more proven out. Properties include:
The key here is to apply this model not to the entire organization, but project by project or, even better, feature by feature.
This is not a new concept, although the above definitions are my own and far from exhaustive. The idea is actually used by the US Government, where it's called the Capability Maturity Model (CMM). There are several versions of CMM, and I am sure others have created many more.
If you have one you like, think I’ve left out your favorite property, or have any other thoughts let me know in the comments!
*You should always follow all local laws and regulations… maybe. I mean, Uber and Airbnb seem to have had their businesses impacted very little by flouting local laws, but what do I know? IANAL.
Recently the Department of Justice announced it was going to start investigating the large tech companies for antitrust violations, potentially leading to a breakup of big tech. It's not surprising; large tech companies love to tell the story that they are neutral platforms or common carriers and thus not responsible for the content others upload. This strains credulity. How can one be a neutral platform and have a recommendation engine that chooses what subset of the data to show? These services could and should be separate: the platform that holds and distributes the data should serve 3rd parties that compete on the best way to display that data. This model mirrors how regulators decoupled power generation from transmission to protect consumers.
Almost all of the problems with social media, from the perspective of its users, come from the recommendation engines and algorithmic feeds that amp up controversy in the name of engagement. Those engines work for the advertisers — the real customers — not the users. It’s quite possible we don’t have the right technology or the right incentives to make a single technology service that works for everybody. Even if we do, it seems unlikely that a single company will get it right. In fact, as controversy after controversy makes the news, there seems to be ample evidence that none of them have gotten their algorithms right, based on the antisocial outcomes we are seeing.
In a decoupled model it wouldn't be up to a single company to get the algorithms right. Instead, data scientists at many companies could create competing algorithms, and users could then pick and choose the view they want. This model has proven effective in other markets. The Associated Press, for example, provides a stream of news stories, and news organizations then choose which ones are best for their respective audiences.
In this model a company like Facebook would be split into two companies. One company would collect and, for a reasonable fee, distribute posts in chronological order to any company. The second company could then display those posts however they feel is best for their audience.
Twitter is already closest to this model, because it licenses its data through the "fire hose."
The fire hose is a service where, every time someone tweets, Twitter passes the tweet along, unfiltered, to every company that subscribes. However, the terms of service and API updates prevent companies from using the fire hose to create competing views of Twitter content. That restriction is something the DOJ could make illegal, just as it did with Microsoft in the early 2000s, when it forced Microsoft to make its private APIs public so that 3rd parties (like Netscape) could compete on an even playing field.
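To make the decoupled model concrete, here is a minimal C# sketch. Everything in it is an assumption for illustration: the `Post` record, the `IFeedAlgorithm` interface, and the idea that the fire hose arrives as a plain chronological sequence; no real platform API looks like this.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A post as delivered by a hypothetical fire hose: every item,
// unfiltered, in the order it was created.
record Post(string Author, string Text, DateTime PostedAt);

// The decoupling point: the platform delivers the raw stream, and
// third parties compete on how to rank and display it.
interface IFeedAlgorithm
{
    IEnumerable<Post> Rank(IEnumerable<Post> firehose);
}

// One competing view: strictly reverse-chronological, no engagement bait.
class ChronologicalFeed : IFeedAlgorithm
{
    public IEnumerable<Post> Rank(IEnumerable<Post> firehose) =>
        firehose.OrderByDescending(p => p.PostedAt);
}

// Another view: only accounts you follow, oldest first, so nothing is
// buried. Users pick whichever vendor's algorithm they prefer.
class FollowedOnlyFeed : IFeedAlgorithm
{
    private readonly HashSet<string> _follows;
    public FollowedOnlyFeed(IEnumerable<string> follows) =>
        _follows = new HashSet<string>(follows);

    public IEnumerable<Post> Rank(IEnumerable<Post> firehose) =>
        firehose.Where(p => _follows.Contains(p.Author))
                .OrderBy(p => p.PostedAt);
}
```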
If Facebook, Instagram, and YouTube were forced to offer a service like the Twitter fire hose, one can imagine a whole slew of innovative new 3rd-party services, such as a "life stream" that aggregates updates from all the people you follow across all the platforms in one place. Parents could subscribe to a kid-friendly version of YouTube, as a paid subscription, that is not trying to get you to spend more time on the site by hacking your dopamine system; such a specialized service could help parents struggling to set up screen-time boundaries for their children. Companies looking for competitive differentiation could even extend the platforms to include things users want but large tech seems deaf to, such as an edit button for tweets.
This model can generate plenty of revenue for both platform and providers. ConEd and the AP both use this model. Cable companies today make money hand over fist selling access to their pipes.
This is not to say that large tech companies shouldn’t produce their own algorithms. Rather, they should not be the only companies allowed to produce them. We need competition to bring the best services to consumers.
To be sure, some may feel that social media, algorithms included, already works well, based on some of the positive benefits we have seen. No one doubts the role it played in the Arab Spring. It is incredible when someone has a question about rockets and both Elon Musk and John Carmack respond! But neither that interaction nor the Arab Spring depended on an algorithm to facilitate it.
We don’t have to throw out the baby with the bathwater. We can keep what’s great about these platforms while tempering the parts that induce anti-social behavior, through competing algorithms and user choice.
I decided to publish this op-ed here after Jack Dorsey wrote a tweet thread about opening up the platform. I thought this would be a good time to post. Looks like @Jack has been reading my unpublished work from this summer 😉
I think the people who liked the above tweet think I'm being funny by choosing a cake instead of a pie; that I'm suggesting cake is superior to pie. But I'm actually being serious: cheesecake is a pie, not a cake.
When you make pumpkin pie, you make the crust, then the filling; you pour the filling into the crust and then bake it. When you make cheesecake, you make the crust, then the filling; you pour the filling into the crust and bake it. With cake, you typically mix everything in a big bowl, pour it into a pan, and bake it as is: no crust, no filling. You see, just because cheesecake is called cake doesn't mean it actually is cake.
Words matter. They affect the way we think about things. When something is mislabeled, or we use an inappropriate metaphor, it can lead to lots of misconceptions down the road as people try to reason about it with the mental model in their head that the name invokes.
I think this has happened to Software Engineering. After 20 years in the industry, and having earned an engineering degree, I think I can now safely say that almost none of what we do is engineering. It’s writing, similar to any kind of writing that uses a team, like television writing. This mislabeling has caused a great deal of confusion for many, many years about why our industry doesn’t seem to work like any of the other engineering disciplines. But perhaps more importantly, the engineering label has limited our thinking about both the field itself and important things related to the field such as compensation models.
A personal hero of mine, John Carmack, might agree with this. In 2012, during one of his famous QuakeCon keynotes, he said:
“… It’s nice to think of myself as a scientist/engineer sort dealing in these things that are abstract or provable or objective… and in reality, in computer science just about the only thing that’s really science is when you’re talking about algorithms and optimization is an engineering task …
… But 90% of the programmers are doing programming work to make things happen, and when I start really looking at what's going on in all of these, there really is no science and engineering and objectivity to most of these tasks …
… we like to think [that] we can be smart engineers about this, that there are objective ways to make good software. But as I’ve been looking at this more and more, it’s been striking to me how much that really isn’t the case. That aside from these things that we can measure, measure and reproduce — which is the essence of science, to be able to measure something, reproduce it, make an estimation and test that — and we get that on optimization and algorithms. But everything else we do really has nothing to do with that, and it’s about social interactions between programmers or even between yourself over time …”
https://youtu.be/wt-iVFxgFWk?t=1821
I think that if I’m honest with myself, I agree with everything he’s saying, and actually this talk is the one that made me really start thinking about this.
I think looking at software as an act of both writing and translation would explain a few things, like why, as an industry, we're notoriously off when trying to estimate how long things will take, just as fiction authors are typically over deadline. I mean, how long would it take you to write a chapter of a book? How about a good chapter?
That to me sure feels the same as: How long will it take to implement feature X? What if we want it to be a ‘good’ feature?
What does it mean to write “good” code? What does it mean to write good prose? I’ve often felt that much of what people say are good coding practices are pretty arbitrary or a matter of style or personal preference. I felt the same way in English class when we talked about the rules for good writing.
“All writing takes the same amount of time. If you’ve thought a lot about what you want to say, it flows easily.”
– Jessica Hibbard, Head of Content and Community, Luminary Labs
The quote above reminds me of an argument I often hear from a certain class of programmer. They want people to stop and think more before they write code. Typically I hear this from the functional programming types, but I’ve heard it in other contexts.
Seeing software as language and writing would probably also make it more appealing to women. For decades, the number of women studying computer science was growing faster than the number of men. But in 1984 something changed: computer science was made to appear more mathematically based.
The parallels between writing and programming don’t stop there. When you write a book, all of the effort is upfront and then once it’s complete you can sell an unlimited number of copies. Does that sound familiar?
That model also applies to TV shows and movies. One thing all of those industries have in common? The writers get royalties.
I think software writers should get royalties too. I have code in Visual Studio from 2010 that is still shipping today, long after I left Microsoft. I've written software for traders to manage the risk on portfolios with more volume than any human could track themselves, and yet the traders received the huge bonuses and I got nothing. I wrote a good chunk of the order processing system for Jet.com, a system which processes millions of dollars in orders every day, and I didn't even get a choice about keeping my stock when the company was sold to Walmart. All of that code is still running, producing value for those companies, and, BONUS, they don't have to pay me anymore since I don't work there.
If software were seen as writing, then those who want to organize (Google employees?) could join a writers' guild rather than trying to unionize workplace by workplace.
What do you think? In what other ways is writing software more like writing other media? Let me know in the comments.
Update:
After a discussion over on reddit about this piece I’ve come up with even more ways in which software is like writing.
Update2:
Since people keep reading, I’ll keep writing or clarifying things as they come to me. Appreciate the comments!
- Software projects can be successful without good engineering, just like books can be successful without good writing.
- Following all the rules and best practices in software does not guarantee the success of a project any more than following all the rules of grammar will make your book successful. In fact, I've never seen a published paper that shows any correlation between the two; I'd love to read it if someone has one.
- New ideas implemented in software (Google PageRank, Uber, etc.) influence the real world as much as books like The Communist Manifesto or The Prince did. I'd be willing to wager Excel has fundamentally changed as many economies as The Communist Manifesto did.
- Video game developers seek publishers that do exactly what publishers in the book industry do.
- "Good" code from 1998 (when I started) looks very different from "good" code in 2019, just as "good" writing from 1886 looks very different from "good" writing in 2019.
- Some authors are better at coming up with eloquent ways to say things, just as some programmers are better at coming up with elegant solutions.
- Code reviews sure look a lot like sending your writing to an editor.
If the analogy to writing is closer than the analogy to engineering, then to become a better software writer/author one should look to the tools that writers use to become better writers rather than to other engineering disciplines.
Software being like writing would also explain (to me) why the many attempts I've seen to bring in rigorous, formal processes, like those in other engineering disciplines, have failed so consistently over the last 20 years. Software just doesn't need to be an engineering discipline in order to be successful. The usual explanation is that this is some new kind of engineering the world hasn't seen before. Between the competing theories, that software engineering is somehow a new engineering discipline which conveniently doesn't work like any other engineering discipline, and that it is not engineering at all, Occam's Razor dictates that the simpler "not actually engineering" theory is the better one until proven otherwise.
Now confirmed by science with brain scans and published in the ACM:
https://medicalxpress.com/news/2020-06-language-brain-scans-reveal-coding.html
I have searched high and low for a better way to use SSH key-based authentication than what you learn in the default Linux tutorials. Those tutorials would have you generate one key per machine/account and then, on every box you SSH into, add that key to the `authorized_keys` file.
What I want is a single key (that’s easy to rotate) that I can use on any machine quickly and easily to get up and running. This is because I reformat my computers a couple times a year and set them up fresh, and because I use all 3 major platforms for coding.
When you search for single-key SSH authentication, the most common results tell you to use a YubiKey. So I finally gave in and bought a couple of YubiKeys to see what all the fuss in the DevOps community was about. I did manage to work through all the tutorials on using a YubiKey as a kind of secure single sign-on, where the YubiKey is your one true universal key for SSH login. After hours and hours of setting up special daemons for forwarding SSH authentication to PGP, I've come to the following conclusion: YubiKeys suck.
They suffer from the same problems that all PGP and SSL tooling suffers from: archaic tools with WAY too many options, no explanation of what any of the options do, and absolutely NO guidance on whether flipping a particular set of switches makes you more or less secure.
There is a better way and I’m here to tell you about it today. The answer my friends, is Krypton.
Krypton is a phone app for iOS and Android that implements the U2F protocol and, in general, is a more sane approach to using a physical second factor (your phone) across all of the platforms I use: Mac, Linux, and Windows.
There are two modes:
For the first mode, you just add the Krypton plugin to your browser of choice and then pair your phone to that specific browser using a QR code, similar to how WhatsApp desktop works. After that you can use Krypton anywhere U2F is supported.
For Developer Mode: install the `kr` daemon/tool and the rest is easy peasy. You type `kr pair` into a terminal and a QR code appears in the console. Scan it with the Krypton app on your phone and you're all set. The `kr` tool takes care of setting up the SSH daemon as well as GitHub signing if you'd like. You still need to add an entry to `authorized_keys` on every computer you're going to SSH into, but this time, magically, it's the SAME key on every computer you pair Krypton with, without having to manually move things around or figure out where to store the key securely (it lives in secure storage on your phone).

To get your SSH public key, just type `kr me` and voila!
Krypton is focused on security, and as such the default settings ask you to confirm every login on your phone. I found that a little annoying, especially when running Terraform jobs (more on this later), but luckily they allow you to customize your paranoia level per host right from the app. I still have it ask me to confirm every 3 hours or so, but in between it doesn't bother me with additional prompts.
This is the first post in a series of posts about my home lab where I develop/prototype my applications. The next installment of this series is going to be how to use Terraform at home with your own setup that’s not a cloud provider.
I like listening to Cortex because they are thoughtful about how they do work and they always give me something to consider. They recently had a discussion about how New Years Resolutions are terrible and instead you should have themes for the year.
In the past I've become convinced that goals are a really crappy way to do things. You set a goal and are immediately in a mode of failure because you haven't achieved it yet. Then you hit the goal and feel great for a little while, but a day or two later you have to set another goal and be a failure again. You end up failing to meet your goals for much longer than you're succeeding at them. While goals may seem like a good way to ensure you have a growth mindset, in practice I think they have the opposite effect: once you've reached a goal, you stop doing the thing that allowed you to achieve it. (I did it! I ran a marathon, I can stop running now!)
Themes are better: they are a process, not an outcome. My theme for 2018 is the Year of Focus. With a theme I have a framework for making decisions that affects every choice I make. There's no "drink less" goal; instead I've cut back on alcohol because hangovers are anti-focus. I've started scheduling my days a little more so I can have long blocks of uninterrupted time to really get into flow, and I've tried to cut out other sources of distraction in my life.
Before I decided on the theme for 2018, in mid-to-late November I blocked Facebook and Twitter on all my devices. I did this after watching this video (https://www.youtube.com/watch?v=3E7hkPZ-HTk), which made me realize that I would not really be missing out on much, and that I might get back a few things I had lost: some time, the ability to focus, and happiness.
The first week felt weird; I didn't know how to get news anymore. When I felt bored, my instinct was to type Facebook or Twitter into the URL bar (only to have it blocked by the software). It forced me to think of other things to do and other ways I could be spending my time, including ways of actually seeing my friends in real life.
It worked, though: I felt better, less distracted. It was amazing to me, however, that 6 weeks later the muscle memory of typing Facebook into a browser had not gone away, and I was still doing it even though I had not been on the service in a long time.
Within 3 days Facebook realized something was up and started sending me click-bait emails: "So-and-so commented on something," "Someone has posted for the first time in a long time." After 3 weeks, they started texting me! It was like an ex who drunk-dials you.
As I type this I’ve begun the process of downloading all my Facebook data (photos, posts, etc.) and after that’s complete I think I’ll be removing my account permanently next week.
If you’re thinking about doing the same, I’d say that Apple News is a great platform for getting news if you don’t want to hop from site to site.
* My archive just completed: 96 MB for all of my Facebook activity, including photos, over the last 10 years.
A colleague of mine sent me this link from Hacker News today that explains options ownership. I think it's great! More people need to understand this stuff, and I'm very happy it's out there. It does remind me, however, of how complicated everything has gotten, and that always makes me ask: why?
My colleague had a great observation:
I used to grant people options and I thought I understood most of it, but it seems like, in an effort to get billion-dollar valuations, later-stage companies have added all sorts of complicated conditions.
When I was younger (so much younger than today) – I used to think that complexity was a sign of how smart everyone was, and I had a bit of imposter syndrome thinking I was not smart enough to be in the real world because I didn’t understand all of this stuff.
Now that I am older (perhaps wiser? TBD), I am more confident in my intelligence, and what I see instead is people hiding in complexity, specifically in levels of abstraction. Dan Ariely ran a test that I think is illustrative, if not conclusive. He put six $1 bills on a plate in a shared fridge in a college dorm. A week later he came back and they were all still there. He then put six Cokes in the fridge, and within a few hours they were all gone! No one would steal money (that would be wrong!), but move up one level of abstraction (a Coke = $1) and people suddenly have fewer qualms about taking the Cokes.
I see the same thing in finance. People trade all these derivatives, each one another layer of abstraction away from money. Options are a layer of abstraction above shares (they abstract time), shares are an abstraction over ownership, ownership is an abstraction over assets, and assets are an abstraction over money, which is itself an abstraction. It's a long way between that awesome trade you made and the person paying real money for the mortgage you just screwed them over on. That distance enables traders to do shady things they would never do to a real person; they would never physically reach into someone's wallet and take their money.
I see the same with private companies and stock, stock options, etc. I think people just assume they are not smart enough to understand; they just hear stories that stock is what you want, that's how you get rich! But I agree with my colleague above: people are using levels of abstraction to confuse others so they can play games.
Once upon a time a stock paid dividends because, as a partial owner of the company, you were entitled to a partial share of the profits; the dividend was that share of the profit. You could then turn around and use that dividend to buy more shares, and the next time it was paid you'd get even more money. This is essentially the proof Benjamin Graham used in The Intelligent Investor to show that stocks would outstrip bonds, and it kicked off the era of value investing, where the price a company should trade at was calculated from the profitability of the company and the number of shares outstanding. That held until the 1980s, when (you can blame Microsoft for this) the idea of "growth" companies took hold: companies that don't pay dividends but instead pay back their shareholders through growth in the value of the share price. It's worth more because more people want it? That is some people's definition of value, but I can't use it to make a prediction about what the share price should be, so to me it's useless.
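A quick back-of-the-envelope sketch of that compounding, with made-up numbers (a flat $50 share price and a $2 annual dividend, fully reinvested):

```csharp
using System;

class DividendReinvestment
{
    static void Main()
    {
        double shares = 100;                // start with 100 shares
        const double price = 50;            // assume a flat $50 share price
        const double dividendPerShare = 2;  // $2 paid per share, per year

        // Reinvest every dividend into more shares for 20 years.
        for (int year = 1; year <= 20; year++)
        {
            double dividend = shares * dividendPerShare;
            shares += dividend / price;
        }

        // Prints roughly 219.1: the dividends themselves start earning
        // dividends, which is the compounding engine behind value investing.
        Console.WriteLine($"Shares after 20 years: {shares:F1}");
    }
}
```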
The following are a set of thoughts I’ve had watching a company grow from 60 people to 600 people – it’s not a complete thesis, but I wanted to put it out there to start getting feedback from people.
Processes are sets of rules, in the same way that computer programs are sets of rules. With programs, it's the computer executing the rules, and it has no choice but to take what you wrote literally. With processes, it's people executing them. Both, however, have bugs: unexpected outcomes of the rules. With humans you get some leeway, because you can explain what you meant, or say you're implementing the "intent" rather than what's actually written. But you will run into people who execute policies like they are computer programs and follow exactly what's written, as opposed to what was intended.
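As a toy illustration (the policy and numbers are invented), here's a "policy bug" expressed as code, executed as literally as a computer would:

```csharp
static class ExpensePolicy
{
    // The written rule, taken literally: "Reimburse only meals under $50."
    public static bool Reimbursable(string category, decimal amount) =>
        category == "meal" && amount < 50m;

    // Bugs, i.e. unexpected outcomes of the literal reading:
    //   Reimbursable("meal", 50.00m) -> false  (a $50.00 team lunch dies
    //     on a boundary nobody thought about)
    //   Reimbursable("taxi", 30.00m) -> false  (the rule's author never
    //     meant to forbid a ride home after a late deployment)
}
```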
Making matters worse, policies often have very long feedback loops before the bugs are detected and can be addressed. As such I think we should try and avoid process until absolutely necessary. Too many people want to rush to create a process every time someone makes a mistake so that mistake can never happen again, without regard for the bugs that can be introduced as a result.
Why create rules at all? I think this is the same as asking why society has laws, and I think the reason is that you can't trust people to do the right thing. More specifically, there's a limit to the number of people whose reputation you can keep in your head at any one time. This limit is called the Dunbar number, named after the social scientist who studied why tribes split in two. He found that after roughly 120 relationships, we can no longer keep track of who owes whom money, who is trustworthy, who likes to short-change people, etc. Rules are an abstraction over behavior: if we all agree to follow the rules, we can use them as a shortcut for knowing someone's reputation. The reason I can go to the store and buy a bacon, egg, and cheese from a complete stranger is that I trust the rules for proper food storage and preparation.
I think that growing organizations don't need any policies until they reach this 120-person size. After that, we start seeing faces around the office that we don't recognize, and we hear about projects being led by people we have never heard of: people whose reputations we don't know or trust.
So what do we do? Policies (and laws) are really useful abstractions. They allow us to trust each other without actually knowing each individual's reputation, the same way I trust the food cart guy not to poison me because of the FDA. However, policies, like all rules, have bugs.
The #1 predictor of bugs in code is the number of lines of code. Each line is a little rule, and the more rules you have the more likely you are to introduce bugs. Since policies are rules that often contain more rules (the entire workflow is a set of rules to follow), the more policies you have (or the more complicated they are) the more likely they are to introduce bugs, and so the goal is to have the smallest set of effective policies possible.
How do you accomplish this?
Developers find bugs by compiling and executing their code and seeing that it does not do quite what they expected. It's a tight feedback loop that allows them to identify and resolve bugs quickly. What we need for policies is a similar feedback loop, and the easiest way to get one is with this 1 weird trick:
Make people feel the consequences of their decisions.
That's it. OK, maybe the golden rule is nice too: "do unto others as you would have them do unto you" is probably always a good rule.
What does this mean? It means no one may make a rule for someone else that they themselves don't have to follow. The reason is so they get immediate feedback on both the good and the bad of the rule and can make adjustments accordingly. This is remarkably difficult in practice. People really, really don't like feeling the pain of their decisions, and we set up all sorts of elaborate systems to protect ourselves from getting that feedback. I don't think anyone is making other people's lives hell on purpose; we do this almost subconsciously.
Paul Graham has decided to take up the old torch of more H1B immigration because "there are not enough great programmers". In the second paragraph he says that people who disagree with him are "anti-immigration" people who don't understand the difference between good and great programmers.
I'm all for completely open immigration; let people who want to work do so wherever they'd like. However, I am tired of hearing the false rationale that "there are not enough good programmers". All I'm asking is that people who make this argument not base it on provably false assumptions. Make an economic argument for completely open borders. Talk about lifting the employer restrictions on H1B visas. But when you do it the way Paul has, it's completely transparent that what you want is NOT open immigration: what you want are cheaper programmers who can't leave your company when you abuse them or find a higher-paying job elsewhere. You want indentured servants, and that's unethical and gross to me.
Is it really too much to ask that people base their opinions on evidence (data rather than anecdote)? The problem is the evidence doesn’t support the “not enough great programmers” claim:
A great meta-analysis type article that looks at several studies with links to each and a description of the pros and cons of the data: http://spectrum.ieee.org/at-work/education/the-stem-crisis-is-a-myth
The most recent raw data I’ve seen on the subject: http://www.epi.org/publication/bp359-guestworkers-high-skill-labor-market-analysis/
But while we're talking about immigration, I've always wondered why it's so important that the developers be great. What about great business people? Where's the call for H1B CEOs? Why is the blame for failed startups put on not finding enough *great* developers, as opposed to a mediocre business idea failing in the market?
Why do I never hear this argument for immigration? Why is it only STEM?
Eric Sink is disturbed by the tone of people's reactions, but I think it's perfectly reasonable for people to be upset when someone starts off by accusing anyone who disagrees with him of being anti-immigration, or by questioning their knowledge.
I saw this tweet today
I’ve been confused for a long time about why this isn’t how modern startups are run. This is the exact model I had in my head in 2001 when I wanted to start my own business, and for every startup idea I’ve had. However, I’ve seen so many multi-billion dollar valuations of companies that essentially have no revenue that I’m starting to wonder…
Is this a class thing?
I've gotten to know a few rich people in NY, and I can tell you that none of them thinks this way. I can't tell: are they rich because they see the world differently from other people (I certainly see the world differently than my parents do, and I have a lot more money), or are these people crazy, excited about no-revenue business models because they don't have to worry about making money?
It is difficult to square my view of the world (businesses should make money) with the data I'm getting about zero-revenue businesses being valued in the billions. I've heard it put this way (mostly by Felix Salmon): having zero revenue is great because it lets you sell the dream: "Just imagine how much money we'll make once we monetize." Whereas having a single dollar of revenue changes the conversation to "Why do you only have $1?", and then the dream is, *poof*, gone when it's confronted by actual data. This makes me think it's about duping people, not about creating value at all. :-/
Thoughts?