As engineers we are often given a list of requirements from someone else. We just accept them and start trying to design. We trust that other people know what they’re doing, in part because for our entire educational careers we could never question the premise of a problem. The teacher says “Solve this” and you solve it – you don’t get to say “This is a dumb question” (unless you’re fine with failing, as I was with Prof. Dempsey in Calc 3).
Musk goes on to say that in order to allow people to question the requirements, there’s another rule that goes along with this one.
1.a All requirements must have a person associated with them, not a department.
You can have a reasoned argument with a person but try arguing with a “department” and you’ll soon find yourself in a Dilbert Cartoon.
2. Delete the part or process
If you are not occasionally (say 10% of the time) adding things back in, then you aren’t deleting enough.
3. Simplify or Optimize
The most common error of a smart engineer is to optimize a thing that should not exist.
Once you have fewer dumb requirements, and you’ve eliminated as much as you can from the design, only then should you try to optimize what’s left. These first three rules are really trying to get at one problem: really smart engineers optimizing the system they’ve been given rather than thinking about things from base principles and then optimizing. We tend to jump straight to optimization.
Only once all the other steps are complete should we try to automate a process. Doing it this way unlocks a few really nice properties. Automation takes longer to set up and get working, but it pays dividends in the future. However, those dividends only come if you’ve automated the right thing.
One of the things I find the most difficult to teach is the discipline to solve exactly the problem that is in front of you, and only that problem. Solve it in the simplest way you can think of. Be confident that in the future if new requirements arise, such as improved performance, you’ll be able to make a “better” implementation to account for the new requirements. Then move on to the next real problem.
I think this is kind of like the joke that if you ask a lawyer if they know the time, they’ll say “Yes”.
Another way to say this might be: Solve real problems, not imaginary ones. It can be difficult to tell the difference. Our brains are equipped with a simulation engine that allows us to imagine things that aren’t really there. Imagine a sparrow with orange stripes. If you don’t suffer from aphantasia then you likely saw exactly what was described. What you might not have noticed is that while you saw the bird, the real world blanked out. The simulation engine takes over our real senses to run its simulation. You can imagine something scary and you will be scared even though it is not real.1 In this same way the problems you think of can feel real. This is an illusion and the way to combat that is to say you’ll only work on problems that have real evidence, a real example, an actual data set, not imagined scenarios.
Ask yourself: “Is this requirement real or is it just a thought?“
“Yes, but you only have to implement it once! So you should take the extra time to do a good job!”
That assumes you’ve correctly predicted the future and the code is used exactly the way you thought it would be. If you have this ability and care at all about money, you should be playing the stock market, not writing code for other people. I have learned the hard way that I do not. I’ve implemented complex solutions that took a long time to write, only to discover that some requirement I didn’t think of has broken them completely, effectively requiring a redesign to handle the new scenario.
This is how I code. I work this way because I’ve discovered that I’m not particularly good at predicting the future. I can imagine all kinds of problems that are not real; they’re just thoughts. Trying to account for all of these imagined problems leads to very complex solutions. Complex solutions are difficult to reason about, difficult to implement correctly, and take more time to implement than simpler or more naive solutions. They are also more difficult to debug and fragile to changing requirements (the ones you didn’t imagine).
Practically, what I’ve found is that sometimes the naive implementation is all you need. It’s sufficient for the problem sizes you’re dealing with, and you saved yourself a ton of time by not implementing something more complicated with better performance. You were able to move on to add more value to the project quickly because you implemented an O(N²) algorithm in an hour instead of taking a week to write a much more complex solution. And it was fine. If it isn’t fine, then you’ll know that pretty quickly, and you’ll have actual evidence to justify taking the time to create a more complex implementation. You can point to the real data you’re trying to process and say “this is the reason I need more time”.
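To make that concrete, here’s a minimal sketch of what I mean (the Order type and the duplicate-check scenario are invented for illustration, not from any real project):

```csharp
using System.Collections.Generic;

public record Order(int CustomerId, decimal Total);

public static class OrderChecks
{
    // Deliberately naive O(N^2) scan: compare every pair. For the input
    // sizes most features actually see, this is written in an hour and
    // runs fast enough. If real data ever proves otherwise, swap in a
    // HashSet-based O(N) version -- and you'll have the evidence to
    // justify the extra time.
    public static List<Order> FindDuplicates(IReadOnlyList<Order> orders)
    {
        var duplicates = new List<Order>();
        for (int i = 0; i < orders.Count; i++)
            for (int j = i + 1; j < orders.Count; j++)
                if (orders[i] == orders[j] && !duplicates.Contains(orders[j]))
                    duplicates.Add(orders[j]);
        return duplicates;
    }
}
```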
I am starting to wonder if this is something that can even be taught. Is this a conclusion one needs to come to on their own, through their own experiences? After all, that is how I have come to this philosophy.
I have, on several occasions, had more work than I could possibly do under a tight deadline. The argument that deadlines are arbitrary falls on deaf ears when you are burning cash and have no product to sell. Working code today is better than perfect code tomorrow. What good is perfectly written code that nobody uses? Nobody uses it either because it wasn’t what they actually needed to solve their problem (you predicted wrong, but spent a long time implementing the wrong thing), or worse, your company no longer exists because it blew through all its cash before you could make a single sale.
While we could go back and forth about the benefits of strict adherence to any programming philosophy – Static vs Dynamic, Agile vs Big Design Up Front, Types vs Tests, DDD, CQRS, TDD, etc. – one thing I will say is that if you want to get better at this discipline, I think some amount of Test Driven Development is the best way to practice it.
All the information above was taken from https://www.nycgovparks.org/facilities/dogareas and encoded to the best of my abilities. I’ve tried to encode the rules for the specific park in the description when you click on the pin but if you know more specific details let me know in the comments and I’ll do my best to keep this map up to date. Enjoy!
I’ve started a serious mindfulness practice and it’s been really great. I’ve been practicing on and off for years, but my wife has never really given it a go. She, like many, doesn’t really feel anything or understand what it is they’re supposed to feel. What does success look like?
Different analogies work better for different people. The one that has worked particularly well for me has two parts.
Your thoughts can come and go, and you can observe them separately from interacting with them, like sitting on a park bench watching people walk past.
When you inevitably get lost in thought, you can bring yourself back gently to the practice, like taming a wild horse: slowly pulling it in closer as the thoughts circle around you.
This didn’t work so well for my wife, but recently we’ve stumbled on a breakthrough. Your brain is like Twitter.
You can imagine your mind like the Twitter timeline. You don’t really have much control over what tweets appear, save for a little bit of signal based on who you follow initially. Your thoughts are the same, you can control your environment and what information you’re exposed to – but which thoughts show up, you have little control over.
Further, like the Twitter timeline you have a choice. You can look at each Tweet and let it scroll past or you can engage with it. When you engage with a Tweet by clicking like, retweet or commenting that signals to the Twitter algorithm (which is optimizing for engagement, not what you’d actually like to see) that it should show you more like that. Again, the analogy holds for thoughts. Thoughts appear in your head and you can look at/observe them without engaging further. Or you can engage and your mind is likely to show you more of the same.
If something is making you angry, and you think you can stay angry without your mind constantly generating reasons why you’re perfectly justified to be angry, you are mistaken. If something on your timeline is making you angry… Ok, I think you get it.
I always pick on Twitter on this blog, but the same can be said for any social media with an algorithmically controlled timeline: Facebook, Instagram, etc.
I think ultimately this is exactly why these algorithms are so addictive. Because they mimic the natural thought process so incredibly well. Just like opioids are addictive because they mimic endorphins. They’re not the same, but tell that to your brain.
Once upon a time in 2011, I serendipitously stumbled on to a question on a stack exchange site I had never been to, and would never visit again. The title was so click-baity I just had to click!
It was another programmer asking for help: Why am I not motivated in this excellent situation? As it happened, I had been casually studying this exact topic for the past few years and thought I knew the answer.
The personal productivity stack exchange has since gone away but I want this question to live on here for two reasons:
First, it gets to the heart of something that I think is counterintuitive (or at least counter-narrative) about human motivation, and so the information might be useful to others searching for their own answers to this question.
Second, since I read the details of this question it has continued to haunt me. I think about this question all the time. Until David asked this I had never considered the business model proposed – and now it’s all I think about. It has led me to a deep-seated belief that programmers should be getting royalties, and backwards from that, that programming is more like writing than engineering.
Question
Why am I not motivated in this excellent situation?
I am working as a freelance contractor. For a long time I have been paid by the hour. This has worked fine and my motivation has never been a problem. Now, I have gotten a deal where I get half of any increased profits that are due to my actions/ideas etc. This is an excellent deal, which would most likely raise my income ten-fold.
My problem, however, is that the deal caused me to completely lose my motivation. Meaning that I have literally not done any meaningful work for them for about three months. Why is that? How could such an excellent deal cause me to lose motivation? I am after understanding this in depth so please only answer if you have specific references.
David
Answer
As silly as it sounds, getting paid more money actually DECREASES performance for non-trivial tasks (see references). This problem has been studied a lot in behavioral economics and psychology.
The problem is one of Extrinsic Motivation replacing Intrinsic Motivation.
Intrinsic motivation is your innate desire to do a good job. It’s what you feel when you’re working on something you want to be working on because you yourself want to see the project completed. Working on hobbies, or learning new non-work related skills are examples of intrinsic motivation.
Extrinsic motivation is when you receive something in exchange for your efforts. When you are paid to do some job, or when you receive a grade in school.
Intrinsic motivation is much more powerful; people who want to do a good job often produce much better work than people who are merely getting paid to accomplish a task. The terrible thing is that our brains are wired to replace intrinsic motivation with extrinsic motivation almost at the drop of a hat.
These people explain what’s happening far better than I can:
“But when you offer people money to do things that they wanted to do, anyway, they suffer from something called the Overjustification Effect. “I must be writing bug-free code because I like the money I get for it,” they think, and the extrinsic motivation displaces the intrinsic motivation. Since extrinsic motivation is a much weaker effect, the net result is that you’ve actually reduced their desire to do a good job. When you stop paying the bonus, or when they decide they don’t care that much about the money, they no longer think that they care about bug free code.”
I’ve tried to keep the formatting and wording of the original as accurate as possible. I never found out what happened to David, or how this deal worked out in the end. He left a very touching comment at the time: “Thank you, I will use this information everyday for the rest of my life”. I wish I knew how to get in touch with him to tell him how his question has also changed my fundamental understanding of software engineering, and thus my life too.
I can probably use this like Twitter and relieve myself of the last social media I am addicted to.
Honestly social media is really just centralized blogs with RSS and a share button.
This is by far the topic I get the most pushback from engineers about – even more than Software Engineering is Writing. They always want to go slower, more carefully. The idea of doing what they’re doing now but faster seems crazy. It is crazy. You can’t go faster doing what you’re doing now. Instead, I want us to change what we’re doing in order to go faster.
Why should we be trying to go faster? In short, because we’re professionals.
Professionals vs. Hobbyists
| Activity | Professional | Hobbyist |
| --- | --- | --- |
| Driving | 200 MPH | 65 MPH |
| Football QB decision to pass or run | 3 sec | 10 sec |
| Cleaning a 1br apartment | 4 hr | 1 day |
| Baseball fastball | 90 MPH | 50 MPH |
| Programming – implementing a single screen of a prototype UI | 1 month | 2-3 days |
An almost universal difference between a professional and a hobbyist in every job is speed. Professionals are faster than amateurs.
Most of the above data comes from the book “Wait: The Art and Science of Delay” by Frank Partnoy. The title is an interesting choice as the main argument I get out of the book is: Focus on speed. Speed allows you to wait longer to take action. Waiting longer allows you to gather more information, which increases the likelihood of a better outcome. It reminds me of the Special Forces phrase: “Slow is smooth and smooth is fast”.
The last line in the table comes from a friend at Google. They told me this is how long it takes them to make a single prototype screen for a UI. I nearly spit out my coffee. I chose this example because the majority of programmers I talk with are Googlephiliacs who think everything Google does is “the way” and should be emulated always, everywhere.
Speed of what?
If you want a metric, the one I’m currently using is: Working software, in the hands of customers, as quickly as possible with as low a defect rate as possible.
Another valid approach might be something like: minimizing median time to feedback.
But won’t quality suffer?
No, I don’t believe so.
It might at first, as people who are used to driving their car straight until they hit a guard rail realize they might actually have to learn how to drive. But on the whole, I believe the quality of your programming skills increases faster with the quantity of finished products than with slow, reasoned debate. If that’s true, then the faster you build products, the faster your skills develop.
The following story illustrates the impact of speed on quality:
The ceramics teacher announced on opening day that he was dividing the class into two groups. All those on the left side of the studio, he said, would be graded solely on the quantity of work they produced, all those on the right solely on its quality. His procedure was simple: on the final day of class he would bring in his bathroom scales and weigh the work of the “quantity” group: fifty pounds of pots rated an “A”, forty pounds a “B”, and so on. Those being graded on “quality”, however, needed to produce only one pot – albeit a perfect one – to get an “A”. Well, came grading time and a curious fact emerged: the works of highest quality were all produced by the group being graded for quantity. It seems that while the “quantity” group was busily churning out piles of work – and learning from their mistakes – the “quality” group had sat theorizing about perfection, and in the end had little more to show for their efforts than grandiose theories and a pile of dead clay.
“Ok, I’m not gonna click on any of that, but how would you propose we accomplish this?” I hear you say. Well, I have some thoughts…
Work on the problem in front of you
Stop trying to predict the future (“future proofing systems”) and just solve the problem you’re trying to solve with the minimal amount of code necessary. The current approach is hurting in the following ways:
We’re all bad at predicting the future, so what you write may never be necessary.
It takes more time to write the future proof version of the code.
Creating flexibility requires abstraction. This abstraction is implemented through indirection. Every layer of indirection adds cognitive load that is unnecessary and unrelated to the task (sometimes called incidental complexity).
Good abstractions are difficult to make correctly and require lots of examples to do well.
Repeat Yourself
A little bit of code duplication is better than maintaining 20+ abstract types.
I use the Rule of 3: I will copy and paste the same code twice. Once there is a third concrete need for that same code, I then feel like I have enough information that the likelihood of creating the appropriate abstraction is high.
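Here’s a hedged sketch of the rule in action (the header-formatting example is hypothetical):

```csharp
using System;

// Needs one and two: just copy and paste the line and move on.
//   string invoiceHeader = $"INVOICE {id} -- {date:yyyy-MM-dd}";
//   string receiptHeader = $"RECEIPT {id} -- {date:yyyy-MM-dd}";

// Need three arrives (refunds). Three concrete call sites now exist,
// so extracting the shared helper is low-risk: the right shape of the
// abstraction is visible instead of guessed.
public static class Headers
{
    public static string Format(string kind, int id, DateTime date) =>
        $"{kind.ToUpperInvariant()} {id} -- {date:yyyy-MM-dd}";
}
```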
Don’t Break Tools!
Two years ago I quit my job and decided to strike out on my own. Since I was programming for myself, I thought there was no harm in using a programming language I love and had used to great success in production elsewhere, and so I wrote everything in F#. I truly believe that F# is the best language on the .NET platform and has the superior programming model for eliminating entire categories of bugs (no null, for example). I believed the strong type checker would help me move more quickly than in C#.
I was wrong, but not for the reason I originally thought. For the last 6 months I’ve moved back to C# for only one reason: tooling. All of the tooling around C# really increases your productivity with the language, despite some of the issues around null, verbosity around types, etc. In F# I had to write simple database migrations myself, or otherwise jump through hoops to have a single C# project linked to my F# project so that EF Core would do the magic. The same is true for writing mobile apps with Xamarin. The tooling for C# ‘just works’, while for F# there are a lot of gotchas and edge cases that you need to figure out.
The time to go from idea to working code either in the browser or on my mobile device is shorter with C# because of the tooling aiding me in writing correct robust code.
So this has changed my language stack-rank. I’ve found it’s preferable to use the languages with the best tools. Modern IDEs, debuggers, profilers, and static analysis tools all help you immensely to speed up the time it takes to write correct, robust code. If you’re not using them, then you don’t even know what you’re missing. I like Vim as much as the next person, but it doesn’t hold a candle to Visual Studio or IntelliJ in terms of productivity.
Yet in almost every large project I’ve worked on, no matter the language, someone has broken the tooling. Usually the culprit is the build. They’ve put things in non-standard directories and then write their own build script to re-assemble everything in the correct place for CI… but I wish they had just taken the time to get it working for everyday developers in the standard tool.
Testing and Other Quality Control
I’ve heard arguments from the TDD crowd that although testing requires more upfront work, it actually makes you go faster. This is not borne out by the evidence in the book “Making Software: What Really Works, and Why We Believe It” by Andy Oram and Greg Wilson, which is as close as I’ve been able to find to someone using the scientific method to measure software engineering processes. You can quibble about the methodologies of the underlying papers, but it’s still peer-reviewed and published data, which is better than any blog post, including this one.
Testing absolutely helps when a project is so large you can’t keep it all in your head, but it’s not an unalloyed good. Testing makes permanent. Maybe that’s what you want, but maybe it’s not. How can we tell the difference? A maturity model of software is useful here.
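As a small, hypothetical illustration of “testing makes permanent” (xUnit-style, not from any real codebase):

```csharp
using Xunit;

public static class Pricing
{
    public static decimal ApplyDiscount(decimal price) => price * 0.9m;
}

public class PricingTests
{
    // This test nails down today's behavior: a flat 10% discount.
    // Great if the rule is settled; pure friction if next month's
    // experiment replaces it with tiered discounts, because the test
    // has to be unwritten along with the code it protects.
    [Fact]
    public void Discount_Is_A_Flat_Ten_Percent() =>
        Assert.Equal(90m, Pricing.ApplyDiscount(100m));
}
```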
A Maturity Model of Software
I think it’s obvious on its face that not all software is created equal. Software that crashes is probably fine for a prototype of an Instagram clone – but not so much for avionics or automobiles. But the thing people seem to struggle with is the idea that different code within the same organization can be at different levels of maturity at the same time.
It doesn’t make much sense to enforce all of the quality gates that business-critical software uses on prototypes or experiments. The first iteration of a new feature should be shipped to customers at prototype quality until such time as it is proven valuable. It is proven when people are actually using it or, better, paying for it. Labels like Beta are helpful for signalling to customers that this is an experiment, allowing them to decide if they only want the stable stuff.
This minimizes mean time to feedback, and if you’ve built the wrong thing you didn’t waste 6 months making it rock solid just to have it fail in the marketplace. You’ve learned it was not viable for way cheaper. I’ve learned the hard way in my career that Field of Dreams development – “If you build it, they will come” – is NOT true.
The following three sections outline example properties of software at different maturity stages, from least mature to most mature.
Prototype/Experimental Software
Key properties include:
Customers are internal initially
Speed to customers/mean time to feedback is the most important metric
This is more important than code coverage metrics
Few automated tests or build guarantees
It takes time to write tests, but even MORE time to unwrite them when you realize you’ve made the wrong thing. Make sure you’ve built the right thing first, then add automated tests once that is proven.
Doesn’t need to follow the style guide yet if the guidelines propose onerous requirements for uses of frameworks, OOP design patterns, etc.
If it’s easier to access the database without the ORM, then do that (see the sketch after this list)
No onerous secrets management or storage
If it is a feature worth keeping, then rotate the keys once that’s proven. Until then, check them into the repo. These keys are designed to be easy to rotate, so stop worrying about making sure they never get leaked and instead focus on making them easy to change, should that happen
No encrypted network traffic
TLS and secrets management combine to make the day-to-day lives of developers a living nightmare. They aren’t necessary for a feature that hasn’t shipped to customers yet, so stop shooting yourself in the foot by requiring every developer to jump through 20 hoops before they can get a single line of code working.
Performance not an issue
Get it working correctly first in the simplest way possible. You might not actually need more, and if you do you’ll figure that out too.
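For example, prototype-stage database access might skip the ORM entirely. A minimal sketch, where the connection string, table, and column names are placeholders for whatever the experiment needs:

```csharp
using System;
using Microsoft.Data.SqlClient;

// One parameterized query, no ORM, no migrations, no repository layer.
var connectionString = "Server=localhost;Database=Prototype;Integrated Security=true;";
using var conn = new SqlConnection(connectionString);
conn.Open();
using var cmd = new SqlCommand(
    "SELECT TOP 20 Name, CreatedAt FROM Signups WHERE Source = @src", conn);
cmd.Parameters.AddWithValue("@src", "beta-landing-page");
using var reader = cmd.ExecuteReader();
while (reader.Read())
    Console.WriteLine($"{reader.GetString(0)}  {reader.GetDateTime(1):u}");
```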
Beta Software
This software is exactly the same as the prototype software upon first release, except that it is made available to a select few external customers. The software then incrementally matures towards critical software maturity as it is more and more proven out. Properties include:
Performance optimizations for hot paths
Begin adding automated tests for features that have market fit
Remove or rewrite features that no one is using based on data
Implement stricter security
Accessibility concerns are addressed in this phase
Improved logging and error handling
Critical Software
This is the software that makes you money; when it goes down, it affects revenue and presumably your bonus/stock valuation. Properties include:
Strong automated testing with CI/CD gates
Blue/Green deployment controls
Logging
Secrets management
TLS encryption on network communications
PII Encrypted at rest*
Good Security hygiene
Accessible
The key here is to apply this model not to the entire organization, but project by project or, even better, feature by feature.
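To make “feature by feature” concrete, here is a minimal sketch of what per-feature maturity could look like in code (the enum, types, and policies are invented for illustration, not a standard):

```csharp
public enum Maturity { Prototype, Beta, Critical }

// Tag each feature with its current maturity so quality gates apply
// per feature rather than blanket-applying to the whole codebase.
public record Feature(string Name, Maturity Stage);

public static class QualityGates
{
    // Hypothetical CI policy: only mature features pay the full cost.
    public static bool RequiresFullTestSuite(Feature f) =>
        f.Stage == Maturity.Critical;

    public static bool RequiresTls(Feature f) =>
        f.Stage != Maturity.Prototype;

    public static bool RequiresSecretsManagement(Feature f) =>
        f.Stage != Maturity.Prototype;
}
```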
This is not a new concept, although the above definitions are my own and are far from exhaustive. The idea is actually used by the US Government, where it’s called the Capability Maturity Model (CMM). There are several versions of CMM, and I am sure others have created many more.
If you have one you like, think I’ve left out your favorite property, or have any other thoughts let me know in the comments!
*You should always follow all local laws and regulations… maybe. I mean, Uber and AirBnB seem to have had their businesses impacted very little by flouting local laws, but what do I know? IANAL.
Recently the Department of Justice announced they were going to start investigating the large tech companies for antitrust violations, potentially leading to breaking up big tech. It’s not surprising; large tech companies love to tell the story about how they are neutral platforms or common carriers and, thus, not responsible for the content others upload. This claim strains credulity. How can one be a neutral platform and have a recommendation engine that chooses what subset of the data to show? These services could and should be separate: the platform that holds and distributes the data should serve 3rd parties that compete on the best way to display that data. This model mirrors how regulators decoupled power generation and transmission to protect consumers.
Almost all of the problems with social media, from the perspective of its users, come from the recommendation engines and algorithmic feeds that amp up controversy in the name of engagement. Those engines work for the advertisers — the real customers — not the users. It’s quite possible we don’t have the right technology or the right incentives to make a single technology service that works for everybody. Even if we do, it seems unlikely that a single company will get it right. In fact, as controversy after controversy makes the news, there seems to be ample evidence that none of them have gotten their algorithms right, based on the antisocial outcomes we are seeing.
In a decoupled model it wouldn’t be up to a single company to get the algorithms right. Instead data scientists at many companies could create competing algorithms, and users could then pick and choose the view they want. This model has proven effective in other markets. The Associated Press, for example, provides a stream of news stories, and news organizations then choose which ones to publish that are best for their respective audiences.
In this model a company like Facebook would be split into two companies. One company would collect and, for a reasonable fee, distribute posts in chronological order to any company. The second company could then display those posts however they feel is best for their audience.
Twitter is already closest to this model, because they license their data through the “fire hose”.
The fire hose is a service where every time someone tweets, Twitter passes the tweet along unfiltered to the company that subscribed to the service. However, the terms of service and API updates prevent companies from using the fire hose in creating competing views of Twitter content. That’s something the DOJ could make illegal, just as they did with Microsoft in the early 2000s. They forced Microsoft to make its private APIs public so that 3rd parties (like Netscape) could compete on an even playing field.
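To sketch what a mandated fire hose might look like to a third party, here is an invented interface for illustration – this is not Twitter’s actual API:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// The platform side: emits every post, unfiltered and in order.
public record Post(string Author, string Text, DateTime PostedAt);

public interface IFireHose
{
    IAsyncEnumerable<Post> Subscribe();
}

// One competing third-party "view": no engagement ranking at all, just
// the raw chronological timeline. Another vendor might sell a
// kid-friendly filter instead -- that competition is the point.
public class ChronologicalViewer
{
    public async Task Display(IFireHose hose)
    {
        await foreach (var post in hose.Subscribe())
            Console.WriteLine($"{post.PostedAt:u} {post.Author}: {post.Text}");
    }
}
```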
If Facebook, Instagram and YouTube were forced to offer a service like the Twitter fire hose, one can imagine a whole slew of innovative new 3rd party services, such as a “life stream” that aggregates the updates from all the people you follow across all the platforms in one place. Parents could subscribe to a kid-friendly version of YouTube, as a paid subscription, that is not trying to get you to spend more time on the site by hacking your dopamine system. Such a specialized service could help parents struggling to set up screen time boundaries for their children. Companies looking for competitive differentiation could even extend the platform to include things that users want but large tech seems deaf to, such as an edit button for tweets.
This model can generate plenty of revenue for both the platforms and the providers. ConEd and the AP both use this model. Cable companies today make money hand over fist selling access to their pipes.
This is not to say that large tech companies shouldn’t produce their own algorithms. Rather, they should not be the only companies allowed to produce them. We need competition to bring the best services to consumers.
To be sure, some may feel that social media, and the algorithms included, already work well based on some of the positive benefits we have seen. No one doubts the role it played in the Arab Spring. It is incredible when someone has a question about rockets and both Elon Musk and John Carmack respond! But neither that interaction nor the Arab Spring depended on an algorithm to facilitate them.
We don’t have to throw out the baby with the bathwater. We can keep what’s great about these platforms while tempering the parts that induce anti-social behavior, through competing algorithms and user choice.
I decided to publish this op-ed here after Jack Dorsey wrote a tweet thread about opening up the platform. I thought this would be a good time to post. Looks like @Jack has been reading my unpublished work from this summer 😉
Second, the value of social media is shifting away from content hosting and removal, and towards recommendation algorithms directing one’s attention. Unfortunately, these algorithms are typically proprietary, and one can’t choose or build alternatives. Yet.
I think the people who liked the above tweet think I’m being funny by choosing a cake instead of a pie; that I’m suggesting that cake is superior to pie. But I’m actually being serious – cheesecake is a pie, not a cake.
When you make pumpkin pie, you make the crust, then the filling; you pour the filling into the crust and then bake it. When you make cheesecake you make the crust, and then the filling, then you pour the filling into the crust and bake it. With cake, you typically mix everything in a big bowl, pour it into a pan and bake it as is – no crust, no filling. You see, just because cheesecake is called cake doesn’t mean it actually is cake.
Words matter. They affect the way we think about things. When something is mislabeled, or we use an inappropriate metaphor, it can lead to lots of misconceptions down the road as people try to reason about it with the mental model in their head that the name invokes.
I think this has happened to Software Engineering. After 20 years in the industry, and having earned an engineering degree, I think I can now safely say that almost none of what we do is engineering. It’s writing, similar to any kind of writing that uses a team, like television writing. This mislabeling has caused a great deal of confusion for many, many years about why our industry doesn’t seem to work like any of the other engineering disciplines. But perhaps more importantly, the engineering label has limited our thinking about both the field itself and important things related to the field such as compensation models.
A personal hero, John Carmack, might agree with this. In 2012, during one of his infamous QuakeCon keynotes, he said:
“… It’s nice to think of myself as a scientist/engineer sort dealing in these things that are abstract or provable or objective… and in reality, in computer science just about the only thing that’s really science is when you’re talking about algorithms and optimization is an engineering task …
… But 90% of the programmers are doing programming work to make things happen, and when I start really looking at what’s going on in all of these, there really is no science and engineering and objectivity to most of these tasks …
… we like to think [that] we can be smart engineers about this, that there are objective ways to make good software. But as I’ve been looking at this more and more, it’s been striking to me how much that really isn’t the case. That aside from these things that we can measure, measure and reproduce — which is the essence of science, to be able to measure something, reproduce it, make an estimation and test that — and we get that on optimization and algorithms. But everything else we do really has nothing to do with that, and it’s about social interactions between programmers or even between yourself over time …”
I think that if I’m honest with myself, I agree with everything he’s saying, and actually this talk is the one that made me really start thinking about this.
What would it mean to be a Software Writer rather than a Software Engineer?
Writing and Translation
I think looking at software as an act of both writing and translation would explain a few things, like why, as an industry, we’re notoriously off when trying to estimate how long things will take – just like fiction authors are typically over deadline. I mean, how long would it take you to write a chapter of a book? How about a good chapter?
That to me sure feels the same as: How long will it take to implement feature X? What if we want it to be a ‘good’ feature?
What does it mean to write “good” code? What does it mean to write good prose? I’ve often felt that much of what people say are good coding practices are pretty arbitrary or a matter of style or personal preference. I felt the same way in English class when we talked about the rules for good writing.
“All writing takes the same amount of time. If you’ve thought a lot about what you want to say, it flows easily.”
– Jessica Hibbard, Head of Content and Community, Luminary Labs
The quote above reminds me of an argument I often hear from a certain class of programmer. They want people to stop and think more before they write code. Typically I hear this from the functional programming types, but I’ve heard it in other contexts.
Seeing software as language and writing would probably make it more appealing to women. For decades the number of women studying computer science was growing faster than the number of men. But in 1984, something changed: computer science was made to appear more mathematically based.
Compensation
The parallels between writing and programming don’t stop there. When you write a book, all of the effort is upfront and then once it’s complete you can sell an unlimited number of copies. Does that sound familiar?
That model also applies to TV shows and movies. One thing all of those industries have in common? The writers get royalties.
I think software writers should get royalties too. I have code in Visual Studio from 2010 that is still shipping today, long after I left Microsoft. I’ve written software for traders to manage the risk on large portfolios – more volume than any human could keep track of themselves – and yet the traders received the huge bonuses and I got nothing. I wrote a good chunk of the order processing system for Jet.com, a system which processes millions of dollars in orders every day, and I didn’t even get a choice about keeping my stock when it was sold to Walmart. All of that code is still running, producing value for those companies, and BONUS: they don’t have to pay me anymore since I don’t work there.
If software were seen as writing, then those who want to (Google employees?) could join the writers’ guild rather than trying to unionize workplace by workplace.
What do you think? In what other ways is writing software more like writing other media? Let me know in the comments.
Update:
After a discussion over on reddit about this piece I’ve come up with even more ways in which software is like writing.
Update2:
Since people keep reading, I’ll keep writing or clarifying things as they come to me. Appreciate the comments!
More Similarities to Writing
Software projects can be successful without good engineering, just like books can be successful without good writing.
Following all the rules and best practices in software does not guarantee the success of a software project any more than following all the rules of grammar will make your book successful. In fact, I’ve never seen any published paper that shows any correlation between the two. I would love to read one if someone has it.
New ideas implemented in software (Google page rank, Uber, etc.) influence the real world as much as books like The Communist Manifesto, or The Prince did. I’d be willing to wager Excel has fundamentally changed as many economies as The Communist Manifesto did.
Video game developers seek publishers that do exactly the same thing as publishers in the book industry do.
“Good” code from 1998 (when I started) looks very different from “Good” code in 2019. “Good” writing from 1886 looks very different from “good” writing in 2019.
Some authors are better writers than others at coming up with eloquent ways to say things, just like some programmers are just better at coming up with elegant solutions.
Code reviews sure look a lot like sending your writing to an editor.
Consequences if true
If the analogy to writing is closer than the analogy to engineering, then to become a better software writer/author one should look to the tools that writers use to become better writers rather than to other engineering disciplines.
Software being like writing would also explain (to me) why the many attempts I’ve seen to bring more rigorous, formal processes, like those in other engineering disciplines, have failed so consistently over the last 20 years. Software just doesn’t need to be an engineering discipline in order to be successful. Usually the explanation for this is that software is some new kind of engineering the world hasn’t seen before. Between the competing theories – that SE is somehow a new engineering discipline that conveniently doesn’t work like any other engineering discipline, and that it is not engineering at all – Occam’s Razor would dictate that the simpler “not actually engineering” is the better theory until proven otherwise.
Update 6/14/2020
Now confirmed by science with brain scans and published in the ACM: