Author: Jim Wallace

  • How to Sleep

For most of my life I’ve had a problem: I have trouble getting to sleep, which sometimes leaves me lying in bed, tired but still quite awake, way later than I’d like. Over the years I’ve worked hard to correct this. I’ve invested a lot in a healthy bedtime routine: keeping it consistent, cutting blue light, using the bedroom only for sleep, and so on. But I’m unwilling to give up caffeine completely, which is probably the real culprit.

    What to do instead?

Well, the biggest problem for me was that my mind would race, looping over things again and again. If that sounds like you, then have I got the trick for you: audiobooks.

You see, I had noticed that falling asleep with the TV on kept my mind from racing, because I was paying attention to something, but the pressure was always there to open my eyes to see what was going on. Audiobooks are perfect at giving you something to engage with, with no need to open your eyes.

But you have to choose the right book. If a book is too compelling, you’re just as likely to stay awake all night listening; this has happened to me a couple of times. On the other hand, if a book is too simple or boring, your mind stops paying attention and races anyway.

    What’s the perfect balance? LitRPGs.

LitRPGs are books where the main character is (usually) embodied in a role-playing video game. The stories are fun and interesting, but the reading aloud of stat sheets is decidedly not, which gives them a good balance.

    Here are a few that I think are great, or at least great examples of the genre:

    Classic LitRPGs

    Magic, but no explicit ties to video games

    Deck building

  • Midly

    A week ago I did something I’ve always wanted to do. I launched my first iOS App.

    Midly in action

    It’s a simple app that solves a niche problem I was having: I wanted to see my friends more in real life.

However, as I’ve gotten older and taken on more responsibilities, I’m finding that planning small get-togethers has become more cumbersome. My hope is that if it’s easier to do, I’ll do it more often.

    This app helps with that by automating a few steps that had me juggling three windows/apps to find a place to meet up.

    Step 1

    Enter all the locations where people will be coming from. Add as many as you like. They can be contacts in your contact list, businesses, street addresses, a neighborhood, etc.

    Step 2

    Press the “meet in the middle” button and get place suggestions between everyone.

    That’s it
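
    (A quick aside for the programmers: the heart of a “meet in the middle” feature can be sketched as the geographic midpoint of everyone’s coordinates. Here’s a rough, purely illustrative Python version of that idea – to be clear, this is not Midly’s actual code, and the real feature layers place search on top of the midpoint.)

    import math

    # Illustrative geographic midpoint of (lat, lon) pairs, in degrees.
    # Averages 3D unit vectors so it behaves sensibly near the antimeridian.
    def midpoint(coords):
        x = y = z = 0.0
        for lat, lon in coords:
            la, lo = math.radians(lat), math.radians(lon)
            x += math.cos(la) * math.cos(lo)
            y += math.cos(la) * math.sin(lo)
            z += math.sin(la)
        n = len(coords)
        x, y, z = x / n, y / n, z / n
        lat = math.degrees(math.atan2(z, math.hypot(x, y)))
        lon = math.degrees(math.atan2(y, x))
        return lat, lon

    # Two friends in Manhattan, one in Brooklyn:
    print(midpoint([(40.78, -73.97), (40.73, -74.00), (40.68, -73.94)]))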

You don’t even need to log in. It’s FREE. I don’t track any data. No plans to make it a business. There are a few more features coming as I get time to add them. Check it out. It could be useful to people here if you do a lot of coffees with folks in different places. I wish I’d had this back when I was dating.

As I write this I’ve just added another major feature that is waiting for App Store approval: the ability to save the groups you create and easily recall them later. This is a big step toward making this the recurring dinner-planning app of my dreams.

  • A case for minimizing moving parts in a build

    Original Tweet Thread

    Unrolled Tweet Thread

If we take the chance that a tool (compiler, linker, batch, whatever) remains working for a particular codebase after one year as a given probability p, then the chance that the build remains working after x years is p^(x·n), where n is the number of tools used in the build.

    That’s “just math”. Even if we assume a 99% chance that a tool still works on the codebase after a year (an extreme rarity these days!), that graph looks like this:

With just one tool, the build has an over 95% chance of working after five years. With ten tools, it has only a 60% chance! And that’s with every tool having a _99%_ chance of remaining working (meaning no breaking changes to the tool that affect the build in question).

If we assume a slightly lower p of 90% (still probably favorable given today’s environment), the graph now looks like this:

    This is, of course, an epic disaster. With just 3 tools used, after 5 years there is _very little chance_ that your codebase will still build correctly. And that’s at 90% and just 3 tools!

If you look at 90%/10 tools, heaven forbid, that bottom line says your build almost certainly doesn’t work after only 2 years… and in fact has barely a 35% chance of working after just 1 year!

    Now imagine that we don’t say “tool”. We just say “dependency”. The equation _remains the same_. Modern codebases often have 10s, 100s, or even 1000s of dependencies! What does that do to this graph?

Here is the graph for 10, 100, and 1000 dependencies, assuming a never-happens-on-GitHub 99% chance that a dependency doesn’t break your build:

    10 dependencies sort-of works. It has a 60% chance of still working after 5 years. 100 dependencies doesn’t work. It’s less than 40% after just 1 year. 1000 dependencies breaks with almost complete certainty after a mere _four months_.
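
    If you want to check these numbers yourself, the arithmetic is a one-liner. Here’s a quick sketch (mine, not Casey’s) that reproduces the figures quoted above:

    # Chance that a build still works after x years with n tools or
    # dependencies, each independently surviving a year with probability p.
    def survives(p, n, x):
        return p ** (x * n)

    print(f"{survives(0.99, 1, 5):.1%}")        # 1 tool    @ 99%, 5 years  -> 95.1%
    print(f"{survives(0.99, 10, 5):.1%}")       # 10 tools  @ 99%, 5 years  -> 60.5%
    print(f"{survives(0.90, 10, 1):.1%}")       # 10 tools  @ 90%, 1 year   -> 34.9%
    print(f"{survives(0.99, 100, 1):.1%}")      # 100 deps  @ 99%, 1 year   -> 36.6%
    print(f"{survives(0.99, 1000, 4/12):.1%}")  # 1000 deps @ 99%, 4 months -> 3.5%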

All of this is already something you know intuitively. Projects with lots of dependencies never work out-of-the-box. You are constantly updating, patching, and struggling to get their builds working, because every time something upstream changes, somebody has to fix it.

    The “dependency culture” of modern programming has put us into a state where software requires perpetual, constant maintenance. No longer can we take a build and say “this works” and come back to it in a year. Great for job security, horrible for software quality.

As for speculation: I wonder if this, at least in part, answers a question that I and others like me have asked, which is how companies like Twitter can employ thousands of developers while seemingly producing almost no additional software or improvements.

    Well, if you assume that Twitter’s collective codebase is a 1000+ dependency nightmare, as I assume it probably is, then the math kind of tells us the answer: the vast, vast majority of their time will have to be spent simply keeping their existing code working.

    Casey Muratori (@cmuratori)

  • Musk’s Rules for Engineering Design

    1. Make your requirements less dumb

As engineers we are often handed a list of requirements from someone else. We just accept them and start trying to design. We trust that other people know what they’re doing, in part because throughout our entire educational careers we could never question the premise of a problem. The teacher says “Solve this” and you solve it – you don’t get to say “This is a dumb question” (unless you’re fine with failing, as I was with Prof. Dempsey in Calc 3).

    Musk goes on to say that in order to allow people to question the requirements, there’s another rule that goes along with this one.

    1.a All requirements must have a person associated with them, not a department.

You can have a reasoned argument with a person, but try arguing with a “department” and you’ll soon find yourself in a Dilbert cartoon.

    2. Delete the part or process

If you are not occasionally (say 10% of the time) adding things back in, then you aren’t deleting enough.

    3. Simplify or Optimize

    The most common error of a smart engineer is to optimize a thing that should not exist.

Once you have less-dumb requirements and you’ve eliminated as much as you can from the design, only then should you try to optimize what’s left. These first three rules are really trying to get at one problem: really smart engineers optimizing the system they’ve been given rather than thinking about things from first principles and then optimizing. We tend to jump straight to optimization.

    4. Accelerate Cycle Time

    You’re going too slow, go faster.

    5. Automate

Only once all the other steps are complete should we try to automate a process. Doing it this way unlocks some really nice properties. Automation takes longer to set up and get working, but it pays dividends in the future. However, those dividends only come if you’ve automated the right thing.

  • The Discipline of Software

    One of the things I find the most difficult to teach is the discipline to solve exactly the problem that is in front of you, and only that problem. Solve it in the simplest way you can think of. Be confident that in the future if new requirements arise, such as improved performance, you’ll be able to make a “better” implementation to account for the new requirements. Then move on to the next real problem.

    I think this is kind of like the joke that if you ask a lawyer if they know the time, they’ll say “Yes”.

    Another way to say this might be: Solve real problems, not imaginary ones. It can be difficult to tell the difference. Our brains are equipped with a simulation engine that allows us to imagine things that aren’t really there. Imagine a sparrow with orange stripes. If you don’t suffer from aphantasia then you likely saw exactly what was described. What you might not have noticed is that while you saw the bird, the real world blanked out. The simulation engine takes over our real senses to run its simulation. You can imagine something scary and you will be scared even though it is not real.1 In this same way the problems you think of can feel real. This is an illusion and the way to combat that is to say you’ll only work on problems that have real evidence, a real example, an actual data set, not imagined scenarios.

Ask yourself: “Is this requirement real, or is it just a thought?”

    “Yes, but you only have to implement it once! So you should take the extra time to do a good job!”

That assumes you’ve correctly predicted the future and that the code is used exactly the way you thought it would be. If you have this ability and care at all about money, you should be playing the stock market, not writing code for other people. I have learned the hard way that I do not. I’ve implemented complex solutions that took a long time to write, only to discover that some requirement I didn’t think of broke them completely, effectively requiring a redesign to handle the new scenario.

This is how I code. I work this way because I’ve discovered that I’m not particularly good at predicting the future. I can imagine all kinds of problems that are not real; they’re just thoughts. Trying to account for all of these imagined problems leads to very complex solutions. Complex solutions are difficult to reason about, difficult to implement correctly, and take more time to implement than simpler or more naive solutions. They are also more difficult to debug and fragile to changing requirements (the ones you didn’t imagine).

Practically, what I’ve found is that sometimes the naive implementation is all you need. It’s sufficient for the problem sizes you’re dealing with, and you saved yourself a ton of time by not implementing something more complicated with better performance. You were able to move on and add more value to the project quickly because you implemented an N^2 algorithm in an hour instead of taking a week to write a much more complex solution. And it was fine. If it isn’t fine, then you’ll know that pretty quickly, and you’ll have actual evidence to justify taking the time to create a more complex implementation. You can point to the real data you’re trying to process and say “this is the reason I need more time”.
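
    To make that concrete, here’s the kind of thing I mean – a made-up illustration, not code from any real project of mine. The naive quadratic version of “find the duplicates” takes minutes to write and is trivially correct:

    # Naive O(N^2) duplicate finder: compare every pair. Plenty fast for a
    # few thousand items; if real data later proves it too slow, that is
    # the evidence that justifies a hash-based O(N) rewrite.
    def find_duplicates(items):
        dups = set()
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                if items[i] == items[j]:
                    dups.add(items[i])
        return dups

    print(find_duplicates(["ann", "bob", "ann", "cy", "bob"]))  # the set {'ann', 'bob'}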

I am starting to wonder if this is something that can even be taught. Is this a conclusion one needs to come to on one’s own, through one’s own experiences? After all, that is how I came to this philosophy.

I have, on several occasions, had more work than I could possibly do under a tight deadline. The argument that deadlines are arbitrary falls on deaf ears when you are burning cash and have no product to sell. Working code today is better than perfect code tomorrow. What good is perfectly written code that nobody uses? Nobody uses it either because it wasn’t what they actually needed to solve their problem (you predicted wrong, but spent a long time implementing the wrong thing), or worse, because your company no longer exists, having blown through all its cash before you could make a single sale.

We could go back and forth about the benefits of strict adherence to any programming philosophy: Static vs Dynamic, Agile vs Big Design Up Front, Types vs Tests, DDD, CQRS, TDD, etc. One thing I will say is that if you want to get better at this discipline, some amount of Test Driven Development is the best way to practice it.

    Notes:
    1. The information about the brain’s simulation engine can be found in the book Stumbling on Happiness by Daniel Gilbert of Harvard University

  • Map of Off Leash Dog Friendly Parks in NYC

    All the information above was taken from https://www.nycgovparks.org/facilities/dogareas and encoded to the best of my abilities. I’ve tried to encode the rules for the specific park in the description when you click on the pin but if you know more specific details let me know in the comments and I’ll do my best to keep this map up to date. Enjoy!

  • How to make Kaleidoscope your default Git Diff and Merge tool

First you need to grab the ksdiff command-line tool and install it:

    https://www.kaleidoscopeapp.com/ksdiff2

Then the following commands will set Kaleidoscope as your default diff/merge tool:

    git config --global diff.tool kaleidoscope
    git config --global difftool.kaleidoscope.cmd '/usr/local/bin/ksdiff --diff "$LOCAL" "$REMOTE"'
    git config --global merge.tool kaleidoscope
    git config --global mergetool.kaleidoscope.trustExitCode true
    git config --global mergetool.kaleidoscope.cmd '/usr/local/bin/ksdiff --merge "$LOCAL" "$REMOTE" --base "$BASE" --output "$MERGED"'
    

Then, to run it:

    $ git difftool FILE1 FILE2
    $ git mergetool

  • Mindfulness Twitter

I’ve started a serious mindfulness practice and it’s been really great. I’ve been practicing on and off for years, but my wife has never really given it a go. She, like many people, doesn’t really feel anything or understand what it is she’s supposed to feel. What does success look like?

Different analogies work better for different people. The one that has worked particularly well for me has two parts:

    1. Your thoughts can come and go, and you can observe them separately from interacting with them, like sitting on a park bench watching people walk past.
2. When you inevitably get lost in thought, you can bring yourself back gently to the practice, like taming a wild horse: slowly pulling it in closer as the thoughts circle around you.

This didn’t work so well for my wife, but recently we stumbled on a breakthrough: your brain is like Twitter.

You can imagine your mind as the Twitter timeline. You don’t really have much control over which tweets appear, save for a little bit of signal based on who you follow initially. Your thoughts are the same: you can control your environment and what information you’re exposed to, but you have little control over which thoughts show up.

Further, like the Twitter timeline, you have a choice. You can look at each Tweet and let it scroll past, or you can engage with it. When you engage with a Tweet by liking, retweeting, or commenting, that signals to the Twitter algorithm (which is optimizing for engagement, not for what you’d actually like to see) that it should show you more like it. Again, the analogy holds for thoughts. Thoughts appear in your head, and you can observe them without engaging further. Or you can engage, and your mind is likely to show you more of the same.

If something is making you angry, and you think you can stay angry without your mind constantly generating reasons why you’re perfectly justified to be angry, you are mistaken. If something in your timeline is making you angry… OK, I think you get it.

I always pick on Twitter on this blog, but the same can be said for any social media with an algorithmically controlled timeline: Facebook, Instagram, etc.

I think this is ultimately exactly why these algorithms are so addictive: they mimic the natural thought process incredibly well, just as opioids are addictive because they mimic endorphins. They’re not the same, but tell that to your brain.

  • Why am I not motivated in this excellent situation?

Once upon a time in 2011, I serendipitously stumbled onto a question on a Stack Exchange site I had never been to and would never visit again. The title was so click-baity I just had to click!

    It was another programmer asking for help: Why am I not motivated in this excellent situation? As it happened, I had been casually studying this exact topic for the past few years and thought I knew the answer.

The Personal Productivity Stack Exchange has since gone away, but I want this question to live on here for two reasons:

First, it gets to the heart of something that I think is counterintuitive (or at least counter-narrative) about human motivation, and so I think the information might be useful to others searching for their own answers to this question.

Second, ever since I read the details of this question, it has continued to haunt me. I think about it all the time. Until David asked it, I had never considered the business model he proposed – and now it’s all I think about. It has led me to a deep-seated belief that programmers should be getting royalties and, working backwards from that, that programming is more like writing than engineering.


    Question

    Why am I not motivated in this excellent situation?

I am working as a freelance contractor. For a long time I have been paid by the hour. This has worked fine and my motivation has never been a problem. Now, I have gotten a deal where I get half of any increased profits that are due to my actions/ideas etc. This is an excellent deal, which would most likely raise my income ten-fold.

My problem, however, is that the deal caused me to completely lose my motivation. Meaning that I have literally not done any meaningful work for them for about three months. Why is that? How could such an excellent deal cause me to lose motivation? I am after understanding this in depth so please only answer if you have specific references.

    David

    Answer

As silly as it sounds, getting paid more money actually DECREASES performance on non-trivial tasks (see references). This problem has been studied extensively in behavioral economics and psychology.

    The problem is one of Extrinsic Motivation replacing Intrinsic Motivation.

    Intrinsic motivation is your innate desire to do a good job. It’s what you feel when you’re working on something you want to be working on because you yourself want to see the project completed. Working on hobbies, or learning new non-work related skills are examples of intrinsic motivation.

    Extrinsic motivation is when you receive something in exchange for your efforts. When you are paid to do some job, or when you receive a grade in school.

Intrinsic motivation is much more powerful: people who want to do a good job often produce much better work than people who are merely being paid to accomplish a task. The terrible thing is that our brains are wired to replace intrinsic motivation with extrinsic motivation almost at the drop of a hat.

    These people explain what’s happening far better than I can:

    In Dan Ariely’s book The Upside of Irrationality, he talks about pay for performance bonuses and how they actually affect our behavior. http://danariely.com/2010/06/20/a-talk-i-gave-at-poptech/

    Joel Spolsky also wrote a great article about it, talking about management. http://www.joelonsoftware.com/items/2006/08/09.html

    From the article:

    “But when you offer people money to do things that they wanted to do, anyway, they suffer from something called the Overjustification Effect. “I must be writing bug-free code because I like the money I get for it,” they think, and the extrinsic motivation displaces the intrinsic motivation. Since extrinsic motivation is a much weaker effect, the net result is that you’ve actually reduced their desire to do a good job. When you stop paying the bonus, or when they decide they don’t care that much about the money, they no longer think that they care about bug free code.”

Another example of the Overjustification Effect is the Candle Problem: http://en.wikipedia.org/wiki/Candle_problem

Great explanation of the Candle Problem at TED: http://www.ted.com/talks/lang/eng/dan_pink_on_motivation.html


I’ve tried to keep the formatting and wording of the original as accurate as possible. I never found out what happened to David, or how this deal worked out in the end. He left a very touching comment at the time: “Thank you, I will use this information everyday for the rest of my life”. I wish I knew how to get in touch with him to tell him how his question has also changed my fundamental understanding of software engineering, and thus my life too.

  • I can probably use this like Twitter and relieve myself of the last social media I am addicted to.

Honestly, social media is really just centralized blogs with RSS and a share button.