All the information above was taken from https://www.nycgovparks.org/facilities/dogareas and encoded to the best of my ability. I’ve tried to encode each park’s specific rules in the description that appears when you click on its pin, but if you know more specific details, let me know in the comments and I’ll do my best to keep this map up to date. Enjoy!
First, grab the ksdiff command-line tool and install it.
Then the following set of commands will set Kaleidoscope as your default diff/merge tool:
git config --global diff.tool kaleidoscope
git config --global difftool.kaleidoscope.cmd '/usr/local/bin/ksdiff --diff "$LOCAL" "$REMOTE"'
git config --global merge.tool kaleidoscope
git config --global mergetool.kaleidoscope.trustExitCode true
git config --global mergetool.kaleidoscope.cmd '/usr/local/bin/ksdiff --merge "$LOCAL" "$REMOTE" --base "$BASE" --output "$MERGED"'
Then, to run it:
$ git difftool FILE1 FILE2
$ git mergetool
I’ve started a serious mindfulness practice and it’s been really great. I’ve been practicing on and off for years, but my wife has never really given it a go. She, like many, doesn’t really feel anything or understand what it is they’re supposed to feel. What does success look like?
Different analogies work better for different people. The one that has worked particularly well for me has two parts.
- Your thoughts can come and go, and you can observe them separately from interacting with them, like sitting on a park bench watching people walk past.
- When you inevitably get lost in thought, you can gently bring yourself back to the practice, like taming a wild horse: slowly pulling it in closer as the thoughts circle around you.
This didn’t work so well for my wife, but recently we’ve stumbled on a breakthrough. Your brain is like Twitter.
You can imagine your mind like the Twitter timeline. You don’t really have much control over what tweets appear, save for a little bit of signal based on who you follow initially. Your thoughts are the same, you can control your environment and what information you’re exposed to – but which thoughts show up, you have little control over.
Further, as with the Twitter timeline, you have a choice. You can look at each tweet and let it scroll past, or you can engage with it. When you engage with a tweet by liking, retweeting, or commenting, that signals to the Twitter algorithm (which is optimizing for engagement, not for what you’d actually like to see) that it should show you more like it. Again, the analogy holds for thoughts. Thoughts appear in your head, and you can observe them without engaging further. Or you can engage, and your mind is likely to show you more of the same.
If something is making you angry, and you think you can stay angry without your mind constantly generating reasons why you’re perfectly justified to be angry, you are mistaken. If something in your timeline is making you angry… OK, I think you get it.
I always pick on Twitter on this blog, but the same can be said for any social media with an algorithmically controlled timeline: Facebook, Instagram, etc.
I think ultimately this is exactly why these algorithms are so addictive. Because they mimic the natural thought process so incredibly well. Just like opioids are addictive because they mimic endorphins. They’re not the same, but tell that to your brain.
I can probably use this like Twitter and relieve myself of the last social media I am addicted to.
Honestly, social media is really just centralized blogging with RSS and a share button.
Op-ed: Breaking up big tech
by Jim Wallace
Recently the Department of Justice announced it was going to start investigating the large tech companies for antitrust violations, potentially leading to breaking up big tech. It’s not surprising; large tech companies love to tell the story that they are neutral platforms or common carriers and, thus, not responsible for the content others upload. This claim strains credulity. How can one be a neutral platform and have a recommendation engine that chooses what subset of the data to show? These services could and should be separate: the platform that holds and distributes the data should serve 3rd parties that compete on the best way to display that data. This model mirrors how regulators decoupled power generation and transmission to protect consumers.
Almost all of the problems with social media, from the perspective of its users, come from the recommendation engines and algorithmic feeds that amp up controversy in the name of engagement. Those engines work for the advertisers — the real customers — not the users. It’s quite possible we don’t have the right technology or the right incentives to make a single technology service that works for everybody. Even if we do, it seems unlikely that a single company will get it right. In fact, as controversy after controversy makes the news, there seems to be ample evidence that none of them have gotten their algorithms right, based on the antisocial outcomes we are seeing.
In a decoupled model it wouldn’t be up to a single company to get the algorithms right. Instead, data scientists at many companies could create competing algorithms, and users could then pick and choose the view they want. This model has proven effective in other markets. The Associated Press, for example, provides a stream of news stories, and news organizations then choose the ones best suited to their respective audiences.
In this model a company like Facebook would be split into two companies. One company would collect and, for a reasonable fee, distribute posts in chronological order to any company. The second company could then display those posts however they feel is best for their audience.
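As a rough sketch of what that split could look like in code (every class and function name here is hypothetical, purely to illustrate the idea, not any real API):

```python
from dataclasses import dataclass

# Toy sketch of the decoupled model: a "platform" that stores posts and
# serves them in plain chronological order for a fee, plus competing
# third-party view algorithms that rank the same data differently.

@dataclass
class Post:
    author: str
    text: str
    timestamp: int
    likes: int

class Platform:
    """Collects posts and distributes them chronologically, taking no editorial position."""
    def __init__(self):
        self._posts = []

    def publish(self, post: Post):
        self._posts.append(post)

    def feed(self):
        # Chronological order only; ranking is someone else's business.
        return sorted(self._posts, key=lambda p: p.timestamp)

# Two competing third-party views over the same underlying feed:
def chronological_view(posts):
    return list(posts)  # show exactly what the platform serves

def engagement_view(posts):
    # The engagement-maximizing ranking the incumbents bundle in today.
    return sorted(posts, key=lambda p: p.likes, reverse=True)
```

Users would then pick whichever view suits them, and data scientists at any company could ship a new ranking function without ever touching the underlying store.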
Twitter is already closest to this model, because it licenses its data through the “fire hose”.
The fire hose is a service where, every time someone tweets, Twitter passes the tweet along unfiltered to any company that subscribes. However, the terms of service and API updates prevent companies from using the fire hose to create competing views of Twitter content. That’s a restriction the DOJ could prohibit, just as it did with Microsoft in the early 2000s, when it forced Microsoft to make its private APIs public so that 3rd parties (like Netscape) could compete on an even playing field.
If Facebook, Instagram and YouTube were forced to offer a service like the Twitter fire hose, one can imagine a whole slew of innovative new 3rd-party services, such as a “life stream” that aggregates the updates from all the people you follow across all the platforms in one place. Parents could subscribe to a kid-friendly version of YouTube, as a paid subscription, that is not trying to get you to spend more time on the site by hacking your dopamine system. Such a specialized service could help parents struggling to set up screen-time boundaries for their children. Companies looking for competitive differentiation could even extend the platform to include things that users want but large tech seems deaf to, such as an edit button for tweets.
This model can generate plenty of revenue for both platform and providers. ConEd and the AP both use this model. Cable companies today make money hand over fist selling access to their pipes.
This is not to say that large tech companies shouldn’t produce their own algorithms. Rather, they should not be the only companies allowed to produce them. We need competition to bring the best services to consumers.
To be sure, some may feel that social media, and the algorithms included, already work well based on some of the positive benefits we have seen. No one doubts the role it played in the Arab Spring. It is incredible when someone has a question about rockets and both Elon Musk and John Carmack respond! But neither that interaction nor the Arab Spring depended on an algorithm to facilitate them.
We don’t have to throw out the baby with the bathwater. We can keep what’s great about these platforms while tempering the parts that induce anti-social behavior, through competing algorithms and user choice.
I decided to publish this op-ed here after Jack Dorsey wrote a tweet thread about opening up the platform. I thought this would be a good time to post. Looks like @Jack has been reading my unpublished work from this summer 😉
I like listening to Cortex because they are thoughtful about how they do work and they always give me something to consider. They recently had a discussion about how New Year’s resolutions are terrible and how you should have themes for the year instead.
In the past I’ve become convinced that goals are a really crappy way to do things. You set a goal and are immediately in a mode of failure because you haven’t achieved it yet. Then you hit the goal and feel great for a little while, but a day or two later you have to set another goal and be a failure again. It seems like you’re failing to meet your goals for much longer than you’re succeeding at them. While goals may seem like a good way to ensure you have a growth mindset, in practice I think they have the opposite effect. Once you’ve reached a goal, you stop doing the thing that allowed you to achieve it. (I did it! I ran a marathon, I can stop running now!)
Themes are better: they are a process, not an outcome. My theme for 2018 is the Year of Focus. With a theme I have a framework that informs every choice I make. There’s no “Drink less” goal; instead I’ve cut back on alcohol consumption because hangovers are anti-focus. I’ve started scheduling out my days a little more so I can have long blocks of uninterrupted time to really get into flow, and I’ve tried to cut out other sources of distraction in my life.
Before I decided on the theme for 2018, in mid-to-late November I blocked Facebook and Twitter on all my devices. I did this after watching this video (https://www.youtube.com/watch?v=3E7hkPZ-HTk), which made me realize that I would not really be missing out on much, and that I might get back a few things I had lost: some time, the ability to focus, and happiness.
The first week felt weird; I didn’t know how to get news anymore. When I felt bored, my instinct was to type Facebook or Twitter into the URL bar (only to have it blocked by some software). It forced me to think of other things to do and other ways I could be spending my time, including ways of actually seeing my friends in real life.
It worked, though: I felt better, less distracted. It was amazing to me, however, that six weeks later the muscle memory of typing Facebook into a browser had not gone away, and I was still doing it even though I had not been on the service for a long time.
Within 3 days Facebook realized something was up and started sending me click-bait emails: “So-and-so commented on something”, “Someone has posted for the first time in a long time”. After 3 weeks, they started texting me! It was like an ex who drunk-dials you.
As I type this I’ve begun the process of downloading all my Facebook data (photos, posts, etc.) and after that’s complete I think I’ll be removing my account permanently next week.
If you’re thinking about doing the same, I’d say that Apple News is a great platform for getting news if you don’t want to hop from site to site.
* My archive just completed: 96 MB for all of my Facebook activity, including photos, over the last 10 years.
A colleague of mine sent me this link from Hacker News today that explains options ownership. I think it’s great! More people need to understand this stuff, and I’m very happy it’s out there. It does remind me, however, of how complicated everything has gotten, and that always makes me ask why.
My colleague had a great observation:
I used to grant people options and I thought I understood most of it, but it seems like in an effort to get billion-dollar valuations, later-stage companies have added all sorts of complicated conditions.
When I was younger (so much younger than today) – I used to think that complexity was a sign of how smart everyone was, and I had a bit of imposter syndrome thinking I was not smart enough to be in the real world because I didn’t understand all of this stuff.
Now that I am older (perhaps wiser? TBD), I am more confident in my intelligence, and what I see instead is people hiding in complexity, specifically in levels of abstraction. Dan Ariely ran a test that I think is illustrative if not conclusive. He put six $1 bills on a plate in a shared fridge in a college dorm. A week later he came back and they were all still there. He then put six Cokes in the fridge, and a few hours later they were all gone! No one would steal money (that would be wrong!), but move the level of abstraction up one level (a Coke = $1), and people suddenly have fewer qualms about taking the Cokes.
I see the same thing in finance. People trade all these derivatives, each of which is another layer of abstraction away from money. Options are a layer of abstraction above shares (they abstract time), while shares are an abstraction over ownership, which is an abstraction over assets, which is an abstraction over money, which is itself an abstraction. It’s a long way between that awesome trade you made and the person paying real money for the mortgage you just screwed them over on. That distance enables traders to do shady things they would never do to a real person (they would never physically reach into someone’s wallet and take their money).
I see the same with private companies and stock, stock options, etc. I think people just assume they are not smart enough to understand, they just hear stories that stock is what you want, that’s how you get rich! But I agree with my colleague above, people are using levels of abstraction to confuse people so they can play games.
Once upon a time a stock paid dividends because, as a partial owner of a company, you were entitled to a partial share of the profits. The dividend was that share of the profit! You could then turn around and use that dividend to buy more shares, and the next time it was paid you’d get even more money. This is essentially the argument Benjamin Graham used in The Intelligent Investor to show that stocks would outstrip bonds, and it kicked off the era of value investing, where the price a company should trade at was calculated from the profitability of the company and the number of shares outstanding. Until the 1980s, when you can blame Microsoft for popularizing the idea of “growth” companies that don’t pay dividends but instead pay back their shareholders through growth in the share price. It’s worth more because more people want it? That is some people’s definition of value, but I can’t use it to make a prediction about what the share price should be, so to me it’s useless.
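The compounding that Graham relied on is easy to see with a toy calculation (the share price and dividend below are made-up numbers, and the price is held flat for simplicity):

```python
# Reinvesting each dividend buys more shares, so the next payout is larger.
# Hypothetical numbers: 100 shares at a flat $50 price paying $2/share/year.

def reinvest(shares: float, price: float, dividend_per_share: float, years: int) -> float:
    """Return the share count after reinvesting every dividend at a flat price."""
    for _ in range(years):
        payout = shares * dividend_per_share  # your share of the profits
        shares += payout / price              # immediately buy more shares
    return shares

print(round(reinvest(100, 50.0, 2.0, 10), 2))  # ~148.02 shares after 10 years
```

A flat 4% yield, reinvested, grows the position by 1.04x every year, which is a prediction you can actually check against the company’s profits and share count. The “growth” definition of value gives you no such handle.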
The following are a set of thoughts I’ve had watching a company grow from 60 people to 600 people – it’s not a complete thesis, but I wanted to put it out there to start getting feedback from people.
Thoughts on Organizations
Processes are sets of rules, in the same way that computer programs are just sets of rules. In the case of programs, it’s the computer doing the work of executing the rules, and it has no choice but to take what you wrote literally. In the case of processes, it’s people executing them. Both, however, have bugs: unexpected outcomes of the rules. With humans you get some leeway, because you can explain what you meant, or you can say we’re implementing the ‘intent’ rather than what’s actually written. But you will run into people who execute policies as if they were computer programs, following exactly what’s written as opposed to what was intended.
Making matters worse, policies often have very long feedback loops before the bugs are detected and can be addressed. As such I think we should try and avoid process until absolutely necessary. Too many people want to rush to create a process every time someone makes a mistake so that mistake can never happen again, without regard for the bugs that can be introduced as a result.
Trust, Talent and Communication vs Rules
Why create rules at all? I think this is the same as asking why society has laws, and I think the reason is that you can’t trust people to do the right thing. More specifically, there’s a limit to the number of people whose reputation you can keep in your head at any one time. This limit is called the Dunbar number, named after the social scientist who studied why tribes split in two. He found that after roughly 120 relationships, we can no longer keep track of who owes whom money, who is trustworthy, who likes to shortchange people, etc. Rules are an abstraction over behavior. If we all agree to follow the rules, we can use them as a shortcut for knowing someone’s reputation. The reason I can go to the store and buy a bacon, egg and cheese from a complete stranger is that I trust the rules for proper food storage and preparation.
I think that growing organizations don’t need any policies until they reach this size of 120. After that, we start seeing faces around the office that we don’t recognize, and we hear about projects being led by people we have never heard of, people whose reputations we don’t know or trust.
Minimum Viable Process
So what do we do? Policies (and laws) are really useful abstractions. They allow us to trust each other without actually knowing an individual’s reputation, the same way I trust that the food cart guy is not going to poison me, because of the FDA. However, policies, like all rules, have bugs.
The #1 predictor of bugs in code is the number of lines of code. Each line is a little rule, and the more rules you have the more likely you are to introduce bugs. Since policies are rules that often contain more rules (the entire workflow is a set of rules to follow), the more policies you have (or the more complicated they are) the more likely they are to introduce bugs, and so the goal is to have the smallest set of effective policies possible.
How do you accomplish this?
Developers find bugs by compiling and executing their code, thus seeing that it does not do quite what they expected. It’s a tight feedback loop that allows them to identify and resolve bugs quickly. What we need from policies is a similar feedback loop and the easiest way to do that is with this 1 weird trick.
Make people feel the consequences of their decisions.
That’s it. OK, maybe the golden rule is nice too: “do unto others as you would have them do unto you” is probably always a good rule.
What does this mean? It means one can never make a rule for someone else that they themselves don’t have to follow. The reason is so they can get immediate feedback on both the good and the bad of the rule, and make adjustments accordingly. This is remarkably difficult in practice. People really, really don’t like feeling the pain of their decisions, and we set up all sorts of elaborate systems to protect ourselves from getting that feedback. I don’t think anyone is making other people’s lives hell on purpose; we do this almost subconsciously.
Paul Graham has decided to take up the old torch of more H1B immigration because “there are not enough great programmers”. In the second paragraph he says that people who disagree with him are “anti-immigration” people who don’t understand the difference between good and great programmers.
I’m all for completely open immigration; let people who want to work do so wherever they’d like. However, I am tired of hearing the false rationale that “there are not enough good programmers”. All I’m asking is that people who make this argument not base it on provably false accusations and assumptions. Make an economic argument for completely open borders. Talk about lifting the employer restrictions on H1Bs. But when you do it the way Paul has done it, it’s completely transparent that what you want is NOT that. What you want is cheaper programmers who can’t leave your company when you abuse them, or find a higher-paying job elsewhere. You want indentured servants, and that’s unethical and gross to me.
Is it really too much to ask that people base their opinions on evidence (data rather than anecdote)? The problem is the evidence doesn’t support the “not enough great programmers” claim:
A great meta-analysis type article that looks at several studies with links to each and a description of the pros and cons of the data: http://spectrum.ieee.org/at-work/education/the-stem-crisis-is-a-myth
The most recent raw data I’ve seen on the subject: http://www.epi.org/publication/bp359-guestworkers-high-skill-labor-market-analysis/
But while we’re talking about immigration, I’ve always wondered why it’s so important that the developers be great? What about great business people? Where’s the call for H1B CEOs? Why is the onus of failed startups that they couldn’t get enough *great* developers, as opposed to the mediocre business idea that failed in the market?
Why do I never hear this argument for immigration? Why is it only STEM?
And why does this myth persist in the face of evidence?
Eric Sink is disturbed by the tone of people’s reactions, but I think it’s perfectly reasonable for people to be upset when someone starts off by accusing anyone who disagrees with him of being anti-immigration, or by questioning their knowledge.
I saw this tweet today
I’ve been confused for a long time about why this isn’t how modern startups are run. This is the exact model I had in my head in 2001 when I wanted to start my own business, and for every startup idea I’ve had. However, I’ve seen so many multi-billion dollar valuations of companies that essentially have no revenue that I’m starting to wonder…
Is this a class thing?
I’ve gotten to know a few rich people in NY, and I can tell you that none of them thinks this way. I can’t tell if they’re rich because they see the world differently from other people (I certainly see the world differently than my parents do, and have a lot more money), or if these people are crazy and excited about no-revenue business models because they don’t have to worry about making money.
It is difficult for me to maintain my view of the world (businesses should make money) given the data I’m getting about zero-revenue businesses being valued in the billions. I’ve heard it put this way: having zero revenue is great because it allows you to sell the dream of “when we monetize, just imagine how much money we’ll make”, whereas having a single dollar of revenue changes the conversation to “why do you only have $1?” And then the dream is *poof* gone when it’s confronted by actual data. (Mostly I’ve heard this from Felix Salmon.) This makes me think it’s about duping people, not about creating value at all. :-/