Bytes and Bikes

Plus other interesting stuff. But mostly computer software and mountain biking.

Book Review of Shape Up: The Hipster’s Waterfall

I have finally finished reading Shape Up: Stop Running in Circles and Ship Work that Matters by Ryan Singer after many months. Now that I’ve made it through the book, I would subtitle it “The Hipster’s Waterfall.”

To be fair, it is a thought-provoking book. It has some good ideas for new techniques in managing software projects. It is well written and clear, and I think Singer does a decent job of arguing for the processes and techniques he recommends. I just don’t agree with some of his more foundational points and therefore I do not agree with the general thesis of the book.

More on my disagreements later; first, I’d like to point out the pieces from the book that I think are sound.

I think that the general concept of shaping work before working on it is excellent. The high-level idea is that you would make some rough designs and sketch out the flow and functionality of the work you’re about to do. This helps flush out any major issues with the design before getting into the details. The three properties of shaped work that I agree with are:

  1. It’s rough: the initial design should be at a high level.
  2. It’s solved: the solution to the problem has been well thought out. Open questions and rabbit holes have been resolved.
  3. It’s bounded: the scope – and what is out of scope – is clearly defined.

I find doing this type of design beneficial. It helps point out serious flaws with initial ideas.

Along with the general idea of shaping, Singer presents some ideas for creating rough designs – breadboarding and fat marker sketches. Breadboarding (named after the breadboard used for prototyping electronic circuits) is a way of sketching out a flow – basically a more informal version of a flow chart. Fat marker sketches are rough sketches of a user interface design. They are meant to show all of the critical parts without dictating the final design. Both of these ideas seem useful, though perhaps not very original.

The one thing in this book that I think is original and could be useful in practice is the hill chart.

A hill chart is a convex curve that is used to communicate progress on a work item. As you are making progress on the item you’re working on, you move a dot representing the item from left to right on the hill chart. If you are still figuring out the unknowns, approach, and design of the item you’re working on, then you would place a dot on the uphill part of the hill chart. If you’ve figured things out and are simply carrying out the plan to finish the item, then you would move the dot to the right on the downhill part of the hill chart.
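As a sketch (my own formalization, not from the book), the hill can be modeled as a simple convex function of progress:

```clojure
;; Model the hill chart: progress p runs from 0.0 (just started) to
;; 1.0 (done). The first half is the uphill "figuring it out" phase,
;; the second half the downhill "making it happen" phase; the dot's
;; height peaks at p = 0.5 on the convex curve 1 - (2p - 1)^2.
(defn hill-position
  "Return the phase and height of the dot for progress p in [0, 1]."
  [p]
  {:phase  (if (< p 0.5) :uphill :downhill)
   :height (- 1.0 (Math/pow (- (* 2.0 p) 1.0) 2))})

(hill-position 0.25) ;; => {:phase :uphill, :height 0.75}
```

The point of the shape is that the left side communicates uncertainty and the right side communicates execution, which a percent-complete number cannot do.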

Now that we’ve covered the pieces in the book that I do agree with, I’d like to move on to the fundamental issue I have with the book. I would very much dislike being a developer or a designer at Basecamp. In the section “Who shapes,” Singer paints a very clear picture of empowered shapers and indentured delivery teams. What I mean by delivery team is a team that simply takes tasks as input and produces software as output. They are not responsible for solving business problems. They are only responsible for producing code that meets some requirements.

Marty Cagan’s writing has influenced me greatly, and I have learned that my desire is for empowered product teams. The empowered product teams that Cagan describes seem to be the direct opposite of the teams that Singer describes in this book. Basecamp doesn’t have a cohesive team that collaborates amongst themselves and with customers to reach a solution to a business problem. Basecamp has a team that decides on which business problem to solve, designs a solution that is meant to solve the problem, and then hands it off to other people to implement. I am not a fan.

In his articles, Cagan points out that there are four risks associated with creating or improving a product:

  • Value risk: will people buy it, or choose to use it?
  • Usability risk: can users figure out how to use it?
  • Feasibility risk: can we build it with the time, skills, and technology we have?
  • Business viability risk: will this solution work for the various dimensions of our business?

With an empowered product team, the team is responsible for all of these risks. The team’s Product Manager is responsible for value and business viability. The Designer is responsible for usability. The Technical Lead is responsible for feasibility. This works well because each responsibility is owned by someone on the team who can either communicate directly with the people doing the implementation or perform the implementation themself.

In the process that Basecamp uses, all of these responsibilities partially or wholly lie with the shapers, and the actual implementation of the solution is handed off to another team – a delivery team. I think a good term for this type of process is insourcing. The shapers identify a problem their customers have, design a solution to the problem, and decide whether it’s usable, feasible, and a fit with the business. Then they hand the work off to their insourced design and development teams to implement the solution.

Another issue that I have with this book is its recommendation of six-week cycles. Singer claims, “six weeks is long enough to build something meaningful start-to-finish and short enough that everyone can feel the deadline looming from the start, so they use the time wisely.”

Maybe they don’t have any procrastinators at Basecamp, but I think most would agree that a six-week deadline doesn’t loom very large. Anyway, I think Singer is missing the point of why you would want a shorter iteration. A shorter iteration provides more frequent opportunities to interact with the customer and course correct.

The other benefit of shorter iterations is smaller batches of work, and there is power in small batches.

There is a name for a process that features up-front design which is handed off for development and long feature development cycles. The name for this process is waterfall. Basecamp uses waterfall with some minor adjustments to make it more palatable.

The interesting thing is that Basecamp is clearly making waterfall work for them. There is no denying they have a successful company, so no one is really in a place to criticize their approach. I personally wouldn’t enjoy working there, but when you have bootstrapped your company and are profitable you have a lot of flexibility to run things the way you want. And therein lies the true success of Basecamp.

So, in summary, I wouldn’t recommend the overall process that Singer recommends in Shape Up, but there were still some tidbits of wisdom within. I don’t recommend reading this one.

Book Review: Prey by Michael Crichton

I did not enjoy Prey as much as I thought I would. I think there are a couple of things that I struggled with while reading it.

The first thing I struggled with was actually a problem with the type of topics that Michael Crichton likes to take on. I think of his writing as near-term science fiction. It’s definitely science fiction, but it isn’t far-fetched science fiction that takes place centuries in the future. It’s the type of science fiction that I could imagine happening tomorrow. Usually, I appreciate that, since it explores ways that the world could be different today. However, it can also backfire.

One way that near-term science fiction can fall short is in longevity. If the author is making predictions about how present-day technology will evolve, then people don’t have to wait very long to see whether the author was right. And if the author wasn’t right about the evolution, then readers may soon lose interest.

I think that missing the mark on predictions of the evolution of technology is where Prey falls short. Without giving too much away, one of the premises of the book is that large swarms of nanobots can evolve to the point where they can have a sort of intelligence and learn new things – with very minimal programming.

I think at this point the development of artificial intelligence and machine learning has shown that this isn’t really possible. The algorithms for machine learning that computer scientists have developed to this point show some promise, but not the level of learning that Crichton portrays in Prey. And especially not with swarms of low-resourced agents.

Prey was a mildly interesting book, but I don’t think I would recommend that anyone read it. It wasn’t terrible, but it certainly wasn’t great.

State = Busy

I have been very busy, but that’s no excuse. I’m lowering the bar for things that deserve a blog post. This particular post will feature things I’ve been thinking about and doing.

Things that have happened

My sister and her kids came to visit a couple of weeks ago. It was a fun visit! Everything was made more complicated by COVID, but it all turned out OK.

I crashed my mountain bike on Sunday, which was not fun. It was one of the higher speed crashes I’ve had. My front tire washed out on a corner and I slid quite a ways, scraping up my knee (even though I was wearing knee pads) and somehow hurting my pinky, which has been the most debilitating result of the crash.

I received a promotion at work. I am now leading the Infrastructure squad at Seeq. This is exciting for me, since I enjoy working on cloud-based infrastructure and development tools, which is what the team is responsible for.

Books I’ve been reading

I finished Prey by Michael Crichton. I plan to write up a book review of it soon.

I also finished reading Being Mortal, which was a superb book. I’ll have to refer to it again the next time I’m faced with the mortality of myself or someone I love. I should also write up a review of that book.

I have been working on another excellent book, which I hope to finish soon, Seeking Allah, Finding Jesus by Nabeel Qureshi. I’ll post more thoughts when I’ve finished it.

Projects I’ve been working on

I have been plugging away at editing a YouTube video of a morning ride I did in Reno. It’s been so long since I’ve edited any video that it’s proving a bit time-consuming. I hope to have it finished and posted soon.

Tralina and I have also been getting more serious about cooking. Tralina purchased a Shun santoku knife, which is very nice. I will be getting a chef’s knife soon. I have been browsing Forks Over Knives quite excitedly, looking for recipes that look good. Today I cooked Mediterranean Vegetable Spaghetti, which was not very good and definitely did not meet Tralina’s approval. I also made a “No-Tuna” Salad, which Tralina did approve. She said that it reminded her of the tuna salad that her mom used to make before I even mentioned that it was supposed to be a tuna salad substitute. So I call that a win!

What’s next

It might be nice to post regular updates similar to what Sacha Chua does on her excellent blog. If I decide to do that, I’ll probably want to come up with a consistent format.


I’ve been listening to the book Being Mortal by Atul Gawande. I’m a few chapters in, but I’m already finding the book very informative and thought provoking.

Gawande paints a compelling picture of what it means to grow old in our society and how old age has changed in recent history. Living over the age of 70 used to be a fairly uncommon achievement, but with modern medical advancements, it is almost commonplace. That fact has many implications for how the elderly are treated, and how they want to live in their twilight years.

The point that I have resonated with the most is that at some point, no matter how I take care of myself, my mind and body will start to break down. I will not be able to think as quickly as I can now. I will not be able to remember quite so well. My heart won’t pump blood as efficiently. My bones will become brittle. My muscles will shrink away. As Philip Roth put it, “Old age isn’t a battle: old age is a massacre.”

And there is nothing I can do to stop the decay. I can only put it off through healthy living. So I will seek a healthy life.

But in the meantime, while my mind is still sharp, I have things that I want to accomplish in my career. I want to start a business of my own. I want to make a big impact in every place I’m employed. I want to be influential in my field.

In the meantime, while my muscles are still strong, I have things I want to accomplish as a father. I want to play sports with my daughter. I want to teach her how to ride a mountain bike. I want to take her on a backpacking trip. I want to hug her and swing her around when she visits home after going away to college.

In the meantime, while my heart still beats with passion, I have things I want to accomplish as a husband. I want to take my wife on exotic trips. I want to provide a comfortable house for her to live in and a beautiful yard and garden for her to enjoy. I want to dream with her and partner with her to fulfill those dreams.

In the meantime, while my soul still burns with devotion, I have things I want to accomplish as a child of God. I want to introduce someone to Christ. I want to nurture another small group. I want to share my testimony with someone who needs to hear that others have been there, too.

And then, when old age sets in and my mind dulls, my muscles atrophy, my heart slows, and my soul flickers, I will look back on how I did not waste the strength, energy, and mental acuity of my younger days. I will reflect fondly with my wife on a life well lived. And I will anticipate eagerly meeting my Savior face to face.

Book Review: Skunk Works by Ben Rich

Skunk Works by Ben Rich is a memoir about Rich’s time working at the Lockheed Skunk Works. The Skunk Works is a small, secretive, advanced research and development organization within Lockheed. Rich relates his experiences from first being tapped by Kelly Johnson – the founder of Skunk Works – to design some systems on the U-2 spy plane to taking the reins when Johnson retired to his own eventual retirement.

The philosophy of the Skunk Works is what stood out to me in the book. Skunk Works isn’t just a secret department within Lockheed. The Skunk Works philosophy emphasizes small, flat, tightly integrated teams that have autonomy to make decisions about their projects. Kelly Johnson’s 14 rules and practices provide a map for how the Skunk Works operates. Engineers are required to visit the shop floor to interact with those machining the parts and assembling the aircraft. Outside inspections are kept to a minimum. Unnecessary documentation must be eliminated, but important work must be comprehensively reported.

It is difficult to argue with the results of the Skunk Works’ approach. Guided by Johnson’s 14 rules and practices, the Skunk Works was able to develop aircraft that were in some cases decades ahead of anything their competitors could design or build. The U-2 spy plane set altitude records that made it untouchable by enemy aircraft or missiles. The SR-71 Blackbird still holds the airspeed record for a manned airbreathing jet engine aircraft… which was set 44 years ago in 1976. The F-117 Nighthawk was the very first stealth aircraft – designed not to reflect enemy radar. Only one Nighthawk was ever lost in combat. And they often created these airplanes and others ahead of schedule and under budget.

Ben Rich personally worked on every one of the aforementioned aircraft, so Skunk Works includes many fascinating anecdotes about the technology, politics, missteps, and successes that went into each one. Rich also relates accounts of other Skunk Works projects – some successful, some not – as well as personal accounts of his relationship with Kelly Johnson and other aspects of his life.

Some particularly interesting sections of the book are anecdotes provided by people other than Rich. I enjoyed hearing from test pilots, politicians, military brass, and Rich’s coworkers. Some of these accounts really helped emphasize the impact that Rich and the Skunk Works have had over the past seven decades.

I think that there is a lot to be learned from the Skunk Works approach to work. It’s possible to accomplish a great deal with small, efficient, autonomous teams. I was inspired by this book to pursue a similar approach in my own work, and I was spellbound at points that such great feats of engineering could be accomplished with so little.

Clojure Web Application Building Blocks

As I’ve mentioned before, I’m rebuilding this website using Clojure.

When starting a new project, I find you often have two choices: start with a batteries-included framework or build up the framework yourself from your own code and any libraries you might want to leverage. I usually choose the first when I want to get something up and running quickly. I choose the second when I want to learn as much as I can and have fun tinkering along the way.

This time, I am choosing to build up the framework of my application myself. In the past I have used a batteries-included web application template, chestnut, but this time I decided not to go that route. One reason I decided not to use chestnut is that I don’t think I’ll need any client-side cljs code. Chestnut’s main use case is compiling and reloading cljs.

The only functionality I want in the first iteration of my framework is markdown to HTML parsing, an HTTP server to serve that HTML, and live reloading (reloading of the page when something server-side changes).

To support those features, I have started evaluating the libraries I want to use:

  • Micro-frameworks for managing dependencies, keeping things loosely coupled, and setting up a reloaded workflow for development. Integrant is currently the frontrunner.
    • Integrant
    • Mount
    • Component – I’ve used Component once before, but Integrant claims to solve some of the difficulties I had with Component.
  • Ring – the de facto standard Clojure web application library.
  • I have decided to use bidi as a routing library because I don’t like Compojure.
    • bidi – a routing library that just uses data structures for defining routes.
    • compojure – a routing library which I find distasteful due to its unnecessary use of macros.
  • hawk for watching files to kick off live reloading.
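To illustrate how these pieces might snap together, here is a hypothetical Integrant sketch. The key names, port, and use of the Jetty adapter are my own assumptions for the example, not actual code from this site:

```clojure
(ns blog.system
  (:require [integrant.core :as ig]
            [ring.adapter.jetty :as jetty]))

;; The whole system is described as plain data. Integrant resolves
;; references like (ig/ref :blog/handler), so the server receives the
;; handler as a dependency when the system is initialized.
(def config
  {:blog/handler {}
   :blog/server  {:port 3000 :handler (ig/ref :blog/handler)}})

;; A trivial Ring handler; the real one would render markdown posts.
(defmethod ig/init-key :blog/handler [_ _]
  (fn [_request]
    {:status  200
     :headers {"Content-Type" "text/html"}
     :body    "<h1>Hello</h1>"}))

(defmethod ig/init-key :blog/server [_ {:keys [port handler]}]
  (jetty/run-jetty handler {:port port :join? false}))

(defmethod ig/halt-key! :blog/server [_ server]
  (.stop server))

;; (def system (ig/init config))   ; start everything, in dependency order
;; (ig/halt! system)               ; tear it down for a reloaded workflow
```

The appeal of this style is that the config map is just data, so swapping a component (say, a different HTTP adapter) means changing one entry rather than untangling code.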

I would like to keep the libraries I’m using fairly minimal. I may add some more to this list for things like generating HTML, but I’d like to write quite a bit of the code myself.

I feel good about choosing to build up the framework of my application myself. Getting to know each piece may help me develop the application even more quickly in the long run.

Did Abraham Believe God?

I have been reading Saying No to God by Matthew Korpman. The general thesis of the book seems to be that God invites us to stand up to Him and argue with Him as part of our relationship with Him. I think I generally agree with that viewpoint. However, I’m not yet half way through the book, and after the second chapter I almost stopped reading because of one assertion.

The assertion in question is in the chapter titled “Abraham Didn’t Believe God.” In this chapter, Korpman tries to convince the reader that Abraham never believed that God wanted him to kill Isaac. I find numerous problems with Korpman’s arguments throughout the chapter, but I will focus on this single point, since I absolutely cannot accept it. While discussing Abraham’s unfulfilled sacrifice of Isaac, Korpman states,

Many have long attempted to suggest that Abraham trusts that God has the ability to resurrect Isaac back from the dead after he kills him. Ignoring how gruesome that idea is at face value, the historical reality is that this is simply not possible. The idea of resurrection, historians are aware, did not exist for either Abraham or the Israelite authors of Genesis. It was an idea that first appears in the prophet Daniel and only became popular at the time of Jesus. Entire books exist to demonstrate why we know this with absolute precision. As such, scholars can confidently rest assured that whatever Abraham is doing when he suggests that he and Isaac are returning, it isn’t due to resurrection.

I have a couple of big problems with this statement. First of all, it’s simply ridiculous to assert that the idea of resurrection did not exist for Abraham. There is no way that you could prove that from scholarship… Second and more importantly, Paul, in his inspired letter to the Hebrews states,

By faith Abraham, when he was tested, offered up Isaac, and he who had received the promises was offering up his only begotten son; it was he to whom it was said, “In Isaac your descendants shall be called.” He considered that God is able to raise people even from the dead, from which he also received him back as a type. Hebrews 11:17-19

So indeed, “many have long attempted to suggest that Abraham trusts that God has the ability to resurrect Isaac.” In fact, Paul attempted to suggest that. And I believe Paul over the entire books that exist which would suggest otherwise. This argument of Korpman’s makes me wonder if he discounts this statement of Paul’s because of advances in scholarship or if he just wasn’t aware that the argument originally comes from Paul. After reading a good chunk of the book, I think it must be the latter. If he was aware this argument for resurrection came from Hebrews, I think he would have at least mentioned it and felt he needed to justify his statement against it. Therefore, the rest of Korpman’s book loses significant credibility for me.

It’s fairly clear to me that Abraham did believe God. Most of all, he believed God’s promise that “in Isaac your descendants shall be called.” He believed that no matter what God had him do, God would fulfill that promise.

Clojure Markdown Parsing Benchmarks

I am working on setting up a new system for publishing content. I have a few different categories of content that I’m interested in creating. I’ll have to determine exactly what the taxonomy will be, but the broad categories will probably be computers, mountain biking, and more personal stuff including relationships and religion. The first step towards this new system is just to replace the technology behind this website.

This website is currently generated statically using a very old version of jekyll/octopress. Static site generation is really nice, but I think I’m going to want to add some more interactive features like small applications. Therefore, I decided to replace this static site generation approach with a Clojure application.

Since these posts are currently all written in markdown and then parsed and rendered into HTML before being served statically via nginx, I wanted to check how expensive it would be to parse and render the markdown into HTML on every page load. To evaluate, I used a couple of handy Clojure libraries – markdown-clj and criterium. Using markdown-clj, it was fairly trivial to replicate the markdown processing of octopress. It even has the ability to parse the metadata at the top of the markdown files. For example, this is the metadata that I have at the top of this post:

layout: post
title: "Clojure Markdown Parsing Benchmarks"
date: 2020-07-13 10:45:32 -0700
comments: true
categories: clojure programming

To parse that metadata, I simply had to pass in the :parse-meta? true option when parsing, like this:

(md/md-to-html file-name writer :parse-meta? true :reference-links? true)

Then the metadata is parsed nicely into a map for me:

:metadata #ordered/map ([:layout "post"] [:title "Clojure Markdown Parsing Benchmarks"] [:date "2020-07-13 10:45:32 -0700"] [:comments true] [:categories "clojure programming"])

You can see more detail in the source code on github.
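The parse-post function that the benchmark calls isn’t shown in this post. A minimal sketch of what it might look like (hypothetical – the real implementation lives in the linked repository, and the exact return shape of md-to-html may differ):

```clojure
(ns blog.markdown
  (:require [ :as io]
            [markdown.core :as md])
  (:import (

;; Hypothetical helper: parse a single markdown post into HTML,
;; capturing the metadata map from the header block at the top.
(defn parse-post
  "Parse the markdown file at path into {:metadata ... :html ...}."
  [path]
  (let [writer (StringWriter.)]
    (with-open [reader (io/reader path)]
      (let [{:keys [metadata]} (md/md-to-html reader writer
                                              :parse-meta? true
                                              :reference-links? true)]
        {:metadata metadata
         :html     (str writer)}))))
```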

Finally, I created an uberjar using lein uberjar, uploaded it to the DigitalOcean machine I intend to use, and ran the benchmark using criterium:

(crit/with-progress-reporting (crit/bench (md/parse-post "posts/") :verbose))

Because of this issue, I also had to call flush afterwards to get the output to display correctly. Again, you can see more detail on github.

Once I ran the benchmark, criterium gave me some useful results:

Evaluation count : 3240 in 60 samples of 54 calls.
      Execution time sample mean : 19.447409 ms
             Execution time mean : 19.449686 ms
Execution time sample std-deviation : 909.567764 µs
    Execution time std-deviation : 928.443124 µs
   Execution time lower quantile : 18.431349 ms ( 2.5%)
   Execution time upper quantile : 21.718663 ms (97.5%)
                   Overhead used : 2.936410 ns

Found 5 outliers in 60 samples (8.3333 %)
        low-severe       4 (6.6667 %)
        low-mild         1 (1.6667 %)
 Variance from outliers : 33.6000 % Variance is moderately inflated by outliers

I can see there that it takes about 20ms to parse a typical markdown file for one of my blog posts. That would mean that, ignoring other overhead for serving a webpage, I could serve about 50 pages per second. That seems more than acceptable for the amount of traffic I expect to receive on this blog.

New Habits

This blog is ancient, but let’s start a new habit with it. Let’s publish something. Every day. It doesn’t have to be on the blog, but writing is one of the easier things I can do. I also have a bunch of videos on my hard drive that I could edit and upload to YouTube. I just want to be creating something that I can point to every day.

I am actually a bit surprised that I was able to get the tooling for this blog back up and running after all these years. All it took was installing rbenv, an old version of Ruby – 1.9.3 – and an old version of bundler – 1.0.14 – and then everything worked. I hope to get a new site up before too long. I’ve started working on it here, but as you can see it has stalled. It’s hard to find time for things like that with a baby!

The Great Dvorak Distraction

I am pretty fast at typing. I am by no means the fastest ever, but I can hold my own. I don’t really worry about how fast I can type, but I do slightly worry about repetitive strain injuries. Therefore, I have allowed myself to be distracted from learning to read faster. I have been playing around (yet again) with learning the Dvorak Simplified Keyboard layout, and I am (still) convinced that it is dramatically better than the traditional QWERTY layout.

The Dvorak layout is basically a keyboard layout that makes sense. Instead of the letters and symbols being more or less randomly placed, as in the common QWERTY layout, the keyboard is laid out with two goals in mind:

  1. Frequently used keys should be easier to reach with the stronger fingers (e.g. not the pinky).
  2. Keys that are frequently pressed sequentially (e.g. consonants are usually followed by a vowel and vice versa) should be pressed by opposite hands.

These two goals make everything ergonomic and efficient. If you would like more details, then I strongly suggest you read this comic about it. It is very informative and mildly entertaining, and by the end of it you may share my opinion of the superiority of Dvorak. Or you may not. But you should still read it. Of course, you can always find more detail in the Wikipedia entry.
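The second goal can be made concrete with a rough Clojure sketch (entirely my own; the letter-to-hand maps are abbreviated to alphabetic keys, and a real analysis would cover every key and punctuation mark):

```clojure
;; Estimate how often consecutive letters are typed by opposite hands
;; under a given layout. Left-hand letters are listed explicitly;
;; everything else is treated as right-hand.
(def qwerty-hand (zipmap "qwertasdfgzxcvb" (repeat :left)))
(def dvorak-hand (zipmap "pyaoeuiqjkx"     (repeat :left)))

(defn- hand [layout c]
  (get layout c :right))

(defn alternation-rate
  "Fraction of adjacent letter pairs in s typed by opposite hands."
  [layout s]
  (let [pairs (partition 2 1 s)]
    (/ (count (filter (fn [[a b]] (not= (hand layout a) (hand layout b)))
                      pairs))
       (count pairs))))
```

Run over a large English corpus, a sketch like this should show Dvorak alternating hands more often, which is exactly the ergonomic claim; on tiny samples the comparison is too noisy to mean much.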

Despite its rationality, learning the Dvorak layout is difficult. I have spent a considerable amount of time on it at one point or another, but I have spent almost 20 years learning the normal layout… This makes it difficult for me to switch over to Dvorak completely, which is what I would need to do in order to really get good at it. This isn’t the first time I’ve tried. In fact, I selected the Das Keyboards that I normally use (yes, I have two) with alternative keyboard layouts in mind. There are no markings on the keyboard, so it doesn’t matter as much whether I keep the QWERTY layout.

Even with prior practice, it is very hard to type at half speed during my everyday tasks. To illustrate, I’ll show you some of my results from TypeRacer (which is awesome). This is how fast I type normally:

normal typeracer

And this is how fast I type with Dvorak:

Dvorak typeracer

If you can imagine with me, typing with Dvorak basically feels like one of those dreams where you are trying to escape bigfoot, but you can’t run any faster than slow motion. It’s slightly depressing.

It gets worse. Shortcut keys. You have to develop completely new habits of using shortcut keys. Control-C for copying and Control-V for pasting are no longer right next to each other. You could remap them, but I would not recommend it. Most shortcut keys start with letters for the thing you want to do (e.g. c for copy), so it’s not too hard to remember them. It’s just hard to develop the new habits.

Furthermore, all of the symbols ((), [], +, =, etc.) are in different locations, and, since I don’t type most of those very often, they are more difficult to learn. Coding is especially difficult – coding is just typing, after all. Unfortunately, coding does slow my typing down, and when I’m using Dvorak it’s even worse. As evidence, I’ll show you some test results from a site that offers typing tests with real code (and is also awesome). Here is the normal layout:


And this is with Dvorak:


As you can see, slightly depressing. In order to improve the situation, I thought it would be good to find a layout that put the commonly used coding symbols within easy reach. I did find such a layout. It’s called Programmer Dvorak Keyboard Layout. Don’t use it. I abandoned it for a couple of reasons. First of all, it rearranges all of the numbers. Why? I’m not sure (I guess the Dvorak layout normally had them arranged that way), but it makes it incredibly confusing because instead of 1, 2, 3 we now have 7, 5, 3… Not only that, but it puts the numbers in a shift position! So, to type a 7, I have to press shift and what used to be the 2 key. This was the deal breaker for me. All keyboard shortcuts that included a number were unusable for me on my Mac. I literally could not do the shortcut because command-shift-4 (for example) is its own shortcut combo.

So, I am still learning the Dvorak layout, but it’s hard. I was originally going to write this post using it, but by the time I had done the typing tests, I figured I didn’t want to waste the time. It has taken me a while to get up around 40 wpm, but not as long as I thought — probably a week for the initial learning and then a couple of weeks to improve speed. I plan on continuing to get faster at it here and there, and hopefully I can replace QWERTY completely one day. Because I really do believe in it.