Git Feature Branches

Git makes working with branches extremely easy, especially compared to many of the version control systems that came before it. This ease has resulted in a standard workflow built around feature branches.

The idea is to branch from master when you begin developing a new feature. You do all of your development on that branch. When the work on the feature is complete, you merge your branch back to master. This works extremely well if the feature branch does not live for very long. If other people make changes to master before you can merge, a bit of coordination is needed.
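The basic cycle can be sketched in a throwaway repository; the branch, file, and commit names below are invented for illustration:

```shell
# Feature-branch workflow in a scratch repository.
# Branch and file names here are illustrative, not prescriptive.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"

echo "v1" > app.txt
git add app.txt
git commit -qm "initial commit"

git checkout -qb new-feature            # branch off to start the feature
echo "feature work" >> app.txt
git commit -qam "implement the feature"

git checkout -q -                       # back to the starting branch
git merge -q new-feature                # fold the finished work back in
git log --oneline
```

The final merge here goes through without any fuss because nothing else changed on the starting branch while the feature branch was alive.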

Why Feature Branches?

In the past, when people didn't use a VCS to track changes, or used one without good branching support, you might make a change and find an error, but not be able to tell whether the error came from your change or someone else's. When you saw a new failure, you couldn't assume that the only change was the one you just made. It could be any change anyone had made.

Back in the bad old days, we had branches that would live for months. The effort of resolving conflicts between one of those branches and master was so great that we would create separate integration branches that would combine a branch with master to do all of the fixing needed to get the combined changes working. The hope was that we could get the integration branch stable before master changed so much that we would have to do it all over again.

Isolating Changes

A major benefit of a feature branch is that it contains only one logical change. The idea is to complete the feature without being impacted by other people's changes. Then you can combine the changes that others have made at a point in time of your choosing. This makes troubleshooting any conflicts easier, because you know that your changes worked together, so any problems have to be in the way your code interacts with changes on master.

Combining Changes

A feature branch allows you to isolate changes while you are working on them, but it does not remove the need to combine your changes with any changes that already happened on master. When combining those changes, you can either combine the changes from master into your feature branch or merge your changes directly into master.

The decision of which way to go has to do with the concept of conflicts. Any change on one branch has the possibility of conflicting with the changes on the other branch. Then either the VCS or the developer will need to resolve these conflicts. In a modern VCS (like git), many kinds of conflicts are resolved automatically. Obviously, if two developers make different changes to the same piece of code, a developer will be needed to look at the changes and decide what needs to happen.
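A minimal sketch of the kind of conflict that needs a human; the repository, file, and branch names are invented for the example:

```shell
# Two branches change the same line; git stops and asks a developer.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"

echo "greeting: hello" > config.txt
git add config.txt
git commit -qm "initial"

git checkout -qb feature
echo "greeting: howdy" > config.txt
git commit -qam "feature changes the greeting"

git checkout -q -
echo "greeting: hi" > config.txt
git commit -qam "mainline changes the same line"

# This merge cannot be resolved automatically:
git merge feature || echo "CONFLICT: a developer must decide"
grep "<<<<<<<" config.txt            # conflict markers are left in the file
```

If the two branches had touched different lines or different files, git would have combined them silently.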

The really difficult part is logical conflicts, where changes on one branch modify the assumptions that the other branch depends on. This is what normally results in test failures after the code is merged. For many developers, this becomes a reason to merge the changes from master back into the feature branch, so you can fix any inconsistencies before merging back to master.

A good test suite helps keep this kind of conflict to a minimum. Just as importantly, your test suite should reduce the amount of troubleshooting needed when you do find a logical conflict.

Feature Branch Problems

The main point where feature branches become a problem is when they live for too long. The longer a branch lives, the more potential conflicts you will need to resolve when merging back to master. Feature branches work best when you are making a relatively small change, quickly enough that few other changes will overlap your development.

The rate of change of your code base determines how long a feature branch can safely live. But in every situation, you want to avoid having a large number of other changes overlap any particular feature branch's lifetime.


The goal of a feature branch is to separate changes for a new feature from other changes on the main line of development to simplify the development process. This goal is accomplished by separating problems caused by the implementation of the feature from problems caused by mainline changes that invalidate the assumptions that feature is based on.

By dividing the problem in half, feature branches have proven to be an effective way to develop software more quickly.

Different Views of Code

If you work as a developer, you probably spend a large amount of time in your editor. You are adding features, improving tests, looking for bugs, reviewing changes, and otherwise looking at the code.

When I started programming, screens were a lot smaller and almost no one had space on their desk for more than one monitor. If you wanted to get a good view of a large amount of code, you printed it out and went over it with a pencil and highlighter. Quite often, this allowed you to notice issues that you didn’t see in your editor.

The important point is not printing the code, it’s viewing your code in a different way. Writers and proofreaders have known forever that it is sometimes useful to read a passage aloud to spot mistakes. The point is changing how you experience the work.

When you are looking at something over and over the same way, you stop seeing what is there and begin to see what you expect. You develop blind spots around the details of what you are seeing. This is why a different person sometimes spots something obvious that you’ve overlooked.

Different Views of the Code

Here are some different ways you can interact with your code:

  • Look at a diff from a previous version
  • Explain your changes to someone else
  • Change the font
  • Change syntax highlighting
  • Look at it in a different tool
  • Print out a file and read it away from the computer
  • Display the list of functions/methods
  • Generate a list of class or module names


Diff from a Previous Version

If you are using a version control system like git, this is pretty easy. If you are not (you should be), using a standalone diff tool is still worthwhile. The main benefit of comparing the current version to the old one is seeing what may have changed that you didn't intend or have forgotten about.

Did you remove a messy piece of error handling, intending to rewrite it once the change was made, and then forget? Did you add an if test without considering the else condition? Did you rename a class without considering related classes?
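A quick sketch of what this looks like in practice, in a scratch repository with an invented file:

```shell
# Compare the working copy against the last commit to spot
# edits you didn't intend (file contents are made up).
set -e
cd "$(mktemp -d)"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"

printf 'sub greet { return "hi" }\n' > App.pm
git add App.pm
git commit -qm "initial"

printf 'sub greet { my $name = shift; return "hi $name" }\n' > App.pm

git diff        # every change since the last commit, intended or not
```

The same comparison works between any two commits or branches, not just the working copy and the last commit.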

Explaining the Code to Someone

This is similar to the writer's trick of reading out loud. You are changing the mode in which you interact with the code, probably in multiple ways.

First of all, you will not read the code token by token. You look at the code and translate it into English (or your native language). This forces you to think about what you are seeing instead of just scanning the code.

Second, you are speaking your understanding out loud, which uses different parts of your brain than reading silently. You are also hearing what you are saying, which gives another mode for interacting with the code.

Third, the person you are explaining to may ask questions for clarification, which makes you less likely to gloss over weak parts of the explanation.

Amusingly, the first two effects are so strong that you do not really need to explain to someone who understands the code. I have seen similar advantages when explaining to a spouse (my wife has a technical background, but is not a programmer), a pet, and even a rubber duck.

Different Font

Part of the idea here is to make the code look different. Pick a wider, monospace font to make thin punctuation characters stand out. Pick a less comfortable font to force you to read rather than just skim. Use a tiny font to highlight the structure of the code, instead of just the syntax.

Syntax Highlighting

If you don’t use syntax highlighting normally, turn it on. If you do use it, change the color scheme.

The idea of syntax highlighting is to remind you of how the parser will see your code. The parser is dumb: it does not know what you meant, only what you wrote. Sometimes when you make a mistake, it's because what you meant is not what you wrote.

Changing your syntax highlighting may make you look at the code in a different way.

Different Tool

If you store your code on GitHub, GitLab, or some other version control site, try viewing your code there. Often the other tool will have different display characteristics: font, syntax coloring, page width, etc. This will make your code look slightly different and make reading it a slightly different experience from looking at the same editor screen.

Printed Version

It’s kind of old school, but printing a section of code can still be helpful. Unless you have carefully tuned your environment, the text on the page will look a little different than it does on the screen. The sizing may be different. The margins will have a visual effect that they don’t on the screen.

More importantly, you can change your physical environment: move to another location, have different lighting, highlight interesting bits, write notes, etc. You may be able to lay multiple pages out on a surface and see more than you could on one monitor. This makes larger patterns easier to see.

Method/Function List

This approach is more useful for design than troubleshooting. Obscuring all of the code except the method names (for a class) or functions (for a module) is useful for exploring whether you have a single responsibility or many.

It can also help point out whether the mental model embodied in the code has shifted over time. For example, do any new methods follow the same terminology as the others?
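One crude way to get such a list without any IDE support; the Ruby class below is invented for the example, and the grep pattern is deliberately naive:

```shell
# Strip a file down to just its method names.
set -e
cd "$(mktemp -d)"
cat > account.rb <<'EOF'
class Account
  def deposit(amount); end
  def withdraw(amount); end
  def balance; end
end
EOF

grep -E '^[[:space:]]*def ' account.rb
```

A real tagging tool (ctags, or your editor's symbol list) does the same job more robustly, but even this rough view makes it easy to ask whether the names hang together.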

Class/Module List

This is a larger version of the previous concept. Are the names of the similar modules or classes at similar levels of abstraction? Are they related to the same metaphors? Has the code mixed metaphors in a confusing way?


Many programming errors happen because what you think you wrote is not what you actually wrote. The compiler/interpreter is not smart enough to know what you meant.

When the code doesn’t do what you think it should, sometimes a change in perspective is all you need.

Language Documentation

As I said in my last post, many of the language communities that I interact with now are fond of asserting that their particular community does a wonderful job of testing, documentation, or both. This time, I’d like to examine the claims on documentation.

Once upon a time, all of the documentation for a language either came with the CDs (or other media) when you bought the compiler or was available for purchase as books. O’Reilly Media started as a small publisher that supplied books on open source tools and languages.

Many modern languages tout their extensive documentation. They talk a lot about their culture of documentation. Most of my recent documentation experience relates to Perl 5, Ruby, and Rust. My experience of Java documentation is many years in the past.

While most libraries in modern languages come with documentation, the quality of that documentation varies. Of the three listed above, I have to say that the documentation for Perl modules in general is much better than Ruby gems or Rust crates.

The Easy Way

Back in the late 90s and early 2000s, Java (and to some extent C and C++) adopted an approach to documentation based on annotating the source code with comments around classes and methods. This Javadoc style did move developers from no documentation to some documentation, but it did not provide more than the bare minimum. Unfortunately, people who learned that style brought it to other languages.

A Documentation Convention

The original Perl 5 module documentation was strongly influenced by Unix man pages. Over time, a convention developed around a series of sections for module documentation. Since all modules follow a similar format, it’s really easy to find what you need. The standard sections are:

  • NAME
  • SYNOPSIS
  • DESCRIPTION
  • AUTHOR
  • SEE ALSO
  • BUGS/CAVEATS, etc.

Most of these sections are self-explanatory. The DESCRIPTION section usually contains almost everything found in most of the library documentation in other languages. It normally contains classes, functions/methods, and constants, organized in subsections.
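As a sketch, a minimal module document following this convention might look like the following (the module name and code are invented):

```pod
=head1 NAME

My::Widget - one-line summary of what the module does

=head1 SYNOPSIS

    use My::Widget;

    my $widget = My::Widget->new( size => 3 );
    $widget->spin;

=head1 DESCRIPTION

Subsections documenting the classes, functions/methods,
and constants the module provides.

=head1 BUGS/CAVEATS

Known limitations.

=head1 AUTHOR

Who wrote it and how to reach them.
```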

A Critical Section

The section that I miss the most in other languages' libraries is the SYNOPSIS. It provides an example usage of the module, which may be as short or as long as the author thinks is useful for giving people a feel for how to use the module. This code is usually safe to copy into your project to get started.

For some gems or crates, I can find the equivalent by searching on the web, but that is often not ideal. Will the examples I find refer to this version of the library or an older one? How do I know that the person who wrote the example knew what he/she was talking about? How likely is it that the example shows how the author intended for the library to be used?

Having a synopsis or example in the module gives me some reason to believe positive answers for all of those questions. The synopsis is not guaranteed to be correct. Sometimes they fall out of sync with the code. Sometimes the author didn’t pick a good example. But, those kinds of documentation errors often result in a bug report or pull request in the Perl community.

But Wait, There’s More

In many of the larger or more powerful Perl module distributions, the authors commonly add more extensive documentation, including:

  • Getting Started/Quick Start Guide
  • Tutorial
  • Cookbook

If configuring the module or getting set up for your environment is complicated, some form of Getting Started guide helps people get over the first hump. If a new user cannot get the module up and running, they aren’t going to use it.

If the module assumes a certain mental model or approach to solving problems, a tutorial is quite helpful. The goal here is to walk the user through some of the more common use cases and give them enough information to continue on their own.

If the module supports a large number of use cases or has a number of smaller pieces, a cookbook-formatted document can help. It focuses on showing the user how to accomplish specific tasks with the module.

Unlike most libraries I’ve seen in other languages, this sort of information is supplied by the module author, directly as part of the module distribution. Very large frameworks in other languages may provide something similar (Rails, for example), but it does not appear to be a standard approach for libraries.

Improvement By Nudging

There are a number of testing modules in the Perl 5 ecosystem that focus on improving documentation. Mostly, they just make sure you have the appropriate sections in your main docs and that every public method/function has documentation. Despite their simplicity, they serve as a gentle reminder to add some minimal documentation.

Of course, there’s no way for a testing module to make certain you’ve written good documentation, and someone can certainly skip those test modules and the documentation they suggest. However, the slight nudge of the tests and the comparison with the other modules on CPAN serves as gentle peer pressure to help people improve their modules by keeping their documentation updated.

Some Good Examples

As an experiment, I decided to take a quick look at libraries from languages I don’t use as much: Python and JavaScript.

A look around the Python Package Index showed very inconsistent documentation, ranging from none at all to really good. Quite a few Python libraries included an Example section that served the same purpose as the Synopsis I was wishing for above. There did not seem to be much consistency of format between libraries, but the ones that had documentation seemed to do well.

For JavaScript, I looked at libraries on the NPM site. Once again, I saw varying quantity and quality of documentation. In the better documentation, there were examples in either a Usage or an Examples section.

Standing on the Shoulders

Like I said in the previous post, there is nothing preventing library authors in other languages from providing the level of documentation that is common for Perl 5 modules. In fact, this is easier than the testing infrastructure I described last time. All it takes to improve the documentation for your Ruby gem, Rust crate, or Java library, is to realize that your users need this information and write it. Writing good documentation isn’t easy, but the effort really helps your users.

The Python and JavaScript communities are definitely doing a much better job in this regard. They might also learn from Perl 5’s lessons on adding documentation analysis and reporting on their libraries. The communities should also focus on some gentle peer pressure to get library authors to do a better job on their documentation.

I know many people out there have the view that everything associated with Perl 5 is bad. But, the Perl community has done a great job with documentation. One of the biggest drivers for this was the community realizing years ago that not all module documentation was great and deciding to start encouraging module authors to improve. Other language communities would do well to learn from this experience.

Language Testing

Many of the language communities that I interact with now are fond of asserting that their particular community does a wonderful job of testing, documentation, or both. Most languages come with very good testing frameworks, and have libraries (or modules, gems, or crates) for even more. (Going forward I’m just going to use the term library, mentally replace with the term used in your language.)

Most of my experience with newer languages and their emphasis on testing involves Ruby and Rust, with a little Java and JavaScript on the side. These languages claim that testing is part of the culture of the language. The supporters of these languages point to testing support in the language. They point to documentation on testing the libraries associated with the language.

Some History

The proponents of some of these languages use their testing culture as one of the arguments why they are better than the languages that came before. To be fair, back when I was a C programmer, most of us did not do automated testing. The same was true when I was a Fortran programmer, and when I programmed in Forth.

One day, I began programming in a language that was very different from those languages. It had a strong suite of tests for the language itself, although most of us users never saw it. The same framework that supported the language tests was available to library authors. The community encouraged library authors to provide a unit test suite with each library. The tooling around the package management system made sure the tests were run as part of installation, to verify that the library would work in your environment.

A System to Learn From

The language was, of course, Perl 5. The CPAN repository stores the publicly accessible modules. If you browse around the modules, you will find that almost every one has a sizable test suite. Many of these tests are built on the venerable TAP system. If Perl’s testing stopped there, other languages could shrug and move on.

However, the Perl 5 community was not finished yet. Perl 5 was supported on a number of OSes, and a large number of Perl 5 versions are always supported and in use. So, some volunteers in the Perl community built the CPAN-Testers system. Using processing time donated by the community and various companies, this service tests the modules on CPAN against every supported version of Perl 5, on every supported OS. Anyone who wants to can set up a machine to run the testing code. Some people set up dedicated machines that run many different versions of Perl 5. Others use downtime on their machines to run the tests.

The result is a huge compatibility matrix that is available to help people who want to know if a module is supported on their OS/Perl version combination. As an added feature, when a module author uploads a new version of a module to CPAN, the system emails the user reporting whether or not the module passed its test suite across all of the supported Perl versions and OSes. It’s up to the author to decide if it is worth the effort needed to fix any incompatibilities.

But, once again, the Perl community would not leave well-enough alone. Even with good testing tools, it can be hard to get developers to add the testing if they don’t see a benefit. After evangelizing writing good test suites for a while, some volunteers in the community built the CPANTS system to provide a bit of gamification for good attributes the community would like to see in a module. They call their measure Kwalitee. It’s not the same as quality, but it’s kind of similar.

Other Languages

I would really like to see other languages learn some lessons from the Perl 5 ecosystem. A system that does essentially continuous testing of all modules, for all supported language versions, on all supported OSes is an amazing resource. Not all modules have OS dependencies, but some do. Being able to tell which language versions will and won’t work with your library allows you to explicitly decide when you want to drop support for an older version.

I know many people will scoff at the possibility of learning anything from Perl in any form. That would be a mistake. Perl 5 (the language and the community) is battle-hardened. Like a grizzled old soldier, it has a lot to teach the young-uns. Not all of its lessons will apply to all languages, but it is pretty silly not to learn from the experience.

The Web Has Eaten Programming

Long ago when I began programming, most programs were standalone executables. Back in those days, almost all programs were run from a command line or executed by clicking on an icon on a GUI. This was before the web existed and most people didn’t even have dial-up access.

Things are very different today, with many applications having migrated to the web. I work in a Ruby on Rails shop at present, so I shouldn’t be surprised that most of the developers I work with immediately think of spinning up a website for any programming task.

In Standalone Command Line Tools, I talked about a command line tool and how some of my co-workers immediately talked about making a website out of it. In a team meeting last week, they began talking about a popular recruiting term: Full Stack Developer. It dawned on me that in their minds the whole world of software development boiled down to:

  • Back-end
  • Front-end
  • Full Stack (which covers both of the above)

In this model, there is no concept of programming outside of a web context. If I push on the concept a bit, some might admit that the operations people write some code in other tools, but they don’t see it as the same as programming.

Website vs Standalone Program

There are some fundamental differences between standalone code and web-based code. The most important difference to me is the intended audience. Web development is inherently aimed at a large group with an unknown level of experience and skill. Standalone code generally has an audience of one or a small number of people. I’ve worked on commercial programs that had hundreds to thousands of users. That’s still small compared to many web applications.

Standalone programs can also be focused on smaller problems. This is part of the Unix philosophy: each tool should do one thing well. This means that a standalone program is easier for exploring small or focused ideas. I’m sure almost every programmer has had the experience of wanting to try out a small idea and getting bogged down in building the code around the idea and eventually letting the project die.

This also means that you are less likely to procrastinate by playing with the perfect theme or UI library. You can focus on exploring the idea, not the trappings. Many web development projects I’ve seen flounder on look-and-feel or responsiveness issues that are not the core of the project.

Make the Computer Work for You

One of the other important features of a standalone program is the idea of making the computer work for you instead of the other way around. There are a number of small problems that I have needed to solve over time that do not require a fancy UI. In fact, many of these tools will never be used by anyone but me. It may be cleaning up some data that I get once in a great while, or some maintenance task I run once a month.

None of these tools require a website. In fact, maintenance of the libraries and such around the actual work would swamp any work I’m ever likely to do on the code. If the tool were exposed as a website, I would need to focus much harder on security implications (which aren’t a big deal on a program I wrote for me to run on one computer). All of this extra work is unnecessary for the job I want to do: make the computer solve a problem for me. But, it is absolutely necessary if the tool is ever exposed to the web.

The idea of most standalone tools I’ve written over time is to either reduce the time a job takes me or to encode a process I will need to repeat so that I don’t have to remember the details. In other words, I want the computer to do the boring parts and leave the interesting stuff for me to do.

Web Development

It seems that the whole of programming for most of the developers I know is web development. If it doesn’t get hosted somewhere, and get thousands to millions of hits, what’s the point?

This isn’t to say that they have not done impressive work. Several of my co-workers have built tools that are available on-line and solve problems for many inside the company and out. However, I still see many of them doing repetitive tasks by hand without really thinking about automating them.

I remember when the web was pretty new, some of us would create standalone programs that opened a website on a local port to supply a UI. This was mostly to avoid the annoyance of working with the OS UI libraries, because the relatively simple UIs we needed did not require all of the OS support and overhead.

In the present time, no one would be willing to accept one of those simple UIs. You need to decide which CSS framework is the new hotness (or at least which one you won’t have to replace next month if you want your site to look current). You need to pick a JavaScript framework to give yourself the right responsive model. You need to make sure all of your libraries are up to date and there aren’t any known vulnerabilities. Don’t forget to pick where you will host the code (and how much are you willing to spend to host this, anyway?).


It has been said that “software has eaten the world”, now that processing power and code have gotten advanced enough to replace much special purpose hardware. It now also seems to me that the “web has eaten software development”.

Review of Clean Agile

Clean Agile: Back to Basics
Robert C. Martin
Prentice Hall, 2019

Bob Martin begins this book by describing it as his personal recollections, rather than a work of research. He starts by pointing out that agile software development started as a way of describing what had worked for some small development teams working on reasonably small projects. The book focuses mostly on the original intent of the people working on what would become agile.

In covering the topic, Martin ranges through the history and lessons of software development, describing both successful ideas and failures. A large part of what he seems to be doing is explaining the context around the development of the Manifesto for Agile Software Development. Many courses teach agile as a solution to all problems. This book spends the time needed to make sure you understand where agile came from so you can understand when and how to apply it.

Practices and Principles

The book describes the original practices and principles that made up agile and explains once again how and why they support agile development. The book is very much about effectively getting work done. Martin also describes how important the principles are in a world that is driven by software.

He explains the principles and how the original practices fit together to support the principles. He explains both low-level technical practices (like pairing and TDD), and values (like courage and communication). Martin does not lay out a set of commandments for how software should be developed. Instead, he explains the goals of software development and how agile practices and principles get us there effectively.

Other Opinions

One thing Martin did in this book that really surprised me was to bring in people to write chapters that disagreed with his opinions. One of the important lessons of agile is realizing when you don’t know something and experimenting to fix that lack. Having chapters in his book that disagree with his opinions really follows the spirit of that lesson.


For me, one of the central concepts in the whole book is introduced relatively early:

Hope is the project killer.

Martin makes the very clear argument that we should run projects based on data, not on hope. As long as we

  • hope we understand the problem
  • hope we have a good solution
  • hope we are using the right technology
  • hope we have the right people
  • and, hope we can hit the deadline

we are pretty much guaranteed to fail.

He makes a solid argument that the goal of agile is to learn as fast as we can. We want to generate data about what we are doing. We want to make that data visible. This way we can manage the project, instead of hoping to succeed.


I would definitely recommend this book to any developer with more than a year or so of experience. It will be more useful once you get past the junior-level tunnel vision on implementation and technology and begin looking at larger projects, realizing that tech itself is not enough.

For the part of the team doing implementation, the focus on practical techniques is a refreshing change from some current agile writing. For the people managing the process and team, the reminder of the original goals of agile serves as a check on the sometimes process-heavy forms of agile.

What Makes a Good Editor?

In a previous post, I spoke a bit about why you might want to learn vim. Despite the fact that I prefer vim as my programming editor, I would never argue that everyone should use it. For some people, vim meshes with the way they think. In my case, it wasn’t an immediate match, but as I came to understand the editor, it made more sense to me. For that reason, the features that I think make a good editor will be influenced by my choice of tool. However, most of what I consider critical is supported by almost every good programming editor.

Programming Editor

There are a number of types of programs available for entering and changing text. Not all of them are suitable for programming. A good programming editor must provide features that support the work of programming: writing and editing code.

Programs are not just text; the structure of a program is usually as important as the text itself. This means the features you use to edit code are different from those for working with normal text.

Major Features

Not all features for editing programs are the same. Let’s start with the more critical features. A decent programming editor would need to have a majority of the following:

  • language support
  • syntax highlighting
  • ability to quickly move around in a file
  • editing commands beyond insert, delete, copy, and paste
  • automatic indentation
  • search/replace with regular expressions

The more powerful editors have all of the above features and more. Some more powerful features would include:

  • tools to reduce repetition
    • ability to repeat a command
    • ability to repeat a series of commands
    • ability to script functionality
  • replaceable snippets
  • ability to quickly re-arrange code
  • ability to interface with other programming tools
    • compiler/interpreter
    • version control
    • static analysis tools
    • testing tools
    • tagging tools
    • documentation support

Other programming editors have more specialized features, such as:

  • code folding
  • refactoring support
  • code completions (deep knowledge of particular languages)
  • ability to run tools in the background/continuously
  • support for some form of plugins to extend functionality

Some Programming Editors

Many tools meet these criteria. For those who like vim, there are also vi, neovim, and spacevim. For fans of emacs, in addition to GNU Emacs, there is spacemacs. I know programmers who swear by TextMate, Sublime, and vscode. For tools tied more closely to a particular language, you can go with Eclipse or IntelliJ.

All of these editors support some large subset of the features listed above and other features besides. None of them will make you a better programmer immediately, but they all provide the tools to help you program effectively.

Learning Your Editor

Once you’ve chosen your editor, you need to spend time learning how to make it your tool. You need to master its features. You don’t have to learn all of the features immediately (or ever). The more you learn and practice features, the more effective you will be with the editor.

As you learn features, you will find actions that once took several steps can be performed with a single keystroke or command. You will find that powerful commands can be modified so that they only work in certain areas of the code (changing one function instead of a whole file). Some of these features you will use daily, others more rarely. Each new command you learn makes you a more effective programmer.
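As a hedged illustration from vim (the names here are placeholders, not from any real codebase), the same substitution command can apply to the whole file or be restricted to a range, such as a visual selection or a span of lines:

    :%s/old_name/new_name/g      " replace throughout the whole file
    :'<,'>s/old_name/new_name/g  " replace only within the visual selection
    :10,25s/old_name/new_name/g  " replace only on lines 10 through 25

Learning to scope a command like this is exactly the kind of small skill that compounds over time.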


A programming editor has a large number of powerful commands and features. No one uses all of them all of the time. The reality is that most of these commands will not be useful to you right away. You will need to try new commands and practice them until they become muscle memory. You want to be able to think “make the change” and have your fingers do it, rather than trying to remember how to make the change happen.

In many ways, this is similar to working in a high-level language rather than assembler. Instead of wasting mental energy remembering how to make the change happen, we can just focus on the change itself.

Over the time I have been using vim, there have been several instances where I learned a command 3 or 4 times before it actually stuck. The first few times, I just didn’t need the feature often enough to make it part of my toolbox. In many cases, a different project or a different workflow suddenly made a command I had once found unnecessary exactly what I needed to solve a problem.


Being a great programmer is not a matter of picking the right editor. But not using a programming editor can be seen as a drag on your abilities. You will spend more time working on the mechanics of making changes, instead of spending that time on thinking about the changes. A good programming editor becomes a power tool that can let you perform more with less mental effort.

Standalone Command Line Tools

In a recent conversation with some co-workers, I described a set of command line scripts I had put together over the last few years to automate some annoying tasks I’ve needed to do at different times.

None of what I did was particularly hard or clever; in fact, most of the tools involved querying internal services, manipulating the output, and displaying it on the screen. Since the services I was querying had multiple servers, I had automated the annoying part of tracking down the names of all of the servers and iterating over them.

I was surprised to find that some of the senior people on the team immediately thought the best thing to do would be to build this functionality into each of the services.

The Framework Trap

Part of the issue here comes from people who have only worked in a programming framework of some kind. The framework becomes a kind of Golden Hammer.

The problem is that any framework comes with a certain amount of overhead (computer resources or cognitive load). If the framework actually fits the needs of the problem, that’s a reasonable trade-off. If it doesn’t, then the framework will often do more harm than good.

Crafting a Standalone Tool

The process for developing a quick, standalone tool is very different from working in a framework. When working with a framework, you

  • need to determine that there is an application that you want to write
  • create a project using the framework (or identify a project you want to add the functionality to)
  • identify the features of the framework that you will make use of

Now you can get down to the business of building the tool.

The standalone approach was easier (for me, at least).

All of the servers I needed to access had an endpoint at a standard location that returned information on the running server. The output in each case was a blob of JSON.

I started by using curl to call one of the servers to retrieve the status information. After verifying that I could manage that, I passed the output through jq to extract relevant information and format it.

curl "https://service1/status" | jq -c '.'

The next time I needed to do this, I decided to run the command across multiple servers. I used a bash for loop to execute the curl command once for each server.

for i in 1 2 3 4 5; do
    curl "https://service$i/status" | jq -c '.'
done

At a later date, I found we had a service that could give me a list of the servers running a particular application. I used that to make my command more robust. (I’ve wrapped up the code for extracting the list of servers in the list_servers script to keep it out of the way.)

for h in $(list_servers service); do
    curl "https://$h/status" | jq -c '.'
done
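For reference, here is a minimal sketch of what a list_servers script might look like. The inventory URL and its output format (one hostname per line) are assumptions for illustration, not the actual internal service.

```shell
#!/bin/bash
# Hypothetical sketch of list_servers. Assumes an internal inventory
# endpoint (the URL below is made up) that returns one hostname per
# line for the named application.
INVENTORY_URL="${INVENTORY_URL:-https://inventory.example.com}"

function list_servers() {
    local app="$1"
    if [ -z "$app" ]; then
        echo "usage: list_servers <application>" >&2
        return 1
    fi
    curl -s "$INVENTORY_URL/apps/$app/servers"
}
```

Keeping the lookup in its own script means the iteration logic above never needs to change when the inventory service does.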

I decided that it would be nice to use this for multiple services and to be able to choose the query to pass to jq. This made it worthwhile to make a bash function out of it. I also added a little bit of error checking.

function ping_servers() {
    local app="$1"
    if [ -z "$app" ]; then
        echo "Missing application name"
        return 1
    fi
    for h in $(list_servers "$app"); do
        echo -n "$h: "
        curl "https://$h/status" | jq -c '.'
    done
}

This version lasted me a year or two before I found out about another quirk of our environment that made it worthwhile to expand this into its own bash script. The details of that script are not necessary to the rest of the discussion.

The Process

When I started this tool, I had very little idea where I would need to go with it. I didn’t know about other services that would make the result more robust and complete. And, to be honest, the first couple of versions came about while fighting fires. I needed to automate a task quickly, and did not have time to do a project.

If I had just put the work aside until I could build a project with the framework, I would never have gotten it built. On the other hand, the quick command line tool was simple enough that the first version worked the first time. Each iteration came during support work, where modifying the tool was tangential to the problem I needed to solve.

The end result is a tool that I probably use at least once a week. I’ve duplicated and modified the script to work in environments that are different than the original servers.


The point for me of this exercise was not to build the perfect, prettiest tool to display this information. The goal was to solve my problem of the moment, with the least amount of effort. Quick command line tools are really good for that. More importantly, I did not need a pretty UI or support for every kind of user, I needed something to make me more effective, quickly.

Too many programmers, in my experience, forget that making themselves more productive is also important. We shouldn’t spend all of our time writing tools for ourselves, but if we don’t spend some time doing it, no one else will.

Object Oriented Programming Considered Harmful?

I realize that the title of this post is likely to generate a lot of noise, but it’s a subject I have been thinking about for much of the last two decades. Over ten years ago, I wrote a post questioning the everything is an object approach several languages seemed to be taking. That post came out of a series of articles about programmers latching on to a new paradigm and ignoring all knowledge that came before. (See my Programming Paradigms series to get a feel for some of my early thoughts.)

This topic was brought to mind again when I stumbled across a couple of videos by Brian Will from 2016: Object-Oriented Programming is Bad and Object-Oriented Programming is Embarrassing: 4 Short Examples. Although the titles of the videos seem geared to really upset some people, many of the points he makes are good ones. If you have never programmed in anything but the Object Oriented paradigm, you owe it to yourself to watch what he has to say and consider his points.

One thing worth noting is that Brian Will does not completely throw out all of Object Oriented programming. He does point out some places where objects might be useful. But, given that objects are the dominant paradigm, the only way to get people to consider scaling back is to push back pretty hard. I don’t expect Will’s videos to completely change anyone’s understanding of programming, but they do provide a counter-point to current orthodoxy.

If you’ve ever had a sneaking suspicion that objects don’t solve every problem, these videos might help you solidify that thought. If you are absolutely convinced that objects are the only way to program, these videos might make you consider a different viewpoint.

Why Learn Complex Tools?

I recently ran across the article Why use Vim: Forget easy-to-use design. Choose something hard instead — Quartz. This article suggests that you should learn to use the vim editor because it’s hard.

Although I do think most developers would benefit from using an editor like vim, I feel like the “because it’s hard” advice focuses on the wrong thing. I can think of plenty of hard things that I would never advise someone to do, because the benefits don’t exceed the difficulty.

Why is vim Hard?

Let’s start with a question. Why do many people find vim to be hard? Most people answer with some combination of:

  • Not user-friendly
  • Not intuitive
  • Modal
  • Minimal (no) use of the mouse
  • Cryptic commands
  • Does not work like whichever editor you used first

Many of these complaints boil down to the same thing. vim does not approach editing text the way you expect (unless you are a vim/vi user). This mismatch between your mental model of how text is edited and vim’s approach makes everything seem harder. Most editors focus on entering text, whereas vim focuses on changing text. Once you really understand vim’s mental model, many of its design choices make much more sense. By focusing on changing text, vim becomes an extremely powerful tool in the hands of someone who understands it.

What Makes vim (and Other Programming Editors) Power Tools?

The mental model of vim is part of what makes it a power tool. To support this model, the editor has a set of important features:

  • Commands for changing text (cut/yank, delete, paste, change, indent/outdent, change case)
  • Powerful ways for specifying the text objects to change
  • Search and replace functionality with regular expressions
  • Support for automating repetition
  • Scriptability

These features are specifically focused on the ability to edit text that already exists. Over the life of your typical project, you will spend much more time changing text than you will entering text into a new file. vim optimizes for that workflow.
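To make this concrete, here are a few vim commands (illustrative examples; they act on whatever happens to be under the cursor) showing how an operator combines with a motion or text object:

    dw    " delete from the cursor to the start of the next word
    ciw   " change the word under the cursor
    yi(   " yank (copy) the text inside the surrounding parentheses
    >ip   " indent the current paragraph
    .     " repeat the last change

Each command describes a change, not a sequence of cursor movements, which is the heart of vim’s model.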

There are other features that any powerful editor should have that don’t relate directly to vim’s mental model.

  • Syntax highlighting
  • Support for multiple languages
  • Ability to run programs on the code
  • Extension through plugins
  • Configurable interface and commands
  • Templating or snippets support

Tying all of these together is a focus on reducing the rote or repetitive portions of writing code.

Should You Learn/Use vim?

I find vim very effective for the work I do. But that does not mean I would argue that everyone (or even every programmer) should use vim. I use vim because it is a power tool. Power tools are harder to learn than simple tools, but they multiply your abilities tremendously. Any editor that extends your abilities could help you become a more effective developer. I have several good friends who are emacs users. Other than a little friendly ribbing, we don’t really try to convert each other or care much which tool the other uses. Both are power tools that serve similar purposes. I know people who are very effective with Sublime or TextMate. If your tool makes you effective, I really don’t have an opinion on which one you use.

Personally, I also think that challenging your mental model of editing is an effective way to level up your abilities, even if you don’t use vim every day (or ever again). Challenging your way of solving problems, and really learning a new one, opens up new ways to think about problems. That is an important advantage when it comes to programming.

Power Tools

The important thing about any power tool is not that it is hard to learn. The important thing is that it multiplies your effectiveness. It’s safe to say that any tool easy enough that you can learn everything about it the first time you use it is probably not a power tool. Simple tools may add to your effectiveness, but they definitely won’t multiply it.