Standalone Command Line Tools

February 24, 2020

In a recent conversation with some co-workers, I described a set of command line scripts I had put together over the last few years to automate annoying tasks that have come up at different times.

None of what I did was particularly hard or clever; in fact, most of the tools just queried internal services, manipulated the output, and displayed it on the screen. Since the services I was querying ran on multiple servers, I had automated the annoying part: tracking down the names of all of the servers and iterating over them.

I was surprised to find that some of the senior people on the team immediately thought the best thing to do would be to build this functionality into each of the services.

The Framework Trap

Part of the issue here comes from people who have only ever worked inside a programming framework of some kind. The framework becomes a kind of Golden Hammer.

The problem is that any framework comes with a certain amount of overhead (computer resources or cognitive load). If the framework actually fits the needs of the problem, that’s a reasonable trade-off. If it doesn’t, then the framework will often do more harm than good.

Crafting a Standalone Tool

The process for developing a quick, standalone tool is very different from working in a framework. When working with a framework, you

  • decide that there is an application you want to write,
  • create a project using the framework (or identify an existing project to add the functionality to), and
  • identify the features of the framework you will use.

Now you can get down to the business of building the tool.

The standalone approach was easier (for me, at least).

All of the servers I needed to access had an endpoint at a standard location that returned information on the running server. The output in each case was a blob of JSON.
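
For illustration, a status response might look something like this (the exact fields here are hypothetical):

{
  "host": "service1.example.com",
  "version": "2.4.1",
  "status": "OK",
  "uptime": 918000
}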

I started by using curl to call one of the servers to retrieve the status information. After verifying that I could manage that, I passed the output through jq to extract relevant information and format it.

curl https://service1.example.com/status | jq -c '.'
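
Once that worked, narrowing the output was just a matter of changing the jq expression. For example, assuming the hypothetical fields shown above, this pulls out just the version and status:

curl https://service1.example.com/status | jq -c '{version, status}'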

The next time I needed to do this, I decided to run the command across multiple servers. I used a bash for loop to execute the curl command once for each server.

for i in 1 2 3 4 5; do
   curl https://service$i.example.com/status | jq -c '.'
done
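
(In bash, for i in {1..5} would accomplish the same thing with brace expansion, but the explicit list was good enough at the time.)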

At a later date, I found we had a service that could give me a list of the servers running a particular application. I used that to make my command more robust. (I've wrapped up the code for extracting the list of servers in the list_servers script to keep it out of the way.)

for h in $(list_servers service); do
    curl "https://$h/status" | jq -c '.'
done
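
The details of list_servers don't matter much here, but as a sketch, it might be little more than a query against an inventory endpoint (the URL and jq expression below are made up for illustration):

#!/bin/bash
# list_servers: print the hostnames running a given application, one per line.
curl --silent "https://inventory.example.com/servers?app=$1" | jq -r '.[].host'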

I decided that it would be nice to use this for multiple services and to be able to choose the query to pass to jq. This made it worthwhile to turn the loop into a bash function. I also added a little bit of error checking.

# Query the status endpoint of every server running an application.
# Usage: ping_servers <app> [jq-expression]
function ping_servers() {
    if [ -z "$1" ]; then
        echo "Missing application name" >&2
        return 1
    fi
    local app=$1
    local expr=${2:-.}  # default to the whole status document
    for h in $(list_servers "$app"); do
        echo -n "$h: "
        curl --silent "https://$h/status" | jq -c "$expr"
    done
}
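
Using the function looks like this (the application name here is hypothetical). Because the jq expression defaults to '.', calling it with a single argument dumps the full status for every server:

ping_servers billing               # full status from every billing server
ping_servers billing '.version'    # just the version field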

This version lasted me a year or two, until I discovered another quirk of our environment that made it worthwhile to expand this into its own bash script. The details of that script are not necessary for this discussion.

The Process

When I started this tool, I had very little idea where I would need to go with it. I didn’t know about other services that would make the result more robust and complete. And, to be honest, the first couple of versions came about while fighting fires. I needed to automate a task quickly, and did not have time to do a project.

If I had just put the work aside until I could build a project with a framework, I would never have gotten it built. On the other hand, the quick command line tool was simple enough that the first version worked the first time. Each iteration came during support work, where modifying the tool was tangential to the problem I needed to solve.

The end result is a tool that I probably use at least once a week. I've since duplicated and modified the script to work in environments different from the original servers.

Conclusion

For me, the point of this exercise was not to build the perfect, prettiest tool for displaying this information. The goal was to solve my problem of the moment with the least amount of effort. Quick command line tools are really good for that. More importantly, I did not need a pretty UI or support for every kind of user; I needed something to make me more effective, quickly.

Too many programmers, in my experience, forget that making themselves more productive is also important. We shouldn’t spend all of our time writing tools for ourselves, but if we don’t spend some time doing it, no one else will.
