Object Oriented Programming Considered Harmful?

I realize that the title of this post is likely to generate a lot of noise, but it’s a subject I have been thinking about for much of the last two decades. Over ten years ago, I wrote a post questioning the “everything is an object” approach several languages seemed to be taking. That post came out of a series of articles about programmers latching on to a new paradigm and ignoring all knowledge that came before. (See my Programming Paradigms series to get a feel for some of my early thoughts.)

This topic was brought to mind again when I stumbled across a couple of videos by Brian Will from 2016: Object-Oriented Programming is Bad and Object-Oriented Programming is Embarrassing: 4 Short Examples. Although the titles of the videos seem geared to really upset some people, many of the points he makes are good ones. If you have never programmed in anything but the Object Oriented paradigm, you owe it to yourself to watch what he has to say and consider his points.

One thing worth noting is that Brian Will does not completely throw out all of Object Oriented programming. He does point out some places where objects might be useful. But, given that objects are the dominant paradigm, the only way to get people to consider scaling back is to push back pretty hard. I don’t expect Will’s videos to completely change anyone’s understanding of programming, but they do provide a counter-point to current orthodoxy.

If you’ve ever had a sneaking suspicion that objects don’t solve every problem, these videos might help you solidify that thought. If you are absolutely convinced that objects are the only way to program, these videos might make you consider a different viewpoint.

Why Learn Complex Tools?

I recently ran across a Quartz article, Why use Vim: Forget easy-to-use design. Choose something hard instead. This article suggests that you should learn to use the vim editor because it’s hard.

Although I do think most developers would benefit from using an editor like vim, I feel like the “because it’s hard” advice focuses on the wrong thing. I can think of plenty of hard things that I would never advise anyone to do, because the benefits don’t justify the difficulty.

Why is vim Hard?

Let’s start with a question. Why do many people find vim to be hard? Most people answer with some combination of:

  • Not user-friendly
  • Not intuitive
  • Modal
  • Minimal (no) use of the mouse
  • Cryptic commands
  • Does not work like whichever editor you used first

Many of these complaints boil down to the same thing. vim does not approach editing text the way you expect (unless you are a vim/vi user). This mismatch between your mental model of how text is edited and how vim approaches text editing makes everything seem harder. Most editors focus on entering text, whereas vim focuses on changing text. Once you really understand vim’s mental model, many of its design choices make much more sense. By focusing on changing text, vim becomes an extremely powerful tool in the hands of someone who understands it.

What Makes vim (and Other Programming Editors) Power Tools?

The mental model of vim is part of what makes it a power tool. To support this model, the editor has a set of important features:

  • Commands for changing text (cut/yank, delete, paste, change, indent/outdent, change case)
  • Powerful ways for specifying the text objects to change
  • Search and replace functionality with regular expressions
  • Support for automating repetition
  • Scriptability

These features are specifically focused on the ability to edit text that already exists. Over the life of your typical project, you will spend much more time changing text than you will entering text into a new file. vim optimizes for that workflow.

There are other features that any powerful editor should have that don’t relate directly to vim’s mental model:

  • Syntax highlighting
  • Support for multiple languages
  • Ability to run programs on the code
  • Extension through plugins
  • Configurable interface and commands
  • Templating or snippets support

Tying all of these together is a focus on reducing the rote or repetitive portions of writing code.

Should You Learn/Use vim?

I find vim very effective for the work I do. But, that does not mean that I would argue that everyone (or even every programmer) should use vim. I use vim because it is a power tool. Power tools are harder to learn than simple tools, but they multiply your abilities tremendously. Any editor that extends your abilities could help you become a more effective developer. I have several good friends who are emacs users. Other than a little friendly ribbing, we don’t try to convert each other or care much which tool the other uses. Both are power tools that serve similar purposes. I know people who are very effective with Sublime or TextMate. If a tool makes someone effective, I really don’t have an opinion about which one they use.

Personally, I also think that challenging your mental model of editing is an effective way to level up your ability, even if you don’t use vim every day (or ever again). Challenging your way of solving problems and really learning a new one opens up new ways to think about problems. That is an important advantage when it comes to programming.

Power Tools

The important thing about any power tool is not that it is hard to learn. The important thing is that it multiplies your effectiveness. It’s safe to say that any tool that is easy enough that you can learn everything about it the first time you use it is probably not a power tool. Those simple tools may add to your effectiveness, but they definitely won’t multiply it.

Proficiency Revisited

Long ago, I posted a short note On Proficiency where I noted four things I believed were needed to become proficient at an activity: aptitude, interest, knowledge, and skills.

My experience learning software development took a different route than many developers today. I started development in very small teams (1-2 developers). I had actually been working with a larger group at the time, in an unusual language. As a result, we tended to have to train people from scratch to work on our system. From this experience, I didn’t have many preconceived notions of who could or could not do development.

I was later surprised to have some communities decide ahead of time who could or could not have an aptitude for programming based on their gender or race. Of course, I was aware that many older programmers were white males. But, that was true in most fields because white males had dominated most fields for decades, so there were obviously more of them. I may have been naive, but I saw this as an opportunity problem rather than an aptitude issue. I hadn’t seen any race or gender correlation in the people I had learned from and trained. I saw talented programmers that were of many different races and genders. Empirical evidence trumped bias in my mind.

In my original article, I used a physical sports metaphor for explaining the aptitude portion of proficiency. That actually fails for programming. Worse, it gives the impression that aptitude is physically obvious. I’ve seen many different people learn to program, and the aptitude portion depends on how their brains work. No one can see that from the outside, despite the fact that many believe they can. Unfortunately, generalization and stereotyping also play into this issue.

The only way to see if people have an aptitude for programming is to allow them to try to program. Trying to guess someone’s aptitude by looking at them (or their resume) is doomed to fail, and it has almost certainly held back talented programmers who could have done some great work.

“Learning by Doing” is Not Enough

I’ve spent a fair portion of my career teaching and mentoring other programmers. Some of this training involves covering basic programming and design concepts that they may not have learned previously. When covering a fundamental concept, I often defer to books written by people much smarter than I am rather than explain the concept myself. Sometimes, the person I’m training has balked at reading any material because they prefer to learn by doing.

I am quite aware that people learn some things better by actually doing them. I started my career as a self-taught programmer. I learned programming by writing code for hours. I quickly realized that I needed more if I was going to get really good at this. So, I began reading programming books.

Experience

There is a problem with only learning by doing. You can certainly learn from your own mistakes, and you are likely to retain the lesson if you made the mistake yourself. Even though programming is a relatively young field, there are still decades of experience out there, and there is no way that you can rediscover all of that knowledge by trial and error. In other words, you need to also learn from the experience of others. No matter how prolific you are at doing, there is no way to match the experience of everyone who has written about programming since the 50s.

Knowledge is part of the tool set you use for writing code. Only using learning by doing is like building a house only using tools (and materials) you made yourself. How long would it take to build even a doghouse if you had to develop a hammer from scratch first?

Old Knowledge

The next problem I have with the “learn by doing” mantra is the (implicit) assumption that all of the knowledge that came before is old and useless. I’ve been told flat out by some developers:

  • There’s no need to pay attention to size of data, we have virtual memory.
  • There’s no need to think about managing memory, because we have garbage collection.
  • There’s no need to understand standalone programs, because we do everything on the web.
  • There’s no need to pay attention to how things work in C, we use higher-level languages.
  • There’s no need to pay attention to primitive numeric types, our language converts numbers as needed.
  • I don’t need to know how a hash/dictionary/associative array works, it’s constant time lookup.

I’ve seen every one of these things blow up on some programmer sometime in the last few years.

Some of our fundamental algorithms were developed when resources were much more constrained. Knowing that helps you understand the trade-offs made in those algorithms. It also helps when you need to understand a program that has suddenly run out of a supposedly infinite resource. I have sometimes solved problems based on an explanation of a technique from the days when main storage was tape.

Many developers are fond of tossing off the line “Don’t reinvent the wheel” when someone proposes recreating something they already know about. Unfortunately, I’ve also seen these same people re-discover ideas from the 70s, 80s, or even 90s. Many developers don’t have a good grasp of long-understood fundamentals, because they have only learned what they, personally, have coded. I watch programmers continue to make mistakes that I read about more than two decades ago, in books that were already a decade old.

Conclusion

Learning by doing is not a bad thing. In fact, I’d be comfortable saying that you cannot learn to program without writing any code. But you will also never master programming without studying what has been done by others. This is really no different than any other discipline.

Another Side of Good Naming

In previous posts, I’ve talked about the importance of naming to make it easier for whoever maintains the code. One thing I’ve left out is the importance of not misusing standard names.

Naming Fail

In a previous position, I was manipulating a collection of data when I hit a bug. After running some new code, the collection was a couple of items shorter than it had been. After quite a bit of troubleshooting, I tracked the problem to a method on the collection object named sort. I was somewhat surprised to find that this method removed items from the container while sorting. In fact, this was such a surprising result that I didn’t even check for it during most of the troubleshooting. What I discovered was that sort didn’t just sort the collection; it also removed duplicates.

Unfortunately, this violates the Principle of Least Surprise. Every (other) sort algorithm I have ever seen takes a list or array and returns another with the following properties:

  • Contains only elements from the original
  • Contains all elements from the original
  • Is ordered based on the supplied comparison

Since generating a unique sorted list is useful, many libraries have some form of uniq method. In a few cases, you might find a combined method or an option that can be supplied to the sort function. (An example would be the -u option for the Unix sort utility.)

Better Name

In the example, the method did exactly what it was designed to do. The original programmer even argued that we were never going to want the container to have duplicates in this system, so having a sort that didn’t remove duplicates was useless. (I also argued that we should have avoided inserting duplicates into the container in the first place, if there should never be duplicates.) Although he had a point, the name is still wrong. The method should have been called sort_and_uniq() or unique_sort() or even sort_u(). Any of those names would have served as a warning to any maintainer that the method does more than just sort.

Using sort as the name in that case was a bad choice, because it violates the expectations of the maintenance programmers.
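To make the difference concrete, here is a hypothetical Python sketch (not the original system’s code). Both functions behave identically; only the second one’s name warns the caller that elements can disappear.

    # Hypothetical illustration of the naming problem; both functions drop duplicates.
    def sort(items):
        """Misleading: despite the name, duplicate elements silently disappear."""
        return sorted(set(items))

    def unique_sort(items):
        """Same behavior, but the name advertises the de-duplication."""
        return sorted(set(items))

    print(sort([3, 1, 3, 2]))         # [1, 2, 3] -- surprising if you expected four items
    print(unique_sort([3, 1, 3, 2]))  # [1, 2, 3] -- exactly what the name promised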

Incident Handling

No matter how careful your testing, no matter how complete your review, there is a non-zero probability that releasing your new code will expose a problem. The last phase of risk management is handling a risk that actually manifests.

At the end of the last post, I mentioned the importance of post-release testing and the ability to roll back changes as ways of mitigating risk. This covers a particular set of circumstances:

  • an easy way to detect the problem
  • quick ability to turn off or back out changes
  • problem occurs a short time after the deploy
  • problem is not catastrophic

If the event is covered by this set of circumstances, and you catch the problem in time, and you back out the problem code, the impact is pretty small.

An Incident

What happens if these conditions aren’t met? Sooner or later every system has an incident. Either we did not protect well enough against the risks we knew of, a risk we hadn’t considered bit us, or a risk we had thought of turned out to have more impact than we expected. However it happened, the system is down or degraded to the point that customers/users notice. Now what do you do?

Part of risk mitigation is intelligently dealing with an incident. To do that, you need a plan. Dealing with an incident happens in three parts:

  • restore functionality
  • analyze the incident
  • prepare for the next incident

Restore Functionality

While dealing with the incident, we need to troubleshoot the symptoms and restore functionality quickly. On the other hand, we need to preserve as much information as possible that can be used to analyze the incident (this might require saving logs from systems before we rebuild them, or taking snapshots of critical information before restarting the system). It can be useful to have someone who is not actually working on the recovery take notes so that we have a record of what was done to recover. We will also need a system for communicating among all of the people working on the recovery. This could be everyone in one room, a chat session that everyone uses to communicate, a conference call, or a video conference that everyone joins. The mechanism matters less than the fact that there is one place to update the team.

In the best case, the system is restored by rolling back the last change or restarting a server. In these cases, only one or two people are needed to do the actual work. All information about the problem and the steps we performed to recover should still be carefully documented. It’s also a good idea to make sure that at least one other person is watching as changes are made. Having two people agree to a change makes mistakes in recovery slightly less likely.

Sometimes, no one has any idea why the problem has occurred or which symptoms actually matter. In this case, we may have multiple people investigating different parts of the system or different symptoms in an attempt to find the problem. It’s still a good idea to work in pairs and to keep the whole group apprised of any changes before you make them. In these cases, the notes will be even more important. During the various tests or experiments, we may not be positive which change solved the problem.

Shortly after the functionality is restored, we need to make certain:

  • all of the relevant symptoms are documented
  • all steps we took to resolve the issue are documented
  • all of the artifacts (that we did save) are archived someplace safe

These will be needed for the next stage.

Analysis

The next stage is as important as restoring functionality, even though people often skip it. Some short time after the incident (within days), someone (preferably several people) needs to take all of the information gathered during the incident and attempt to determine the cause(s) of the incident. This analysis should focus on what actually went wrong, how we could have prevented it, and how we will prevent it in the future. The goal of this process is to identify things we can do as part of development and deployment to reduce risks in the future.

Depending on your organization, you may do a formal FMEA (Failure Mode and Effects Analysis) or RCA (Root Cause Analysis). Or you might just have your team examine the evidence they collected during the incident and brainstorm some ways to prevent the incident from happening again.

Prepare

The analysis from the last section should result in some actions we can take. Some will be procedures we can put in place for better testing or code review. These may take the form of checklists or static analysis tools. We could schedule special training to spot certain kinds of errors. We might improve our testing to reduce the chances of something like this slipping through again.

Another potential area for change would be better monitoring of the production system with alerting to recognize the problems sooner. We might increase logging to allow us to troubleshoot more quickly.

The final area for actions would be the development of procedures and scripts that can be used to recover from a problem like this one as rapidly and safely as possible.

Conclusion

As long as there is change in a system or its environment, there will be risks. Some of those risks will result in an incident. We need to strive to learn from each incident to prepare for similar risks in the future.

Mitigating Risk

You’ve carefully assessed the risks in your new system. You’ve considered the implementation and eliminated bad implementation decisions and removed unnecessary features that involved extra risk. Considering the result, you note that there is still risk. So, what do you do? You try to mitigate the remaining risk.

Risk mitigation does not remove the risk itself, but attempts to:

  • Reduce the likelihood of the risk occurring
  • Reduce the impact of the risk, if it happens

If you think back to the risk assessment post, these were the two main aspects of the quantitative risk assessment. Looking at risk in this way gives a framework for thinking about mitigating risk.

Reduce Likelihood of Occurrence

One way to reduce risk is simply to reduce the chance that a particular problem will occur. The two approaches people normally use to reduce the likelihood of a bug occurring are:

  • try to detect bugs as quickly as possible
  • try to prevent putting them in

Since everyone makes mistakes, you need some system to catch mistakes before they go live. There have been a number of approaches for having people check each other’s code. We began with code review. Early approaches were very formal and heavy. Only very large development groups could manage one. More light-weight versions of peer review followed. Since programmers quickly noticed that some mistakes recurred quite a bit, they soon began to build Code Analysis tools looking for dangerous or unwise practices.

The eXtreme Programming (XP) methodology first suggested turning development best practices up to eleven. In the original books, code review was made continuous through Pair Programming. Testing your code was made extreme by using Test-Driven Development (TDD). Later, it became clear that TDD is more of a design approach than a testing approach. All of these practices provide ways to reduce the number of bugs that make it through the development process.

One fairly old practice was manual testing of the code. This was often effective, but it was hard to reproduce. It also becomes less useful across multiple versions of software, because people don’t do exactly the same steps each time. The obvious solution for repeatable tests is automated tests. These allow us to be sure that changes to the code do not break old functionality. Once you have an effective set of automated tests, it becomes tempting to run them all the time (or at least every time code is committed to the repository). This became the practice of Continuous Integration (CI).

Over time, we have found that all of these practices contribute to overall code quality, and none completely overlaps the others.

Reduce Impact

There are several ways that you can reduce the impact of risky changes. Making smaller changes where possible can (under some circumstances) reduce the impact of this part of the change (even if the whole change still has the same risk). Staged deploy/release of code can allow us to test the change in a near-production environment, catching problems before they affect production. You can also use a partial release or A/B testing to expose a limited number of users to the change. With a robust system for rolling back changes (either through feature flags or a blue/green release), we can quickly back out a change as soon as we detect a problem. That approach works particularly well if you have a robust post-deploy test to verify any changes.
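As a rough sketch of the feature-flag idea (the flag name, the checkout functions, and the environment-variable mechanism are all hypothetical), backing out a change becomes a configuration change rather than a redeploy:

    import os

    def old_checkout(order):
        """The known-good code path."""
        return {"order": order, "path": "old"}

    def new_checkout(order):
        """The risky new code path being rolled out."""
        return {"order": order, "path": "new"}

    def feature_enabled(name):
        # A real system might use a config service; an environment variable keeps the sketch simple.
        return os.environ.get(f"FEATURE_{name.upper()}", "off") == "on"

    def handle_checkout(order):
        # Flipping FEATURE_NEW_CHECKOUT back to "off" backs out the change immediately.
        if feature_enabled("new_checkout"):
            return new_checkout(order)
        return old_checkout(order)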

Some kinds of change lend themselves to shadow testing, where you run new code alongside the old code and compare the results to be sure that they remain consistent. In the beginning, you would run the new code in parallel, compare with the old code, and continue to use the old results. As confidence increases, you would switch over to the new code.
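A minimal sketch of shadow testing might look like the following; old_price and new_price are hypothetical stand-ins for the existing and candidate implementations, and callers always receive the trusted result:

    import logging

    logger = logging.getLogger("shadow")

    def old_price(order):
        """Existing, trusted implementation."""
        return order["quantity"] * order["unit_price"]

    def new_price(order):
        """Candidate implementation running in shadow mode."""
        return order["quantity"] * order["unit_price"]

    def price(order):
        result = old_price(order)
        try:
            shadow = new_price(order)
            if shadow != result:
                # Record mismatches for analysis; never let the new code affect the response.
                logger.warning("shadow mismatch: old=%s new=%s order=%s", result, shadow, order)
        except Exception:
            logger.exception("shadow implementation raised an error")
        return result

Once the mismatch log has stayed quiet for long enough, you can switch the returned value over to the new implementation.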

A well designed logging system allows monitoring the behavior of any changes and captures information that can be used to recognize and troubleshoot any particular problems.

Conclusion

None of these approaches really removes any risk. They may mitigate risk by spreading it out, limiting the requests that are impacted, or making it easy to recognize a problem and back out.

Eliminating Some Risk

Many people doing risk management assert that you cannot eliminate risk. Those people are partly right. If you are willing to modify the functionality of a program or system, you can eliminate some kinds of risk. Obviously, this is much easier early in the design or implementation of a system.

Trade-offs to Eliminate Risk

Part of risk management is comparing the benefit of a feature against the risks caused by its implementation. If a feature is risky, but critical to the function of the system, we obviously can’t eliminate it. On the other hand, a nice-to-have feature could be eliminated to reduce risk. Another approach is to reduce the accidental risk of a feature by changing its implementation. A medical system could use a randomly generated ID number instead of a patient’s social security number as a unique identifier. If you don’t have sensitive information, you cannot leak it.
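As a tiny sketch of that idea (the record layout is hypothetical), identifying patients by a random value means there is no social security number to leak:

    import uuid

    def new_patient_record(name):
        return {
            "patient_id": uuid.uuid4().hex,  # random and meaningless outside this system
            "name": name,
            # No SSN is stored, so a breach of this table cannot expose one.
        }

    print(new_patient_record("Jane Doe"))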

Functionality Trade-off

One easy-to-understand trade-off is how an e-commerce site handles credit card information. Assume that you are using a secure connection (HTTPS) to transfer this information; otherwise, the risks are much larger. Let’s propose three scenarios.

Scenario 1: Save Credit Card Information

One approach is storing credit card information, including the card verification number (CVV), with the user account information.

The benefits are mostly convenience and ability to up-sell.

  • Handle later purchase without asking for full information
  • Reduced barrier to purchases through convenience
  • Easy handling of refunds

Most of the downsides are unexpected and outside of your main business. The downsides from these risks could range from embarrassment to legal liability.

  • Attackers steal credit card information from you
  • A coding mistake causes extra charges on credit cards you hold
  • An insider steals card information or uses it to make purchases
  • Accidentally revealing credit card information through logs, backups, or screen display
  • Authentication mistake revealing credit card information from a different account
  • Loss of revenue from customers who don’t want to give their credit card number to yet another site

Scenario 2: Discard After Use

Another approach is using the information to immediately charge the customer and then discard all credit card information.

The sales and convenience benefits are much less in this approach, but the risks go down.

  • An attack on the system cannot release previous customer information
  • Charges to customer happen at the time of purchase
  • Coding mistake or malicious insider cannot compromise all customers

Most of the downsides of this approach are reduced convenience and lost sales rather than unexpected liabilities, although some exposure remains.

  • Subsequent orders require re-entering credit card information
  • Refunds become more awkward
  • Potential exposure of credit card information during processing
  • Potential delay of a sale due to problem with card processing
  • Loss of revenue from customers who don’t want to give their credit card number to yet another site

Scenario 3: Off-load Payment Processing

A final approach is pushing the risks of handling credit card information onto a third party. Many e-commerce sites use PayPal for this purpose. The assumption is that the other party is better able to deal with the risk, due to their focus on payment processing as their core business. The benefits include:

  • No direct risk dealing with credit card information
  • Attack on the system cannot release any customer information
  • Charges to customer are immediate
  • Coding mistake or malicious insider cannot compromise all customers

The remaining downsides are mostly outside of your direct control.

  • External processing system has payment information
  • Loss of revenue from customers that don’t want to use the suggested payment processor

Conclusion

The important thing to note is that there is no single best scenario. Instead, you need to be aware of both the risks and benefits to make the appropriate trade-offs for the business.

Inherent Risk versus Accidental Risk

One way to think about risk is to decide if a feature is naturally risky, or if the risk is a result of the implementation of the feature. Stealing some terminology from the description of software complexity, I call these Inherent Risk and Accidental Risk.

Risk Creep

Sometimes unnecessary risk creeps into a system through implementation details: saving private information for troubleshooting purposes, saving financial information for convenience, accessing private data unnecessarily. Many of these kinds of risk come from “wouldn’t it be interesting if we…”-type brainstorming. Those ideas sometimes uncover very important use cases that could mean the difference between a successful system and a flop. But, much of the time, the result is unacknowledged risk.

Risk of Implementations

Most of the time, people either don’t acknowledge the risk of these features, or don’t explore less risky implementations.

Many physical stores use a phone number or email address as a unique identifier for rewards programs. This has the added benefit of giving them a way to market to you. However, they are now responsible for keeping this information private and secure. It is probably not as risky as credit card information, but simpler personal information could still be damaging/embarrassing. In the past, these companies used loyalty cards, which had a unique id attached. By turning to personal information, each store is now at risk for personal information leakage.

The feature is the ability to recognize an individual customer. The implementation determines the level of risk of the feature.

Password Risk

Probably the classic example of accidental risk is storing passwords for login. Some systems take the simple approach of storing the username and password in a database. When the user logs in, the system compares the username and password with the entries in the database and quickly determines if the user is known. Most developers know by now that we should not be storing passwords in plain text in the database. If someone manages to get access to the database, they can impersonate anyone.

The key insight is that we need to verify that the user knows their password. We don’t actually need the plain-text password to do that. Hashing the password allows us to test the supplied password against the stored hash and identify if the user has access. This requires slightly more complex code, but we have reduced the accidental risk of the implementation.
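A minimal sketch of that approach using Python’s standard library (the helper names are mine, not from any particular system) might look like this:

    import hashlib
    import hmac
    import secrets

    def hash_password(password):
        """Return (salt, digest) for storage; the plain-text password is never stored."""
        salt = secrets.token_bytes(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return salt, digest

    def verify_password(password, salt, stored_digest):
        """Re-hash the supplied password and compare in constant time."""
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return hmac.compare_digest(candidate, stored_digest)

    salt, digest = hash_password("correct horse battery staple")
    assert verify_password("correct horse battery staple", salt, digest)
    assert not verify_password("wrong guess", salt, digest)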

Username Logging

One of my favorite examples of accidental risk comes from logging failed login attempts. You could do this one of three ways:

  1. Log the invalid username and password pair
  2. Log the username for the failed attempt
  3. Log the fact that there is a failed attempt

In the first case, the accidental risk comes from the fact that you might end up capturing a username and password that is wrong because of a simple typo. This means the log contains information that would greatly simplify an attack on the account.

The second case sounds like a good compromise, except for the accidental risk of someone entering their password in the username field. This might happen from fumbling the tab key (meant to move to the next field), or from an overzealous enter after the username (which I’ve seen submit the login and then redisplay the form, so the password ends up typed into the username field). So, once again, the log can be used to simplify a login attack.

The final case stores only what we need to know, a failure occurred. This gives us useful information with minimal risk.
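A minimal sketch of that third option (the names are illustrative): the log captures that a failure happened and when, and nothing else.

    import logging

    logging.basicConfig(level=logging.INFO,
                        format="%(asctime)s %(levelname)s %(message)s")
    logger = logging.getLogger("auth")

    def record_failed_login():
        # Only the fact of the failure (plus the timestamp) is recorded --
        # nothing that would help an attacker guess credentials.
        logger.warning("failed login attempt")

    record_failed_login()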

Types of Risk Assessment

When developing software, there are many different dimensions of risk you might consider. There is an enormous amount of information available on Risk Assessment; consider this post just a quick skim over the topic. Depending on your understanding of risk (or your paranoia), trying to assess risk can be a difficult task. There are several approaches that people use to assess risk. Most of the names for these approaches are not official; they are just what I think of when trying to describe what I have seen.

Ignore and Hope

Many people and companies use this approach. They might say, “No one is going to attack us, we’re not a big enough target.” Or, “All of our users know this is just a hobby project and they shouldn’t trust us with real data.” If you are working on anything more than a quick script that only you run, on your own system, you probably have more risk than you think.

Gut Check

The next level of risk assessment is a simple exercise in asking “What could go wrong?” Some experienced developers have developed a feel for changes or features that make them nervous; those things are definitely more risky. These developers often ignore risks until they see something that triggers those concerns. Examples might include:

  • Restricted access, through log in, etc.
  • Credit card handling
  • More than n users (where n is 1,000, or 10,000, or something)
  • Big companies start using their software

For many, the transition from ignore and hope to gut check comes from either experiencing a failure, or hearing about one in a similar system or industry.

SWAG

This is where people start being intentional about identifying risk. They may begin with a list of things that have happened in the past, or a list like the OWASP Top 10. The main difference between this level and the gut check is that the developers are actively thinking about risks and trying to guess how vulnerable the software is.

The risk assessments from this stage are usually not repeatable. The assessments depend very much on the experience of people doing the assessment. Since the process is not well defined, it is very hard to teach new people to do assessments.

Qualitative Risk Assessment

Some questions you might ask to decide how to think about risk in your software include:

  • Is the software used by just you or by others?
  • Are the others technical people or users?
  • Is the software running in a production environment?
  • Does the software handle important information?
  • Does the software deal with money or payments?
  • Do people depend on the software to do their jobs?
  • Do people’s lives or health depend on the software?

At this level of risk assessment, we are beginning with the software and analyzing the ways it interacts with the environment. These questions are often informed by a list of risks like the OWASP Top 10. The team looks more carefully at their own data, usage patterns, and environment. Most of the focus is on what can go wrong.

This approach is more repeatable than the previous versions, but still misses an important dimension. Since this stage does not focus on the likelihood of failure, it can result in too much attention being paid to unlikely risks or too little attention being paid to very important risks. A couple of unlucky breaks can derail a team’s risk management strategy for quite a while.

Quantitative Risk Assessment

People who are serious about risk management suggest that the least you should do for risk assessment is to identify every risk you can and score each one on two dimensions: likelihood of occurrence and impact of failure. These dimensions give the team a framework to use to determine which risks to focus on. If a risk is extremely likely to happen and would result in the company failing, it’s obviously more important than an extremely unlikely event that might, at worst, expose some innocuous data.

Different systems exist for rating the dimensions. For likelihood, you could use a fuzzy categorization such as: Rare, Unlikely, Normal, Likely, and Very Likely. Or, you could calculate the odds of the event occurring, resulting in an actual number. Likewise, you could list impacts in different categories: Data Loss, Money Loss, Reputation Loss, Attack on User’s Systems, etc. You could even measure impact in terms of lost revenue or fines.
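One common convention (not the only one) is to multiply the two scores and rank risks by the product. A small sketch, with made-up scales and hypothetical example risks:

    # Likelihood and impact are scored 1 (low) to 5 (high); the risks are invented examples.
    risks = [
        {"name": "credit card data stolen",        "likelihood": 2, "impact": 5},
        {"name": "report page renders slowly",     "likelihood": 4, "impact": 1},
        {"name": "bad deploy takes the site down",  "likelihood": 3, "impact": 4},
    ]

    # Rank by likelihood x impact and spend mitigation effort from the top down.
    for risk in sorted(risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True):
        score = risk["likelihood"] * risk["impact"]
        print(f"{score:>2}  {risk['name']}")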

Conclusion

Whichever method you use, assessing risk is a critical part of managing risk.