After discussing the Single Responsibility Principle, I'd like to talk about the Open-Closed Principle. Let's start with a definition...

Definition

First articulated by Bertrand Meyer in his 1988 book Object-Oriented Software Construction, the Open-Closed Principle (OCP) states:

Software entities should be open for extension, but closed for modification.

"Open for extension" means a software module's behaviour can be extended; "closed for modification" means those enhancements don't require changing existing code. How is that possible? Meyer's original notion, aimed at C++ development where multiple implementation inheritance is allowed, was to use inheritance and derived classes. With the rise of languages like Java that don't support multiple (implementation) inheritance, the recognised way to adhere to OCP, as espoused by Robert Martin, was recast in terms of fixed abstractions (abstract base classes or interfaces) and polymorphic concrete classes.

Sometimes such an approach is referred to as a "pluggable architecture".
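To make that concrete, here's a minimal sketch (not from the original article; the exporter names are purely illustrative) of the interface-plus-polymorphism shape this takes in C#:

```csharp
using System;

namespace PluggableDemo
{
    // Callers depend only on this abstraction, which stays closed for modification.
    public interface IExporter
    {
        void Export(string data);
    }

    // Extension happens by adding new implementations, not by editing existing code.
    public class CsvExporter : IExporter
    {
        public void Export(string data) => Console.WriteLine("csv: " + data);
    }

    public class JsonExporter : IExporter
    {
        public void Export(string data) => Console.WriteLine("json: " + data);
    }
}
```

Adding an XML exporter later means adding a third class; nothing that already works needs to be reopened.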

Rationale

Why would we want to do this? Because change is inevitable so we should plan for it. Let me explain...

Software systems are often broken down into small components. This divide-and-conquer approach allows developers to focus on writing a number of discrete pieces of code that can be assembled into a working system. By focusing on smaller chunks of functionality we can keep our focus narrow and re-usability high. However, given the investment we make in developing such components, and the realistic expectation that changes in requirements are inevitable, it is prudent to design system components in such a way as to minimise the system impact of changes to these components in the face of new or altered requirements. This is the ultimate goal of OCP - being able to add new code without screwing up what we know already works.

The benefits of making code OCP-compliant are several:

  • the component can be extended as requirements change, yet the impact of such changes is more predictable
  • maintenance is easier
  • existing test routines and code can be re-used
  • the risk associated with system changes is lower

If enhancements or changes are required later, the developer only needs to add new code, not change existing code. This means that maintaining the software in future should be considerably easier, as you don't have to worry as much about breaking existing code! That drastically reduces your risk profile for code changes and makes their system impact more predictable.

Consider the following example: you need to develop an analytics library for Front Office applications and quickly bang out some code like this...

using System;

namespace OptionPricing.BadCodeExample
{
    public enum OptionType
    {
        Vanilla,
        Currency,
        Lookback,
        Binary,
        Barrier,
        ForwardStart,
    }

    public class ExoticOptionsPricingCalculator
    {
        // other functionality elided...

        public static double GetPrice(IOption option)
        {
            switch (option.Type)
            {
                case OptionType.Vanilla:
                    return BlackScholesPrice(option);

                case OptionType.Currency:
                    return GeneralizedBlackScholes(option);

                case OptionType.Barrier:
                    return StandardBarrierPrice(option);

                case OptionType.Binary:
                    return ReinerRubinsteinPrice(option);

                case OptionType.ForwardStart:
                    return RubinsteinPrice(option);

                case OptionType.Lookback:
                    return FloatingStrikeLookbackPrice(option);

                default:
                    throw new NotSupportedException("Don't know how to price this type of option");
            }
        }
    }
}

As you can see, we have a large switch statement on the option's type to pick a pricing algorithm. The above code doesn't conform to the OCP. The problem is that when new pricing methods and option types come along, which they inevitably will, we need to bust open this method and make changes. We'd rather isolate our changes to just the new functionality being added.

So with that in mind, we refactor the code.

Here we use a common interface (or an abstract base class, if you prefer) to hold the common details of each option type, and a number of concrete classes to model case-specific behaviour, which in this case means the pricing algorithms. Our analytics engine now becomes considerably cleaner...

namespace OptionPricing
{
    public class AnalyticsEngine
    {
        // Method to find out the price of the option specified
        public static double GetPrice(IOption option)
        {
            return option.CalculatePrice();
        }
    }
}
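The supporting types aren't shown above, so here's a sketch of what IOption and a couple of concrete implementations might look like. The CalculatePrice() method comes from the engine code above; everything else (property and class names, the elided pricing maths) is illustrative assumption:

```csharp
using System;

namespace OptionPricing
{
    // The shared abstraction: every option knows how to price itself.
    public interface IOption
    {
        double CalculatePrice();
    }

    // One concrete option type; the actual pricing maths is elided.
    public class VanillaOption : IOption
    {
        public double Spot { get; set; }
        public double Strike { get; set; }

        public double CalculatePrice()
        {
            // A Black-Scholes implementation would live here.
            throw new NotImplementedException();
        }
    }

    public class BarrierOption : IOption
    {
        public double CalculatePrice()
        {
            // A standard barrier pricing implementation would live here.
            throw new NotImplementedException();
        }
    }
}
```

Each option type owns its own pricing logic, so the engine never needs to know which concrete class it's holding.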

What this subtle refactoring has done is make your analytics engine OCP-compliant with respect to new option types. When new exotic option types, like a Cliquet/Ratchet option, come along that you need to price, you will not need to change the GetPrice() method in the AnalyticsEngine. Instead, all your changes will be contained inside a new class. Consolidating code in this fashion means you don't have to keep massive switch statements around the codebase and, more importantly, you don't have to manually track them all down when a new option variant enters the picture. Furthermore, you'll be safe in the knowledge that you aren't causing unnecessary side effects, because your changes are isolated to the new functionality being introduced. That is the essence of the OCP.
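For instance, supporting that Cliquet option might look something like this (a sketch; the class is hypothetical and the pricing maths elided):

```csharp
namespace OptionPricing
{
    // Supporting a new option type is purely additive:
    // AnalyticsEngine.GetPrice() is not touched.
    public class CliquetOption : IOption
    {
        public double CalculatePrice()
        {
            // The cliquet/ratchet pricing algorithm would live here.
            throw new System.NotImplementedException();
        }
    }
}
```

One new file, zero edits to existing code: exactly what "closed for modification" promises.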

Other Parts in the Series
Design Guidelines Part.1: Single Responsibility
Design Guidelines Part.3: The Liskov Substitution Principle
