
Making it Easy for Phone Surfers

Usability experts often espouse many cardinal rules such as:

  • People hate filling in forms.
  • People don't read long bits of text you put on the screen or in an email.
  • People don't like to give you all their personal details just to try your service.
  • People don't like to download something just to be able to try your service.
  • People don't like too many choices.
  • People prefer a pricing structure that is easy to understand.

They say that all of the above are barriers to entry and sources of user confusion that you need to eliminate. Perhaps they are true, but you might want to get some data before making big bets on them.


But some things are just damn obvious:

Data entry of URLs on a regular phone keyboard is a major pain.

For English, you have to press a key up to 4 times to get the letter you want, and because the keypad uses a simple alphabetical layout rather than putting the most popular characters in position 1 on each key, you need multiple key presses more often than not (see the Text Message Outrage problem for one way to solve this). The damn keys are so small you can easily hit 3 different ones with "fat thumb syndrome". On top of that, URLs always contain some non-alphanumeric characters: "http://www.necessaryandsufficient.net" has a colon, 2 periods, and 2 forward slashes. Granted, advanced users will know that they can omit the "http://www." prefix, assuming the website operator knows what they are doing with DNS, but there is still likely to be a forward slash in the URL, which almost always requires shifting to an alternate character display.

These aren't new problems, so companies have tried to address them with several innovations, but many of them fall short:

Predictive text and auto-complete are based on language dictionaries, which don't contain trade names, so unfortunately they aren't much help here.

Voice recognition is a promising technology, but it hasn't quite got there yet.

On-screen keyboards like those found on the iPhone and HTC Magic have error-correction algorithms so you don't accidentally hit 2 or more keys at once, but that still doesn't help with a lengthy URL.

But there are 2 innovations that are really useful:

URL shortening services like tinyurl and bit.ly help website operators transform longer URLs into much shorter ones. Mobile surfers will certainly appreciate that, and a short URL uses fewer precious characters if you distribute it via an SMS-based service like Twitter. Here's my bit.ly URL:

http://bit.ly/HEX8I

The other nifty innovation is QR codes. I've seen these at conferences, on billboards, in department stores, in magazines and on price tags. Essentially they are 2-dimensional bar codes that encode some information, like a URL. The idea is that the mobile surfer uses the camera on their phone to take a picture of the bar code. Software on the phone converts the image to the textual string that represents the URL and pops open the phone's web browser at that URL. So all the user has to do is take a photo - no data entry needed at all!

If you have an Android phone, like I do, grab the ZXing barcode scanner application from the app store, fire it up, and put the cross-hairs on the bar code above. Within seconds it should acquire the code and give you the decoded text with some options. The iPhone has similar apps, but the iPhone's camera quality isn't great, so you'll need to be closer to the QR code.

QR codes can contain 4000-odd alphanumeric characters, which is more than enough for URLs, and they are royalty-free even though they are patented by the Japanese corporation Denso Wave. The magic behind them involves Reed-Solomon error correction, the same technology used by your DVD player. [That reminds me of a crypto scheme I had to implement a while ago called Shamir's Secret Sharing, but more on that another time.]
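If you want to put a QR code on your own page, poster or price tag, generating one only takes a few lines. Here's a minimal C# sketch using the ZXing.Net port of the same zxing library (the BarcodeWriter API, the 300x300 pixel size and the file name are my assumptions, not something from this post):

using System.Drawing;   // classic Bitmap support
using ZXing;            // ZXing.Net port of the zxing barcode library
using ZXing.Common;

public class QrGenerator
{
    // Encode a (preferably shortened) URL into a QR code image on disk.
    public static void WriteQrCode(string url, string outputPath)
    {
        var writer = new BarcodeWriter
        {
            Format = BarcodeFormat.QR_CODE,
            Options = new EncodingOptions { Width = 300, Height = 300, Margin = 2 }
        };

        using (Bitmap qrImage = writer.Write(url))
        {
            qrImage.Save(outputPath);   // drop the image onto your page or poster
        }
    }
}

// Usage: QrGenerator.WriteQrCode("http://bit.ly/HEX8I", "qr.png");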

So the message is clear. If you want to be kind to your mobile users, give them QR codes to photograph, or at the very least give them a shortened URL to minimise data entry and maximise their ability to remember it!


Design Guidelines Part.3: The Liskov Substitution Principle

Definition
LSP was defined way back in 1988 by Dr. Barbara Liskov, who incidentally won the 2008 Turing Award, perhaps the most prestigious award in computer science. Her original definition was:

“If for each object o1 of type S there is an object o2 of type T such that for all programs P defined in terms of T, the behavior of P is unchanged when o1 is substituted for o2 then S is a subtype of T.”

But Robert Martin offered a much more terse definition:

Subtypes must be substitutable for their base types

What this is saying is that a derived class should honor the contracts made by its parent classes. In other words, if a method accepts a base class reference, then it should be able to accept an instance of any class derived from that base class without affecting the functioning of the method.

Motivation
Object-oriented programmers will be familiar with the concepts of abstraction and polymorphism. In statically-typed O-O languages, like C++, Java and C#, the key mechanism to achieve polymorphism is inheritance. LSP is a guiding principle that restricts how inheritance is used such that the OCP is not violated.

Technically speaking, the type of polymorphism this principle addresses is inclusion polymorphism. Inclusion polymorphism occurs in languages that allow subtypes and inheritance whereby an instance of a subtype can be manipulated by the same functions that operate on instances of the supertype (parent class type). This implies a reference (or pointer) of the parent class type can also refer to any child object, meaning that the type of the object being referred to must be determined at runtime. Since the type of object cannot be determined until runtime, and virtual methods are defined per type, it implies that a method call may be executed either in the parent or the child class and this dispatch decision cannot be made until runtime.
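To make that concrete, here's a minimal C# sketch of inclusion polymorphism (the Shape/Circle/Square names are purely illustrative): a method written against the base type works unchanged for any subtype, and the override that actually runs is chosen at runtime.

using System;

public abstract class Shape
{
    // The contract every subtype must honour.
    public abstract double Area();
}

public class Circle : Shape
{
    private readonly double _radius;

    public Circle(double radius)
    {
        _radius = radius;
    }

    public override double Area()
    {
        return Math.PI * _radius * _radius;
    }
}

public class Square : Shape
{
    private readonly double _side;

    public Square(double side)
    {
        _side = side;
    }

    public override double Area()
    {
        return _side * _side;
    }
}

public static class Reporting
{
    // Written once against the supertype; which Area() actually runs is
    // decided at runtime from the object's concrete type.
    public static void PrintArea(Shape shape)
    {
        Console.WriteLine(shape.Area());
    }
}

// Usage: Reporting.PrintArea(new Circle(2.0)); Reporting.PrintArea(new Square(3.0));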

The Liskov Substitution Principle helps to guarantee inclusion polymorphism, which is a good thing because inclusion polymorphism improves reuse. All code that references the superclass can be reused referencing a subclass.

Why Follow It?
By adopting the LSP, the correctness of a method accepting base class references is guaranteed no matter which subtype is passed in. Furthermore, since LSP is actually a special case of the Open-Closed Principle, every time you violate the LSP you violate the OCP as a result - but not the other way round. This relationship between OCP and LSP makes LSP violations easier to spot, since developers tend to understand OCP much more readily than they do LSP.

Say you develop a class hierarchy using inheritance, and you have a method that accepts a base class reference. If passing in a derived class instance gives unexpected results, that's a strong sign that the inheritance chain and the object model are incorrect. Remember that a class must fulfill an "is-a" relationship in order to inherit from another class, and this relationship is about behavior, not data! In this sense, LSP is good at exposing faulty abstractions.

LSP is the reason it is hard to design good, deep hierarchies of subclasses, and the reason to consider favouring composition over inheritance. (The strategy pattern is a prototypical example of the flexibility of composition over inheritance.)
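As a small, hypothetical C# sketch of that composition-over-inheritance (strategy) idea: rather than subclassing Checkout for every discount policy, the varying behaviour is injected as a collaborator.

// The varying behaviour is captured behind a small interface...
public interface IDiscountStrategy
{
    decimal Apply(decimal price);
}

public class NoDiscount : IDiscountStrategy
{
    public decimal Apply(decimal price) { return price; }
}

public class SeasonalDiscount : IDiscountStrategy
{
    public decimal Apply(decimal price) { return price * 0.9m; }
}

// ...and the Checkout *has a* strategy rather than *being* a special kind of checkout.
public class Checkout
{
    private readonly IDiscountStrategy _discount;

    public Checkout(IDiscountStrategy discount)
    {
        _discount = discount;
    }

    public decimal Total(decimal price)
    {
        return _discount.Apply(price);
    }
}

// Usage: new Checkout(new SeasonalDiscount()).Total(100m);  // composition, no deep hierarchy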

The whole point of the Liskov Substitution Principle is really to make you think clearly about the expected behavior and expectations of a class before you derive new classes from it.

Obligatory Example
Common examples for violation of LSP are Rectangle::Square, Circle::Ellipse, etc. Rather than reproduce those here, take a look at the examples in Robert Martin's paper.

Design By Contract
The Liskov Substitution Principle is closely related to the design by contract methodology, which provides rules telling us the conditions under which it is acceptable to substitute a derived class for a base class:

  • Preconditions cannot be strengthened in a subclass.
  • Postconditions cannot be weakened in a subclass.

In other words, a sub-type can only have equal or weaker pre-conditions and equal or stronger post-conditions than its base class. Put differently, derived methods should expect no more and provide no less.
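Here's a hypothetical C# sketch of the precondition rule (the Account/Deposit names are mine, not from the post): the base class accepts any non-negative amount, so a subclass that quietly demands a larger minimum strengthens the precondition and breaks callers written against the base type.

using System;

public class Account
{
    // Contract: any non-negative amount is acceptable (precondition: amount >= 0).
    public virtual void Deposit(decimal amount)
    {
        if (amount < 0) throw new ArgumentOutOfRangeException("amount");
        // ... credit the account ...
    }
}

public class PremiumAccount : Account
{
    // LSP/DbC violation: the precondition has been strengthened, so code that
    // happily deposited 50 into an Account now blows up when handed a PremiumAccount.
    public override void Deposit(decimal amount)
    {
        if (amount < 100) throw new ArgumentOutOfRangeException("amount", "Minimum deposit is 100");
        // ... credit the account ...
    }
}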

Violations
Signs of LSP violations include:

  • A subclass that does not keep all the externally observable behavior of its parent class.
  • A subclass that modifies, rather than extends, the externally observable behavior of its parent class.
  • A subclass that throws exceptions in an effort to hide certain behavior defined in its parent class.
  • A subclass that overrides a virtual method defined in its parent class with an empty implementation in order to hide certain behavior.

Method overriding in derived classes is probably the biggest cause of LSP violations, so all method overrides should be done with great care to avoid them.
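As a quick sketch of the last two warning signs (using the usual Bird/Penguin illustration rather than anything from this post): the override below hides inherited behaviour by throwing, so any code that happily asks a Bird to fly breaks the moment a Penguin sneaks in.

using System;

public class Bird
{
    public virtual void Fly()
    {
        // ... flap wings, gain altitude ...
    }
}

public class Penguin : Bird
{
    // LSP violation: instead of honouring the parent's contract, the override
    // throws to hide behaviour the subclass cannot support. Callers written
    // against Bird now need to know about Penguin - which also breaks OCP.
    public override void Fly()
    {
        throw new NotSupportedException("Penguins can't fly");
    }
}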

In addition, the principle implies that no new exceptions should be thrown by methods of the subclass, except where those exceptions are themselves subtypes of exceptions thrown by the methods of the superclass. (think: co-variance and contra-variance).

A function using a class hierarchy that violates the principle takes a reference to a base class, yet must have knowledge of the subclasses. Such a function violates the open/closed principle because it must be modified whenever a new derivative of the base class is created, and that really sucks because neither the compiler nor your existing unit tests will find these cases for you - you have to become a UN weapons inspector, remember exactly what you need to look for, and go hunt them down manually!

Final Advice
In the words of Robert Martin, Agile Principles, Patterns and Practices in C# (P.149):

"A good engineer learns when compromise is more profitable than perfection. However, conformance to LSP should not be surrendered lightly. The guarantee that a subclass will always work where its base classes are used is a powerful way to manage complexity. Once it is forsaken, we must consider each subclass individually."

Other Parts in the Series
Design Guidelines Part.1: Single Responsibility
Design Guidelines Part.2: Open-Closed Principle
References
Robert Martin's Original Paper


Design Guidelines Part.2: Open-Closed Principle

After discussing the Single Responsibility Principle I'd like to talk about the Open-Closed Principle. Let's start with a definition...

Definition

First touted by Bertrand Meyer in the late 80s, the Open-Closed Principle (OCP) states:

Software entities should be open for extension, but closed for modification.

"Open for extension" means the software module can have its behaviour extended and "closed for modification" means that these enhancements don't change existing code. How is that possible? Meyer's original notion, aimed at C++ development where multiple implementation inheritance is allowed, was to use derived classes and inheritance. Given the popularity of languages like Java that don't support multiple (implementation) inheritance the recognised practice, as espoused by Robert Martin, to adhere to OCP was redefined to use immutable abstract base classes or interfaces, and polymorphic concrete classes.

Sometimes such an approach is referred to as a "pluggable architecture".

Rationale

Why would we want to do this? Because change is inevitable and we should plan for it. Let me explain...

Software systems are often broken down into small components. This divide-and-conquer approach allows developers to focus on writing a number of discrete pieces of code that can be assembled into a working system. By focusing on smaller chunks of functionality we can keep our focus narrow and re-usability high. However, given the investment we make in developing such components, and the realistic expectation that changes in requirements are inevitable, it is prudent to design system components in such a way as to minimise the system impact of changes to these components in the face of new or altered requirements. This is the ultimate goal of OCP - being able to add new code without screwing up what we know already works.

The benefits of making code OCP-compliant are several:

  • the component can be extended as requirements change, yet the impact of such changes is more predictable
  • maintenance is easier
  • previous testing routines/code can be re-used
  • the risk associated with system changes is lower

If enhancements or changes are required at a later time, the developer only needs to add new code, not change existing code. This means that maintaining the software in the future should be considerably easier, as you don't have to worry as much about breaking existing code! That drastically reduces your risk profile for code changes and makes them more predictable in terms of system impact.

Consider the following example: you need to develop an analytics library for Front Office applications and quickly bang out some code like this...

using System;

namespace OptionPricing.BadCodeExample
{
    public enum OptionType
    {
        Vanilla,
        Currency,
        Lookback,
        Binary,
        Barrier,
        ForwardStart,
    }

    public class ExoticOptionsPricingCalculator
    {
        // other functionality elided...

        // IOption exposes the option's Type and its pricing parameters (definition elided)
        public static double GetPrice(IOption option)
        {
            switch (option.Type)
            {
                case OptionType.Vanilla:
                    return BlackScholesPrice(option);

                case OptionType.Currency:
                    return GeneralizedBlackScholes(option);

                case OptionType.Barrier:
                    return StandardBarrierPrice(option);

                case OptionType.Binary:
                    return ReinerRubinsteinPrice(option);

                case OptionType.ForwardStart:
                    return RubinsteinPrice(option);

                case OptionType.Lookback:
                    return FloatingStrikeLookbackPrice(option);

                default:
                    throw new Exception("Don't know how to price this type of option");
            }
        }
    }
}

As you can see, we have a large switch statement around the option's Type to pick a pricing algorithm depending on the option type. The above code doesn't conform to the OCP. The problem is that when new pricing methods and option types come along, which they inevitably will, we need to bust open this method and make changes. We'd rather isolate our changes to just the new functionality being added.

So with that in mind, we refactor the code to use something like this:
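(A minimal sketch of what that refactoring might look like; the exact members of IOption and the concrete classes shown here are illustrative - all the post really requires is that each option knows how to price itself.)

namespace OptionPricing
{
    // Common contract: every option knows how to price itself.
    public interface IOption
    {
        double CalculatePrice();
    }

    // One concrete class per option type encapsulates its own pricing algorithm.
    public class BinaryOption : IOption
    {
        // strike, expiry, volatility and other parameters elided...

        public double CalculatePrice()
        {
            return ReinerRubinsteinPrice();
        }

        private double ReinerRubinsteinPrice()
        {
            // Reiner-Rubinstein binary option pricing elided...
            return 0.0;
        }
    }

    // VanillaOption, BarrierOption, LookbackOption, etc. follow the same pattern.
}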

Here we use a common interface (or an abstract base class if you prefer) to hold the common details of each option type. We then use a number of concrete classes to model the case-specific behaviour - in this case, the pricing algorithms our Analytics engine uses. The engine itself now becomes considerably cleaner...

namespace OptionPricing
{
    public class AnalyticsEngine
    {
        // Method to find out the price of the option specified
        public static double GetPrice(IOption option)
        {
            return option.CalculatePrice();
        }
    }
}

What this subtle refactoring has done is make your analytics engine OCP-compliant with respect to new option types. When a new exotic option type, like a Cliquet/Ratchet option, comes along that you need to price, you won't need to change the GetPrice() method in the AnalyticsEngine. Instead, all your changes will be contained inside a new class. Consolidating code in this fashion means you don't have to keep massive switch statements around the codebase and, more importantly, you don't have to manually track them all down when a new option variant enters the picture. Furthermore, you'll be safe in the knowledge that you aren't causing unnecessary side effects, because your changes are isolated to the new functionality being introduced. That is the essence of the OCP.
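To make that concrete (the CliquetOption class below is illustrative), supporting a new option type is purely additive - a new class implementing the existing interface, with no edits to AnalyticsEngine or to any existing pricing code:

namespace OptionPricing
{
    // New requirement, new code - nothing existing needs to change.
    public class CliquetOption : IOption
    {
        // reset schedule, local caps/floors and other parameters elided...

        public double CalculatePrice()
        {
            // Cliquet/Ratchet pricing algorithm elided...
            return 0.0;
        }
    }
}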

Other Parts in the Series
Design Guidelines Part.1: Single Responsibility
Design Guidelines Part.3: The Liskov Substitution Principle
