Lifted Conversions and Operators in C#

Lifted Conversions
These apply to nullable value types, which were introduced in .NET 2.0. Prior to this, reference types were of course nullable, but value types like int, long, double, and decimal were NOT, making it hard to figure out whether a value had been set without resorting to initializing it with some "magic value" like -1. The language designers implemented Nullable&lt;T&gt; as a struct that wraps an underlying value type, T, and conveniently there is a generic constraint to restrict T to value types (where T : struct). Now, rather than re-implement all the implicit conversions between value types for their nullable counterparts, they adopted a policy of lifting conversions from the underlying value types. This is best illustrated with some code doing implicit conversions:

    1 int i = 1;
    2 long j = 1;

    4 int? ni = 1;
    5 long? nl = 1;

    7 j = i;      // fine
    8 nl = ni;    // fine

   10 i = j;      // Compile error - downcasts need to be explicit
   11 ni = nl;    // Compile error - downcasts need to be explicit

   13 ni = i;     // fine
   14 ni = j;     // Compile error - no implicit cast from long to int?

Line 10 shows the expected case: a downcast from a long to an int must be explicit, so this line causes a compilation error. Line 11 attempts an implicit cast from a long? to an int?, and because there is no implicit conversion from long to int, it too causes a compilation error. This is in stark contrast to Lines 7 and 8, where the opposite conversions are attempted: int to long and int? to long?. Line 7 works as expected because it is a widening conversion, and Line 8 works because a lifted conversion (from int to long) is being used.

Line 13 works fine too because a non-nullable value type has a narrower range of values than its corresponding nullable value type (every int is a valid int?, but not vice versa), so a conversion from T to T? is considered safe and can be done implicitly. Line 14 fails because the underlying conversion from long to int is narrowing, so its lifted form from long to int? cannot be implicit either.

Note that lifted conversions apply not just to the predefined value types but also to any implicit conversions defined on user-defined value types (structs). These can be defined using code such as this...

public struct Rotation
{
    public int Degrees { get; set; }

    // to implicitly cast an int to a Rotation
    public static implicit operator Rotation(int value)
    {
        return new Rotation { Degrees = value % 360 };
    }
}

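Given a conversion like that, the compiler lifts it automatically for the nullable forms. A minimal sketch (the Rotation struct is repeated here so the snippet stands alone):

```csharp
using System;

public struct Rotation
{
    public int Degrees { get; set; }

    // to implicitly cast an int to a Rotation
    public static implicit operator Rotation(int value)
    {
        return new Rotation { Degrees = value % 360 };
    }
}

class Program
{
    static void Main()
    {
        int? angle = 450;
        Rotation? r = angle;                // lifted conversion: int? -> Rotation?
        Console.WriteLine(r.Value.Degrees); // 90

        angle = null;
        r = angle;                          // null propagates through the lifted conversion
        Console.WriteLine(r.HasValue);      // False
    }
}
```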
Lifted Operators
Along similar lines as the lifted conversions on nullable value types, lifted operators allow predefined and user-defined operators that work for non-nullable value types also to work for nullable forms of those types. In other words, operators defined on T are available to T? where T is a value type (struct).

I've often heard programmers allude to this as a form of inheritance, but that isn't correct. Both user-defined and predefined operators and conversions are defined as static methods (see below) and as such are not inherited - Nullable&lt;T&gt; does not derive from T. It's because of this that the concept of lifted operators is employed. Consider this simple value type with the addition operator overloaded:

public struct Rotation
{
    public int Degrees { get; set; }

    public static Rotation operator +(Rotation a, Rotation b)
    {
        return new Rotation { Degrees = (a.Degrees + b.Degrees) % 360 };
    }
}

When someone comes along after you and uses a Rotation?, your overload of the + operator effectively becomes this:

    // lifted operator for Rotation? is equivalent to...
    public static Rotation? operator +(Rotation? a, Rotation? b)
    {
        if (a == null || b == null)
            return null;

        return new Rotation
        {
            Degrees = (a.Value.Degrees + b.Value.Degrees) % 360
        };
    }

Effectively it does what you'd expect with the bonus null propagation logic added.
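The null propagation is easy to see in action; a minimal sketch (the struct is repeated so the snippet stands alone):

```csharp
using System;

public struct Rotation
{
    public int Degrees { get; set; }

    public static Rotation operator +(Rotation a, Rotation b)
    {
        return new Rotation { Degrees = (a.Degrees + b.Degrees) % 360 };
    }
}

class Program
{
    static void Main()
    {
        Rotation? a = new Rotation { Degrees = 270 };
        Rotation? b = new Rotation { Degrees = 180 };

        Rotation? c = a + b;                // lifted + operator: both operands non-null
        Console.WriteLine(c.Value.Degrees); // 90

        b = null;
        c = a + b;                          // either operand null -> result is null
        Console.WriteLine(c.HasValue);      // False
    }
}
```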

There are more subtleties involved (such as the qualifying criteria for operators to be lifted, and what the return type, if any, of the lifted operator is), so it is best to consult Section 24.3.1 of the C# Language Specification. Note that Eric Lippert has admitted that the specification is a bit inconsistent in its use of the term "lifted".


Weird C# Edge Cases

Here's a collection of C# quirks that aren't really quirks at all, just side effects of the compiler implementation and the language specification.

Type Inference and the Conditional Operator

The conditional operator (?:) returns one of two values depending on the value of a Boolean expression. Sounds straightforward enough, but try to figure out why a compile error is thrown on line 4.

   1:              int i = 9;
   2:              int j = 8;
   3:              int max = i > j ? i : j;                // works fine 
   4:              int? nullableMax = i > j ? i : null;    // compiler error

The answer lies in Section 7.13 of the C# 3.0 specification which states on page 200-201:
The second and third operands of the ?: operator control the type of the conditional expression. Let X and Y be the types of the second and third operands. Then,
• If X and Y are the same type, then this is the type of the conditional expression.
• Otherwise, if an implicit conversion (§6.1) exists from X to Y, but not from Y to X, then Y is the type of the conditional expression.
• Otherwise, if an implicit conversion (§6.1) exists from Y to X, but not from X to Y, then X is the type of the conditional expression.
• Otherwise, no expression type can be determined, and a compile-time error occurs.

In other words, the compiler sees an int, i, and the null literal; since neither converts implicitly to the other's type, no expression type can be determined and a "no implicit conversion" error is reported. The type of the variable being assigned to plays no part in the decision.
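The usual fix is to give one of the operands a nullable type so the conditional expression itself is typed as int?; for example:

```csharp
using System;

class Program
{
    static void Main()
    {
        int i = 9;
        int j = 8;

        // Casting one operand to int? gives the compiler a common type
        // for both branches, so the whole expression is an int?.
        int? nullableMax = i > j ? (int?)i : null;
        Console.WriteLine(nullableMax); // 9
    }
}
```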

By default, integer overflow is silent for runtime-evaluated expressions, but checked for compile-time constants

This catches many programmers out, and usually they only learn about it after it has bitten them. I'd prefer to see overflow checking ON by default: a programmer who knows exactly what they are doing and wants to suppress it can go and turn the checking off.

int a = Int32.MinValue;
a = a - 1999;                   // no exception
int b = Int32.MinValue - 1999;  // compile-time error

Yes, you can go into the Advanced Build Settings in Visual Studio and force the use of the /checked+ command-line switch, but this usually only happens after the programmer has been bitten by the issue. The lesser evil is to let the programmer suffer the performance overhead of (perhaps unwanted) overflow checking rather than unusual program behaviour.
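You don't have to flip the compiler-wide switch, either; the checked and unchecked keywords scope the behaviour to a single expression or block. A small sketch:

```csharp
using System;

class Program
{
    static void Main()
    {
        int a = int.MinValue;

        try
        {
            // checked turns overflow checking on for this block only
            checked
            {
                a = a - 1999;
            }
        }
        catch (OverflowException)
        {
            Console.WriteLine("overflow detected");
        }

        // unchecked suppresses the compile-time constant check,
        // so this wraps silently instead of failing to compile
        int b = unchecked(int.MinValue - 1999);
        Console.WriteLine(b); // 2147481649
    }
}
```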

Arithmetic operators are not defined for bytes and shorts

In the code shown below we are trying to add two shorts, but since the addition operator (+) is not defined for shorts, the operands are promoted to 32-bit integers and the addition takes place on those. This upcast is implicit since it is a widening conversion; however, the addition produces a System.Int32 result, which must then be explicitly downcast to a short because that is a narrowing conversion. That leaves many programmers scratching their heads!

short x = 1;
short y = 1;
short z = x + y;            // compile-time error
short zz = (short)(x + y);  // no error
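A related wrinkle: the compound assignment operators insert the narrowing cast for you (the spec treats z += y here as z = (short)(z + y)), so the shorthand compiles where the longhand does not:

```csharp
using System;

class Program
{
    static void Main()
    {
        short x = 1;
        short y = 1;

        short z = x;
        // z = z + y;   // would not compile: int result can't implicitly narrow to short
        z += y;         // compiles: compound assignment inserts the (short) cast
        Console.WriteLine(z); // 2
    }
}
```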

Division By Zero on Floating Points

Take a look at the code below. Will a DivideByZeroException be thrown on line 3?

   1:              double i = 7.5;
   2:              double j = 0;
   3:              Console.WriteLine( i/j );   // will this throw at run-time?

The answer is NO, it won't. If the data types were integral it certainly would be, but dividing a floating-point number by zero results in infinity, not an exception!
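You can test for the IEEE 754 special values directly on System.Double; a quick sketch:

```csharp
using System;

class Program
{
    static void Main()
    {
        double i = 7.5;
        double j = 0;

        Console.WriteLine(double.IsPositiveInfinity(i / j));  // True
        Console.WriteLine(double.IsNegativeInfinity(-i / j)); // True
        Console.WriteLine(double.IsNaN(j / j));               // True (0.0/0.0 is NaN)
    }
}
```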

Bankers Rounding By Default

This is truly bizarre. The Math.Round() function in .NET doesn't really follow the Principle of Least Astonishment. Check this out...

        static void Main()
        {
            Console.WriteLine(Math.Round(-2.5));    // -2
            Console.WriteLine(Math.Round(-1.5));    // -2
            Console.WriteLine(Math.Round(-0.5));    // 0
            Console.WriteLine(Math.Round(0.0));     // 0
            Console.WriteLine(Math.Round(0.5));     // 0
            Console.WriteLine(Math.Round(1.5));     // 2
            Console.WriteLine(Math.Round(2.5));     // 2
        }

Math.Round() uses Bankers' Rounding (round half to even), which is not what most people would expect. To get the behaviour you'd assume it has by default, you need to do this:

Math.Round(0.5, MidpointRounding.AwayFromZero)

Method Overload Resolution

I originally came across this on Jon Skeet's website; he apparently got it from Ayende. Try to figure out which method is called and what is printed to the console...

    static void Main()
    {
        Foo("Hello");
    }

    static void Foo(object x)
    {
        Console.WriteLine("object");
    }

    static void Foo<T>(params T[] x)
    {
        Console.WriteLine("params T[]");
    }

It hits the generic Foo<T>(params T[] x) method, but why? According to Section 7.4.3 of the C# 3.0 Specification, overload resolution works in two parts: the first identifies the set of applicable function members, and the second finds the "better function" match among them. In this case there are two applicable candidates with slightly different parameter lists: (object x), and, with T inferred as string, (params string[] x). Because the second candidate has a parameter array, the compiler considers both its "normal form" and its "expanded form". To quote the spec...

"For a function member that includes a parameter array, if the function member is applicable ..., it is said to be applicable in its normal form. If a function member that includes a parameter array is not applicable in its normal form, the function member may instead be applicable in its expanded form. The expanded form is constructed by replacing the parameter array in the function member declaration with zero or more value parameters of the element type of the parameter array such that the number of arguments in the argument list A matches the total number of parameters "

The specification then defines the logic for choosing the "better function" and, to cut a long story short, it picks (string x) - the expanded form of the parameter-array candidate - as a better match than (object x).
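If you actually want the (object x) overload, you have to take the string-typed candidate out of play with a cast; with an object-typed argument the two candidates tie on conversions, and the non-generic method wins the tie-break:

```csharp
using System;

class Program
{
    static void Main()
    {
        Foo("Hello");         // prints "params T[]" - the expanded form wins
        Foo((object)"Hello"); // prints "object" - non-generic beats the generic candidate
    }

    static void Foo(object x)
    {
        Console.WriteLine("object");
    }

    static void Foo<T>(params T[] x)
    {
        Console.WriteLine("params T[]");
    }
}
```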

More Method Overloading Intrigue

Look at the code below (shamelessly stolen from Jon Skeet's website) and try to figure out what is printed to the console.

    class Base
    {
        public virtual void DoSomething(int x)
        {
            Console.WriteLine("Base.DoSomething(int)");
        }
    }

    class Derived : Base
    {
        public override void DoSomething(int x)
        {
            Console.WriteLine("Derived.DoSomething(int)");
        }

        public void DoSomething(object o)
        {
            Console.WriteLine("Derived.DoSomething(object)");
        }
    }

    class Test
    {
        static void Main()
        {
            Derived d = new Derived();
            int i = 10;
            d.DoSomething(i);
        }
    }

Jon Skeet explained it best..."Derived.DoSomething(object) is printed - when choosing an overload, if there are any compatible methods declared in a derived class, all signatures declared in the base class are ignored - even if they're overridden in the same derived class!"
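If you really want the override to be chosen, a common workaround is to call through a Base-typed reference so the signatures newly declared in Derived are not considered; virtual dispatch still runs the override:

```csharp
using System;

class Base
{
    public virtual void DoSomething(int x) { Console.WriteLine("Base.DoSomething(int)"); }
}

class Derived : Base
{
    public override void DoSomething(int x) { Console.WriteLine("Derived.DoSomething(int)"); }
    public void DoSomething(object o) { Console.WriteLine("Derived.DoSomething(object)"); }
}

class Test
{
    static void Main()
    {
        Derived d = new Derived();
        int i = 10;

        d.DoSomething(i);         // prints "Derived.DoSomething(object)"
        ((Base)d).DoSomething(i); // prints "Derived.DoSomething(int)" via virtual dispatch
    }
}
```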

Nullable Type Boxing

Saw this one on StackOverflow - the original came from Marc Gravell. Try to guess why a NullReferenceException is thrown...

    static void Foo<T>() where T : new()
    {
        T t = new T();
        Console.WriteLine(t.ToString());    // works fine
        Console.WriteLine(t.GetHashCode()); // works fine
        Console.WriteLine(t.Equals(t));     // works fine

        // so it looks like an object and smells like an object...

        // but this throws a NullReferenceException...
        Console.WriteLine(t.GetType());
    }

    static void Main()
    {
        Foo<int?>();
    }

Answer: all of those methods are overridden by Nullable&lt;T&gt; except GetType(), which can't be; so to call object.GetType() the value must be boxed to object first. Interestingly, an empty Nullable boxes to null, not to a box containing an empty Nullable - so the call is made on a null reference and throws.
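The boxing behaviour is easy to observe on its own: an empty Nullable&lt;T&gt; boxes to a null reference, and a non-empty one boxes to the underlying type, not to Nullable&lt;T&gt;:

```csharp
using System;

class Program
{
    static void Main()
    {
        int? empty = null;
        int? full = 42;

        object boxedEmpty = empty; // boxing an empty Nullable<int> yields null
        object boxedFull = full;   // boxing a full one yields a boxed int

        Console.WriteLine(boxedEmpty == null);  // True
        Console.WriteLine(boxedFull.GetType()); // System.Int32, not Nullable<Int32>
    }
}
```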


Thread Safety under .Net

"In general-purpose software engineering practice, we have reached a point where one approach to concurrent programming dominates all others, namely, threads." (The Problem with Threads)

Way back in the .Net 1.1 release, circa 2003, Microsoft gave adventurous developers the ability to create multi-threaded applications. That ability brought with it the need for thread synchronization, avoidance of race conditions, deadlock handling, and new approaches to debugging.

In an effort to make object synchronization easier, Microsoft introduced the lock keyword, which lets the developer wrap a critical section of code in a mutual-exclusion lock. This is a nice piece of syntactic sugar wrapping calls to System.Threading.Monitor.Enter() and Exit(), and it makes it easier for developers to serialize thread access in a multi-threaded environment. However, as numerous other bloggers have reported (IanG on Tap, Dimitri Glazkov, Eric Gunnerson), it does not let you easily specify a timeout period, which means the waiting thread blocks indefinitely. This is not at all desirable, since it is often better to abort the operation after a predetermined time period because of the possibility of deadlocks (thread #1 holds a lock on resource A and wants to acquire a lock on resource B, while thread #2 holds the lock on resource B and wants to acquire the lock on resource A).

So advance five or so years, and three iterations of the .Net framework - now at version 3.5 - and we still have no way to pass a timeout period to the lock keyword. You can, of course, achieve this by writing the code shown here: Yet More Timed Locking by IanG.

The consensus amongst several posters was to implement the solution as follows:

  • using a custom-written struct to offer the new functionality;
  • using a struct instead of a class to avoid a heap allocation;
  • calling Monitor.TryEnter() rather than Monitor.Enter();
  • using the Monitor class in preference to the ReaderWriterLock since it's cheaper to acquire locks this way;
  • using the Monitor class in preference to a WaitHandle for the same reason;
  • throwing a custom exception if the timeout period passes before the lock is acquired;
  • avoiding try/catch logic that suppresses exceptions occurring inside the protected critical section, since swallowing them can leave the protected state inconsistent after the lock is released;
  • implementing IDisposable to ensure the acquired lock is released once the struct goes out of scope - rather than relying on C# destructors, since they are actually finalizers run by the GC in a non-deterministic fashion; and
  • making the IDisposable.Dispose() method public, to avoid the compiler boxing the struct to the IDisposable interface before calling Dispose().
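Pulled together, those points look something like the sketch below. The TimedLock and LockTimeoutException names are illustrative placeholders in the spirit of IanG's version, not a copy of his code:

```csharp
using System;
using System.Threading;

// A minimal sketch following the consensus points above.
public struct TimedLock : IDisposable
{
    private readonly object _target;

    private TimedLock(object target)
    {
        _target = target;
    }

    public static TimedLock Lock(object target, TimeSpan timeout)
    {
        // TryEnter rather than Enter, so we can give up after the timeout
        if (!Monitor.TryEnter(target, timeout))
            throw new LockTimeoutException();

        return new TimedLock(target);
    }

    // A public Dispose on the struct itself lets 'using' call it
    // without boxing to the IDisposable interface.
    public void Dispose()
    {
        Monitor.Exit(_target);
    }
}

public class LockTimeoutException : Exception
{
    public LockTimeoutException() : base("Timeout waiting for lock") { }
}

// Usage mirrors the lock keyword, with a timeout:
//
// using (TimedLock.Lock(resource, TimeSpan.FromSeconds(5)))
// {
//     // critical section
// }
```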

The net result here (no pun intended) is a fairly decent workaround for a language limitation, but why, oh why, after so many years is that limitation still present? Had Microsoft added an extra, optional timeout parameter to the lock keyword, or implemented an alternative trylock(obj, timeout, delegate) keyword, this whole detour could have been avoided, resulting in far cleaner code and fewer knowledge leaps for developers to reach the "correct" object-synchronization solution. That would have been even sweeter syntactic sugar indeed!