
## Sum of an Arithmetic Progression

An arithmetic progression is a sequence of numbers in which the difference between successive terms is constant. An example is 1, 2, 3, 4, 5, 6, 7, 8..., which increases by one each time. More generally, the i-th term of an arithmetic progression, where d is the constant difference and a0 is the first term in the sequence, is given by the formula:

ai = a0 + i*d

There is a formula for the sum of the first n terms, but you need not memorise it since you can derive it on the spot with common sense. Take the simple case where a0 = 1 and d = 1, i.e. the sequence 1, 2, 3, ..., n. If you pick the first and last elements of the sequence and sum them, you get n + 1. Setting those two aside and doing the same thing again gives 2 + (n-1) = n + 1, the same result. Do it once more and you again get the same result: 3 + (n-2) = n + 1. A pattern emerges: each time you select and remove the first and last elements of the sequence, their sum is n + 1, so the total must be n + 1 multiplied by the number of pairs you can extract from the sequence, which is n/2. Hence the sum of the first n terms of this particular sequence is (n+1)*n/2. A similar argument applies regardless of the value of d.

Another way to visualise the result is to write down the sequence of numbers, then just below it write the same sequence in reverse order. Summing each vertical pair gives n + 1 every time, and there are n such pairs, so the two rows together total n*(n+1); halving this gives the sum of a single row, n*(n+1)/2.
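To see the pairing argument in action, here's a quick sketch (in Python, just for brevity) that checks the closed form against a brute-force sum:

```python
def ap_sum(n, a0=1, d=1):
    """Sum of the first n terms of an arithmetic progression,
    via the pairing argument: n/2 pairs, each totalling (first + last)."""
    last = a0 + (n - 1) * d
    return n * (a0 + last) // 2  # n*(first+last) is always even, so // is exact

# Brute-force cross-checks: the simple d=1 case, and a general one.
assert ap_sum(100) == sum(range(1, 101))                  # 5050
assert ap_sum(10, a0=3, d=4) == sum(3 + 4 * i for i in range(10))
```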

It's amazing how many mathematical puzzles and technical interview questions reduce to this simple summation. Don't believe me? Here's a sample of puzzles and questions that are all easily solved once you know the sum of an arithmetic progression:

1. In a sequential search, what is the average number of comparisons it takes to search through n elements?
2. You are given 100 boxes; all but one contain 100 x 10g weights, while the remaining box contains 100 x 9g weights. You need to identify the box that is lighter than the others, but you can only use the scales once. [Thanks to Phil Lancaster, a work colleague of mine, for throwing this devious puzzle at me one day.]
3. Prove that the worst-case running time for a bubble sort is O(n²)
4. You have an array of size n-1 which is supposed to contain all the integers between 1 and n, without duplicates, except one number is missing. You need to find the missing number using a constant-space algorithm
5. You have an array of size n-2 which is supposed to contain all the integers between 1 and n, without duplicates, except two numbers are missing. You need to find the missing numbers using a constant-space algorithm

1. If we assume that the search target could be in any position from 1 to n with equal likelihood (i.e. a uniform distribution, aka the distribution of maximum ignorance), then the average number of comparisons equals the sum of all the possibilities divided by n. That is, the sum of the arithmetic progression above divided by n, which is (n+1)/2.
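A quick check of this (sketched in Python; finding the target at position i costs i comparisons):

```python
def avg_comparisons(n):
    """Average comparisons for a sequential search over n equally
    likely positions: the AP sum 1 + 2 + ... + n, divided by n."""
    return sum(range(1, n + 1)) / n

assert avg_comparisons(100) == (100 + 1) / 2  # 50.5
assert avg_comparisons(7) == (7 + 1) / 2      # 4.0
```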

2. Enumerative combinatorics tells us that we'd need at most log2(n) weighings to identify which box has the lighter blocks: divide the remaining boxes into two halves, weigh one half as a total unit, pick the lighter half, and repeat the process. But we are allowed only one weighing! So consider redistributing the weights as follows: take all the weights out, marking them with the box number they came from. Then, using one empty box, put in 1 block from box 1, 2 blocks from box 2, 3 blocks from box 3, ..., and 100 blocks from box 100. From our knowledge of the sum of an arithmetic progression, we know that if all the blocks were 10g the total weight of the box would be 100/2 * (100+1) * 10g = 50,500g. Now, box j contributes j blocks of 10g each, a total contribution of 10j grams. If box j is the lighter box, it contributes only 9j grams instead, a shortfall of exactly j grams. Blocks from every other box weigh exactly what we expect, since only one box originally had lighter blocks. It follows that the difference between the theoretical total from our formula and the actual weight of the specially prepared box always gives the precise number of the original light box.
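The redistribution trick is easy to simulate; here's a Python sketch (the function name and the choice of 100 boxes mirror the puzzle, not any standard API):

```python
def find_light_box(light_box, n=100):
    """Take j blocks from box j (1-indexed). If every block weighed 10g
    the total would be 10 * n(n+1)/2; the light box's 9g blocks make the
    actual total fall short by exactly j grams, where j is its number."""
    expected = 10 * n * (n + 1) // 2                                   # 50,500g for n=100
    actual = sum(j * (9 if j == light_box else 10) for j in range(1, n + 1))
    return expected - actual                                           # = light box number

# The shortfall identifies the light box no matter which one it is.
assert all(find_light_box(j) == j for j in range(1, 101))
```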

3. The bubble sort uses pairwise comparisons of adjacent elements to sort lists. The worst case for the bubble sort is a list in strictly descending order that you are trying to put into ascending order. In this case the first pass makes (n-1) comparisons, the second pass makes (n-2) comparisons, the third pass (n-3), and so on. So in total the number of comparisons is (n-1) + (n-2) + (n-3) + ... + 2 + 1 = n(n-1)/2. Expanded into polynomial form, its highest power of n is 2, hence complexity theory classes this as O(n²).
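A comparison-counting bubble sort (a Python sketch; this simple version has no early-exit check, so the count is exactly the worst-case figure) confirms the n(n-1)/2 total:

```python
def bubble_sort_comparisons(arr):
    """Bubble sort that returns the sorted list and the number of
    pairwise comparisons performed: (n-1) + (n-2) + ... + 1."""
    a, comparisons = list(arr), 0
    for i in range(len(a) - 1):
        for j in range(len(a) - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]  # swap out-of-order neighbours
    return a, comparisons

# Worst case: strictly descending input.
n = 50
result, count = bubble_sort_comparisons(range(n, 0, -1))
assert result == list(range(1, n + 1))
assert count == n * (n - 1) // 2  # 1225 for n=50
```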

4. This seems simple at first: just iterate through the list, updating an array that records which items you have seen. But this approach does not meet the constant-space limitation imposed in the wording of the question. It would require O(n) space, i.e. the space required grows linearly with the size of the list.

But if we allocate space for just a running total, which we update as we iterate through the list, then once we reach the end we can compare the running total with the theoretical total calculated using, you guessed it, the sum of an arithmetic progression. The difference is the missing integer, and this approach has constant space complexity.
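The running-total approach can be sketched in a few lines of Python:

```python
def find_missing(arr, n):
    """arr should hold the integers 1..n with exactly one missing.
    Constant extra space: compare the expected AP sum n(n+1)/2
    with a running total of the array's elements."""
    expected = n * (n + 1) // 2
    return expected - sum(arr)  # the shortfall is the missing number

assert find_missing([1, 2, 4, 5], 5) == 3
assert find_missing([2, 3, 4], 4) == 1
```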

5. This problem is an extension of the previous one. Before, we had a single equation with a single unknown, which was easily solved; now we have a single equation with two unknowns. If you remember anything about solving simultaneous equations, you'll know that one equation is not enough, so we need a second. Once again some simple algebra helps out: as well as summing the terms, we can calculate their product. The theoretical value of the product of 1 to n is n!, which is known (assuming n isn't too large), so comparing the calculated product to the theoretical value gives us a second equation. With two equations in two unknowns we can solve for both missing numbers, and we've used only constant space to do it. Voilà!
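A Python sketch of the sum-and-product approach (Python's arbitrary-precision integers conveniently sidestep the "n isn't too large" caveat):

```python
from math import factorial, isqrt, prod

def find_two_missing(arr, n):
    """arr should hold 1..n with exactly two numbers x and y missing."""
    s = n * (n + 1) // 2 - sum(arr)  # s = x + y  (AP sum minus actual sum)
    p = factorial(n) // prod(arr)    # p = x * y  (n! divided by actual product)
    # x and y are the roots of t^2 - s*t + p = 0.
    root = isqrt(s * s - 4 * p)      # the discriminant is a perfect square here
    return (s - root) // 2, (s + root) // 2

assert find_two_missing([2, 4, 5], 5) == (1, 3)
assert find_two_missing([1, 3, 6, 4], 6) == (2, 5)
```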


## Swapping Integers

### A Great Book
Recently, I've been ploughing through Hacker's Delight by Dr Henry Warren. Despite the misleading title, which suggests a series of tips from an arcane science relevant only to compiler developers, this book is extremely useful in shedding light on various "under the covers" techniques that can permeate up into high-level coding and help "everyday" coders faced with performance and optimisation issues. Right from the get-go, Chapter 2 has some remarkable insights into bitwise operations that led to several lightbulb moments for me.

### A Familiar Problem
For example, Section 2-19 discusses "Exchanging Registers" without using a third register. This is a very common interview question reduced to its most basic form. The interview question usually goes something like this:

How would you swap the value of 2 integers without using a temporary variable?

### The Solution
The solution I learnt at university many years ago was, of course, to resort to clever mathematics. If we are given two integers a and b, we can apply the following three equations (the apostrophes merely indicate that the variables have changed value):

a'  = a + b    (1)
b'  = a' - b   (2)
a'' = a' - b'  (3)

After (1), a holds the sum of the two integers, but we still know what b is, so we can set b to the original a, effecting the first half of the swap, by employing (2). To see this, just do the substitution: b' = (a + b) - b = a.

After this we can set a to the original b to finish the swap routine by applying (3). To see why this works, again consider the substitution: a'' = (a + b) - a = b. No temporary variable is needed because storing the sum of the two integers in one of the variables is enough to let us recover both values with simple algebra. Here's the code in C#:

```
public static void SwapWithAlgebra(ref int a, ref int b)
{
    checked
    {
        a = a + b;
    }
    b = a - b;
    a = a - b;
}
```

Note that we have to use the checked keyword since the addition of two 32-bit signed integers can produce a value that cannot be stored in a 32-bit signed integer, and we'd prefer an OverflowException to be thrown rather than the silent wraparound that is C#'s default behaviour for integer overflow! From a mathematical perspective we know that addition and subtraction are closed over the set of integers, but in most programming languages this is not the case, since a fixed number of bits is allocated to "integers". The inclusion of the checked keyword makes the code safer but slower. It also means we have to catch and handle any thrown exceptions, which is a good reason to consider alternatives...

### Dr Warren's Solution
As well as solutions based on addition and subtraction, Dr Warren gives a slightly different approach using bitwise exclusive OR (XOR):

a'  = a XOR b    (1)
b'  = a' XOR b   (2)
a'' = a' XOR b'  (3)

To understand how this works first note that:

1. XOR produces a 1 only if exactly one of its operands is 1 and the other is 0; and
2. XOR is commutative, so a XOR b = b XOR a; and
3. XOR is associative, so (a XOR b) XOR c = a XOR (b XOR c); and
4. a XOR a = 0 (this follows directly from the definition in [1] above)

After (1), the binary representation of a will have 1-bits only in the bit positions where a and b have opposing bits, that is, either (ak = 1, bk = 0) or (ak = 0, bk = 1). Now when we do the substitution in (2) we get:

b' = (a XOR b) XOR b
   = a XOR (b XOR b)    because XOR is associative
   = a XOR 0            because of [4] above
   = a                  due to the definition of XOR (see [1] above)

Now we can substitute into (3):

a'' = (a XOR b) XOR a
    = (b XOR a) XOR a   because XOR is commutative
    = b XOR (a XOR a)   because XOR is associative
    = b XOR 0           because of [4] above
    = b                 due to the definition of XOR (see [1] above)

Here's the code in C#:

```
public static void SwapWithXOR(ref int a, ref int b)
{
    a ^= b;
    b ^= a;
    a ^= b;
}
```

### Measuring the Results
As always when dealing with performance options, it's imperative that we collect actual data via measurement and base our recommendations on that. Accordingly, here's a simple test program that performs 1 million iterations of several solutions to this puzzle:

```
using System;
using System.Diagnostics;

namespace Algorithms
{
    static class SwapInts
    {
        public static void SwapWithXOR(ref int a, ref int b)
        {
            a ^= b;
            b ^= a;
            a ^= b;
        }

        public static void SwapWithAlgebra(ref int a, ref int b)
        {
            checked
            {
                a = a + b;
                b = a - b;
                a = a - b;
            }
        }

        public static void SwapWithTemp(ref int a, ref int b)
        {
            int t = a;
            a = b;
            b = t;
        }

        unsafe public static void SwapWithXORUnsafe(int* a, int* b)
        {
            *a ^= *b;
            *b ^= *a;
            *a ^= *b;
        }

        unsafe public static void SwapWithTempUnsafe(int* a, int* b)
        {
            int t = *a;
            *a = *b;
            *b = t;
        }

        static void Main()
        {
            const int LoopMax = 1000000;

            int x = 455;
            int y = 1123;

            Stopwatch sw = new Stopwatch();
            sw.Start();
            for (int l = 0; l < LoopMax; l++)
                SwapWithXOR(ref x, ref y);
            sw.Stop();
            Console.WriteLine(x);
            Console.WriteLine(y);
            Console.WriteLine("Timing for XOR-based swap: {0}", sw.ElapsedTicks);

            sw = new Stopwatch();
            sw.Start();
            for (int l = 0; l < LoopMax; l++)
                SwapWithTemp(ref x, ref y);
            sw.Stop();
            Console.WriteLine(x);
            Console.WriteLine(y);
            Console.WriteLine("Timing for TempVar-based swap: {0}", sw.ElapsedTicks);

            sw = new Stopwatch();
            sw.Start();
            for (int l = 0; l < LoopMax; l++)
                SwapWithAlgebra(ref x, ref y);
            sw.Stop();
            Console.WriteLine(x);
            Console.WriteLine(y);
            Console.WriteLine("Timing for Algebra-based swap: {0}", sw.ElapsedTicks);

            sw = new Stopwatch();
            sw.Start();
            for (int l = 0; l < LoopMax; l++)
            {
                unsafe
                {
                    SwapWithXORUnsafe(&x, &y);
                }
            }
            sw.Stop();
            Console.WriteLine(x);
            Console.WriteLine(y);
            Console.WriteLine("Timing for XOR unsafe-based swap: {0}", sw.ElapsedTicks);

            sw = new Stopwatch();
            sw.Start();
            for (int l = 0; l < LoopMax; l++)
            {
                unsafe
                {
                    SwapWithTempUnsafe(&x, &y);
                }
            }
            sw.Stop();
            Console.WriteLine(x);
            Console.WriteLine(y);
            Console.WriteLine("Timing for TempVar unsafe-based swap: {0}", sw.ElapsedTicks);
        }
    }
}
```

From this we get the following results:

Timing for XOR-based swap:            50,877
Timing for TempVar-based swap:        36,061
Timing for Algebra-based swap:        36,318
Timing for XOR unsafe-based swap:     63,235
Timing for TempVar unsafe-based swap: 37,904

### Conclusion
If you get this question at an interview you can offer two alternative solutions: one based on addition and subtraction, and one based on XOR. However, it's wise to point out that the version using a temporary variable performs better than the XOR version, and here's why: the temporary-variable solution uses 3 assignment operations, whereas the XOR solution uses 3 assignment operations plus 3 bitwise XOR operations, hence the temporary-variable approach should be quicker. Interestingly, my results show the addition/subtraction version is roughly as fast as the temporary-variable version, but it has the slight inconvenience of not working in all cases due to overflow exceptions. Given that, my recommendation would be to use the temporary-variable approach if running time is more important than space complexity, and otherwise to use the XOR version, which is more memory efficient than the temporary-variable version and doesn't suffer from overflow exceptions.