Archive for June, 2009


Concurrent Programming: The Basics

For a number of years now, microprocessor designers have been focusing on increasing core count rather than individual clock rate, even though Moore's Law (ever-increasing transistor density) still holds. As multi-core processors become more prevalent, programs need to be multi-threaded to take advantage of the parallel processing capabilities on offer. But concurrent programming is hard. Damn hard. One solution is to use functional languages like Erlang, Haskell or F#, which make concurrent programming easier for the developer since they often employ immutable "variables", making it significantly easier to reason about state management (read: there are no side effects). However, old-school folks who have to use their current language (C++/Java/C#) can use threads to achieve concurrency, as long as they also employ various constructs/patterns to manage the synchronisation of those threads.

It might seem obvious but you need to remember that concurrency is for performance, and there are generally 3 ways in which concurrent execution can improve performance:

  1. to reduce latency (making the operation execute faster);
  2. to hide latency (avoiding blocking other operations whilst slow operations execute); or
  3. to increase throughput (making the system perform more work per unit time).

Reducing Latency
Reducing latency requires that the unit of work under investigation be "parallelizable". That is, you need an algorithm that lets you divide the work into chunks that can execute in parallel, plus a way of putting the chunks back together to complete the work. You also need to ensure that the cost of communication between the processing sub-components, and of re-assembling the interim results, doesn't outweigh the benefit of having parallel operations. Proponents of grid/cloud computing face the same problems.
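
As a toy illustration, here's a minimal Java sketch of that divide/execute/re-assemble pattern, using a sum over a large array as the (conveniently parallelizable) unit of work. This is just one way to structure it, not the only one:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelSum {
    public static long sum(final long[] data, int chunks) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(chunks);
        List<Future<Long>> partials = new ArrayList<Future<Long>>();
        int chunkSize = (data.length + chunks - 1) / chunks;

        // Divide: each chunk of the array becomes an independent task.
        for (int i = 0; i < data.length; i += chunkSize) {
            final int from = i;
            final int to = Math.min(i + chunkSize, data.length);
            partials.add(pool.submit(new Callable<Long>() {
                public Long call() {
                    long s = 0;
                    for (int j = from; j < to; j++) s += data[j];
                    return s;
                }
            }));
        }

        // Re-assemble: combine the interim results into the final answer.
        long total = 0;
        for (Future<Long> f : partials) total += f.get();
        pool.shutdown();
        return total;
    }
}

Note that for a task this cheap per element, the thread co-ordination overhead can easily swamp the gains, which is exactly the communication-cost caveat above.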

An example of using concurrency to reduce latency is a database engine servicing a query. At a certain point it knows what data pages it needs, and it knows which of these are already in memory and which are not - in which case they need to be fetched from disk. The database engine can spawn multiple threads to read different data pages off disk into memory. This parallel loading reduces the time taken by the query.
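
Here's a rough sketch of that idea in Java. PageCache and DiskReader are made-up stand-ins for the engine's buffer pool and I/O layer, not any real database API; the point is simply that the page reads overlap instead of happening serially:

import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical interfaces standing in for the engine's internals.
interface PageCache { void put(int pageId, byte[] page); }
interface DiskReader { byte[] readPage(int pageId); }

public class PageLoader {
    private final ExecutorService ioPool = Executors.newFixedThreadPool(4);

    public void loadMissingPages(List<Integer> missingPageIds,
                                 final PageCache cache,
                                 final DiskReader disk) throws InterruptedException {
        final CountDownLatch done = new CountDownLatch(missingPageIds.size());
        for (final int pageId : missingPageIds) {
            // Each page read runs on a worker thread, so the disk reads
            // overlap instead of happening one after another.
            ioPool.execute(new Runnable() {
                public void run() {
                    try {
                        cache.put(pageId, disk.readPage(pageId));
                    } finally {
                        done.countDown();
                    }
                }
            });
        }
        done.await(); // the query resumes once every page is in memory
    }
}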

Obviously, the degree to which one can use concurrency to reduce latency is dependent upon how "parallelizable" the problem is.

Hiding Latency
If the operation is long-running and not amenable to parallelization, then concurrent execution can instead be used to get on with other tasks while the long-running task executes on another thread. Here we aren't reducing the latency of the task, we are just hiding it.

This sort of concurrency is common in UI development, where we wish to improve the perceived responsiveness of the UI, so we employ asynchronous background threads to perform certain operations.
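
A minimal Swing-flavoured sketch of the pattern: the slow work runs on a background thread, and the result is marshalled back onto the UI thread. fetchReport() is a made-up stand-in for any long-running operation (a network call, a big query, etc.):

import javax.swing.JLabel;
import javax.swing.SwingUtilities;

public class BackgroundTask {
    public static void runInBackground(final JLabel status) {
        new Thread(new Runnable() {
            public void run() {
                final String result = fetchReport(); // slow part, off the UI thread
                SwingUtilities.invokeLater(new Runnable() {
                    public void run() {
                        status.setText(result); // UI updates back on the event thread
                    }
                });
            }
        }).start();
        // Control returns to the caller immediately, so the UI stays responsive;
        // the task takes just as long, we've merely hidden its latency.
    }

    private static String fetchReport() {
        try { Thread.sleep(3000); } catch (InterruptedException e) { }
        return "Report ready";
    }
}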

Increasing Throughput
Here, instead of using parallel operations to make a single operation faster (case 1 above), we employ multiple concurrent operations to get more operations processed per unit of time. For example, say we have a service/daemon running on a server that watches the file system, and when it sees an mp3 file appear in a certain directory it fetches the file and performs some transcoding operations on it. If the server is a multi-core machine, it's beneficial to have the daemon use a pool of X threads where each thread is given a single file to process. This increases the throughput of the daemon by virtue of the concurrent operations.
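
A minimal sketch of such a daemon in Java. transcode() and the polling trigger are made-up placeholders; the point is the fixed pool of X worker threads with one file per task:

import java.io.File;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class TranscodeDaemon {
    private static final int X = Runtime.getRuntime().availableProcessors();
    private final ExecutorService workers = Executors.newFixedThreadPool(X);

    // Called whenever the watcher notices new files in the directory.
    public void poll(File watchedDir) {
        File[] files = watchedDir.listFiles();
        if (files == null) return;
        for (final File f : files) {
            if (!f.getName().endsWith(".mp3")) continue;
            // One file per task: with X cores kept busy, X files are
            // processed concurrently instead of one at a time.
            workers.execute(new Runnable() {
                public void run() {
                    transcode(f);
                }
            });
        }
    }

    private void transcode(File f) {
        // made-up placeholder for the actual transcoding work
    }
}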

In a future post I'll dive into the details of some locking constructs and also some of the patterns that make concurrent programming easier.


Netflix: Open Collaboration is Recommended

Judging by the leaderboard, the Netflix grand prize is in sight for a select few researchers, including Pragmatic Theory. The target RMSE is 0.8563, and the best entry as of the time this post is being written is 0.8582.
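
For reference, RMSE (root mean squared error) measures how far the predicted ratings are from the actual ones over the N held-out ratings, with lower being better:

\[ \mathrm{RMSE} = \sqrt{ \frac{1}{N} \sum_{i=1}^{N} \left( \hat{r}_i - r_i \right)^2 } \]

where \( r_i \) is a user's true rating and \( \hat{r}_i \) the algorithm's prediction. The 10% target is relative to Cinematch's RMSE on the same held-out set.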

If you are at all interested in machine learning, AI or operations research, you will have heard about the Netflix competition that's been ongoing for over 3 years. If not, be advised that the online movie-rental company Netflix has been running an open competition, with 1 million USD up for grabs, for anyone who can invent a collaborative filtering algorithm for movie ratings that beats their in-house algorithm (Cinematch) by 10% by 2011.

Because it's in their best interest to assist researchers, Netflix has published a training data set of 100 million ratings stripped of all PII. When the competition started back in 2006 they actually had 103 million ratings, so the roughly 3 million they didn't include in the training set are what's used to evaluate submitted recommendation systems. In the world of machine learning the golden rule is: the more data you have, the better!

Netflix will clearly benefit from such an improvement when one is found - which looks to be real soon! - since it directly translates to increased movie rental revenue and lower subscription cancellation rates. Since they charge a fixed monthly subscription fee, they know that users who don't rent enough movies per period will realise they are not getting value for money and will cancel their subscription. Hence the goal for Netflix is to figure out what each customer likes and keep a ready supply of recommendations, so the user never runs out of movies they want to see. We also need to be cognisant of the fact that in the online world, where inventory holding costs are negligible thanks to digital storage, movie-retailers like Netflix can carry a significantly greater inventory of movies than traditional bricks-and-mortar rental outlets. The poor user is thus faced with the paradox of choice, making good recommendations even more important to the business model.

It's been an incredibly shrewd move by Netflix, as it is a cost-effective way for them to harness the resources of the (interested) research community for a fixed budget within a fixed timeframe. So in effect all 3 pillars of the infamous project-triangle are fixed! That's a project manager's dream. If they tried to do the R&D in-house, they'd be unlikely to attract a team of individuals that could outperform the "open community", and in all likelihood it would take them longer and cost them more than 1 million USD to advance the science to the level they want. In essence, they are employing the wisdom of crowds to good effect.

What's interesting is the type of folks who have done well in the competition, and the degree of collaboration between participants. As expected, there is a decent smattering of professional research labs and academic mathematics departments near the top of the leaderboard, but there are also lone researchers and participants who aren't recognised experts in the field. One classic example is Gavin Potter who, competing under the guise of "Just a guy in a garage", got massive exposure from this article in Wired magazine for applying more non-mathematical notions in his approach, which did fairly well for a while. And even with 1 million dollars up for grabs, many of the teams entered openly share their approaches and experiences with others. If you ever wanted an example of how collaboration and the "wisdom of crowds" can advance our knowledge then, Wikipedia aside, this is it. You have to think that, for the same reasons, open source software must eventually overtake proprietary software systems, if it hasn't already, provided enough people contribute.


Startups: The Journey is the Reward

This is one of the most accurate summaries of start-up environments that I've seen. You are in a small organisation, small enough that your good work gets noticed and your ideas can quickly have a big impact, and unlike a larger organisation where people are specialists, start-up technical personnel need to be multi-skilled. If you are an individual with an insatiable desire to learn and push your own envelope, such an environment is extremely beneficial: technical staff face a stimulating variety of issues and, by necessity, a fast-growing company with limited cash will often make room for your ideas and ambitions so long as the work gets done.

If you've never worked at a start-up have a read of The Siren Song of Startups.
