Thursday 22 October 2009

Parallel Extensions in .NET 4.0, Part 1

Last night I attended an NxtGen User Group meeting in Manchester, at which Microsoft guru Mike Taulty gave an introduction to the new parallel programming components of the .NET Framework.

Whether we like it or not, we as developers are going to have to adapt to techniques that run code across multiple processors. The new chips from Intel and AMD are no longer increasing in GHz; instead they simply contain more cores. So, we need to be able to utilise this power.

The good news is that the .NET CLR and certain components, such as WCF and WF, already spread their workload across multiple processors. But the bad news is that our own programs do not.

So, how would you currently handle this? The answer is by using Threads or the ThreadPool. But as Mike demonstrated, this is quite long-winded and makes the code quite difficult to read.
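To see why, here is a rough sketch of what the "manual" approach looks like today: a hypothetical job that sums an array in chunks on the ThreadPool, with a ManualResetEvent and Interlocked calls doing all of the co-ordination by hand (the array size, chunk count and class name are made up for illustration):

using System;
using System.Threading;

class ThreadPoolSum
{
    static void Main()
    {
        int[] data = new int[1000];
        for (int i = 0; i < data.Length; i++) data[i] = i;

        const int chunks = 4;
        int chunkSize = data.Length / chunks;
        long total = 0;
        int pending = chunks;

        using (ManualResetEvent allDone = new ManualResetEvent(false))
        {
            for (int c = 0; c < chunks; c++)
            {
                int start = c * chunkSize;
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    long subtotal = 0;
                    for (int i = start; i < start + chunkSize; i++)
                        subtotal += data[i];

                    Interlocked.Add(ref total, subtotal);          // merge results safely
                    if (Interlocked.Decrement(ref pending) == 0)   // last chunk to finish
                        allDone.Set();                             // signals completion
                });
            }

            allDone.WaitOne();                                     // block until all chunks are done
        }

        Console.WriteLine(total);                                  // prints 499500
    }
}

That is a lot of plumbing just to add some numbers up in parallel, which is exactly the problem the Parallel Extensions set out to solve.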

The idea behind the Parallel Extensions in .NET 4.0 is to abstract the work that can be carried out on multiple processors away from the underlying concept of threads. Microsoft have split the components up into the following three categories:
  1. Co-ordination Data Structures
  2. Task Parallel Library
  3. Parallel LINQ (PLINQ)
Let's start with the Co-ordination Data Structures. This category contains the replacement data structures that are required to support parallel programming. If you have ever done any thread-safe work that involves a collection, you will know that the standard collections in .NET are not thread safe. The only solution is to lock the entire collection when carrying out any work, which frankly is a bit rubbish.
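For example, the usual workaround today looks something like this (the class, list and lock-object names are purely illustrative):

using System.Collections.Generic;

class LockedStore
{
    private readonly List<int> items = new List<int>();
    private readonly object sync = new object();

    public void Add(int value)
    {
        lock (sync)                   // every reader and writer must take the same lock
        {
            items.Add(value);
        }
    }

    public int Count
    {
        get { lock (sync) { return items.Count; } }
    }
}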

The co-ordination data structures contain a selection of concurrent collections to get around this issue, including ConcurrentQueue, ConcurrentStack and ConcurrentDictionary. There are also some higher-level co-ordination helpers, including Barrier (which blocks until a set number of threads have all reached it) and CountdownEvent (a thread-safe countdown that can be signalled from multiple threads).
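As a minimal sketch of how a couple of these fit together (the producer count, values and class name are made up), several threads can enqueue into a ConcurrentQueue without any locking while a CountdownEvent is used to wait for them all:

using System;
using System.Collections.Concurrent;
using System.Threading;

class ConcurrentQueueExample
{
    static void Main()
    {
        ConcurrentQueue<int> queue = new ConcurrentQueue<int>();
        const int producers = 3;

        using (CountdownEvent countdown = new CountdownEvent(producers))
        {
            for (int p = 0; p < producers; p++)
            {
                int producerId = p;
                new Thread(() =>
                {
                    for (int i = 0; i < 5; i++)
                        queue.Enqueue(producerId * 100 + i);   // no lock needed around the queue
                    countdown.Signal();                        // each producer signals once
                }).Start();
            }

            countdown.Wait();                                  // blocks until all producers have signalled
        }

        int item;
        while (queue.TryDequeue(out item))
            Console.Write(item + " ");
        Console.WriteLine();
    }
}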

The last group of co-ordination data structures is the Data Synchronization classes. The main one of interest is the Lazy<T> class. Great name, but what does it do? It gives you thread-safe, lazy initialisation, which makes it ideal for building a thread-safe singleton. For example, say we have a singleton class called MyClass which has a method called MyMethod(). Initialisation of the singleton can be tricky, as several threads may enter the initialisation code at the same time. The solution is to use Lazy<T> to wrap the class like so:

Lazy<MyClass> myInstance = new Lazy<MyClass>();
myInstance.Value.MyMethod(); // MyClass is not constructed until Value is first accessed

This ensures that ONLY one instance of the class is ever created through the Lazy<T> wrapper, no matter how many threads access Value at the same time.
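To flesh that out, here is a minimal sketch of a complete Lazy<T>-backed singleton. MyClass and MyMethod are the placeholder names from above, and the factory delegate and LazyThreadSafetyMode value are just one way of wiring it up:

using System;
using System.Threading;

public sealed class MyClass
{
    // ExecutionAndPublication (also the default) guarantees the factory
    // delegate runs exactly once, even if several threads hit Value together.
    private static readonly Lazy<MyClass> instance =
        new Lazy<MyClass>(() => new MyClass(), LazyThreadSafetyMode.ExecutionAndPublication);

    public static MyClass Instance
    {
        get { return instance.Value; }
    }

    private MyClass()
    {
        // potentially expensive initialisation goes here
    }

    public void MyMethod()
    {
        Console.WriteLine("Hello from the singleton");
    }
}

Calling MyClass.Instance.MyMethod() from any number of threads will only ever run the constructor once.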

In the next part I will cover the Task Parallel Library.
