Friday, February 26, 2010

On satisfaction

It is a great feeling when an elegant piece of code comes together. Even if it can’t be compiled yet.

  Parallel.ForEach(nodeQueue as IOmniValueEnumerable)
    .NumTasks(numTasks)
    .CancelWith(cancelToken)
    .Execute(
      procedure (const elem: TOmniValue)
      var
        childNode: TNode;
        node     : TNode;
      begin
        node := TNode(elem.AsObject);
        if node.Value = value then begin
          nodeResult := node;
          nodeQueue.CompleteAdding;
          cancelToken.Signal;
        end
        else for childNode in node.Children do
          nodeQueue.TryAdd(childNode);
      end);


It is an even better feeling when code that seemed impossible to write starts to work.



And the best feeling comes when the code works so well that you are not afraid of releasing it to the public.



Well, make that almost the best, because there’s something even better – when people call back to tell you that they like using the code and that it helps them do their work faster and better.



The feeling that cannot be surpassed comes when such a happy user says something like: “Thanks for the code, it helps me a lot. There’s an Amazon gift certificate, spend it as you like.”






I can only respond with: “Rico, thanks!”. OmniThreadLibrary 1.05 is dedicated to you.


Thursday, February 25, 2010

OmniThreadLibrary 1.05

As there were no error reports related to the OmniThreadLibrary 1.05 RC, I released the final 1.05 version just a few moments ago. There are almost no changes between the RC and the final release – one demo was added and the Parallel.Join code was tweaked a little.

You can download OTL 1.05 from Google Code. Alternatively, you can update the SVN trunk (checkout instructions) or check out the release-1.05 tag.

Support is available on the web discussion forum.

Big rename

Many internal classes and interfaces were renamed. This should not affect most users.

  • TOmniBaseStack –> TOmniBaseBoundedStack
  • TOmniStack –> TOmniBoundedStack
  • TOmniBaseQueue –> TOmniBaseBoundedQueue
  • TOmniQueue –> TOmniBoundedQueue
  • IInterfaceDictionary –> IOmniInterfaceDictionary
  • IInterfaceDictionaryEnumerator –> IOmniInterfaceDictionaryEnumerator
  • TInterfaceDictionaryPair –> TOmniInterfaceDictionaryPair

I’m sorry for that, but some names were badly chosen and some did not follow the OTL naming conventions.

Dynamic lock-free queue

Implemented a dynamically allocated, O(1) enqueue and dequeue, threadsafe, lock-free queue. The class TOmniBaseQueue contains the base implementation while TOmniQueue adds observer support. Both classes live in the OtlContainers unit.

Read more about the TOmniQueue: Dynamic lock-free queue – doing it right.
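For a first impression, a minimal usage sketch (Enqueue and TryDequeue are the methods the blocking collection uses internally, as shown in the posts below; ProcessValue is a hypothetical handler):

  var
    queue: TOmniQueue;
    value: TOmniValue;
  begin
    queue := TOmniQueue.Create;
    try
      queue.Enqueue(42);               // never blocks; storage grows dynamically
      while queue.TryDequeue(value) do // returns False instead of blocking when the queue is empty
        ProcessValue(value);
    finally
      FreeAndNil(queue);
    end;
  end;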

Inverse semaphore

Implemented TOmniResourceCount (unit OtlSync), a resource counter with empty-state signalling.

Read more: Three steps to the blocking collection: [1] Inverse semaphore.
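A minimal sketch of the inverse-semaphore usage (Create, Allocate, Release and Handle all appear in the blocking collection code discussed below; declaring the variable as an IOmniResourceCount interface is an assumption):

  var
    resCount: IOmniResourceCount; // assumed interface type
  begin
    resCount := TOmniResourceCount.Create(4); // four 'resources'
    // in each of the four worker threads:
    resCount.Allocate; // decrement the counter
    try
      // ... do the work; when the counter drops to zero, the event
      // behind resCount.Handle becomes signalled
    finally
      resCount.Release; // increment the counter back
    end;
  end;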

Blocking collection

New unit OtlCollections contains the blocking collection implementation, TOmniBlockingCollection.

Read more: Three steps to the blocking collection: [3] Blocking collection

Parallel

New high-level parallelism support (unit OtlParallel). Requires at least Delphi 2009.

Two parallel control structures are supported: for each (with optional aggregator) and join.

The demo for Parallel.ForEach can be found in project 35_ParallelFor. The same code is reprinted near the end of the Three steps to the blocking collection: [3] Blocking collection post.

Parallel.ForEach.Aggregate was described in the Parallel.ForEach.Aggregate post and is demoed in project 36_ParallelAggregate.

At the moment, ForEach is fairly limited. It can iterate over a range of numbers or over a collection supporting the IOmniValueEnumerable interface (TOmniBlockingCollection, for example). The second limitation will be removed in the future; the plan is to support any collection that implements IEnumerable.

Parallel.Join is a very simple construct that executes multiple tasks and waits for their completion. It was designed for simple tasks that don’t require communication with the owner. It is demoed in project 37_ParallelJoin.
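A hedged sketch of a Join call (see demo 37_ParallelJoin for the real usage; the anonymous-method overload and the two worker procedures are assumptions):

  // assumes 'uses OtlParallel' and Delphi 2009 or newer
  Parallel.Join(
    procedure begin CrunchNumbers; end,  // hypothetical task 1
    procedure begin CompressFiles; end); // hypothetical task 2
  // Join returns only after both tasks have completed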

Environment

Unit OtlCommon contains the new interface IOmniEnvironment and the function Environment, which returns a singleton of this type. Environment can be used to query basic information about the system, the process and the thread. Some information (for example, process and thread affinity) can also be modified through the same interface.

  IOmniAffinity = interface
    property AsString: string;
    property Count: integer;
    property Mask: DWORD;
  end;

  IOmniProcessEnvironment = interface
    property Affinity: IOmniAffinity;
    property Memory: TOmniProcessMemoryCounters;
    property PriorityClass: TOmniProcessPriorityClass;
    property Times: TOmniProcessTimes;
  end;

  IOmniSystemEnvironment = interface
    property Affinity: IOmniAffinity;
  end;

  IOmniThreadEnvironment = interface
    property Affinity: IOmniAffinity;
    property ID: cardinal;
  end;

  IOmniEnvironment = interface
    property Process: IOmniProcessEnvironment;
    property System: IOmniSystemEnvironment;
    property Thread: IOmniThreadEnvironment;
  end;

Newer demos use parts of the Environment interface. For example, in demo 33_BlockingCollection, process affinity is set with

  Environment.Process.Affinity.Count := inpNumCPU.Value; 

while demo 35_ParallelFor uses the following code fragment to query process affinity:

  numTasks := Environment.Process.Affinity.Count; 

Cancellation token

The new interface IOmniCancellationToken is used in Parallel.ForEach (see the post Three steps to the blocking collection: [3] Blocking collection for an example) and in IOmniTaskControl.TerminateWhen.

IOmniTaskControl and IOmniTask implement a CancellationToken: IOmniCancellationToken property which can be used by both the task and the task controller.

IOmniCancellationToken is just a simple wrapper around the Win32 event primitive and is defined in the OtlSync unit.

  IOmniCancellationToken = interface
    procedure Clear;
    function  IsSignaled: boolean;
    procedure Signal;
    property Handle: THandle;
  end; { IOmniCancellationToken }
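A short sketch of the token in action (CreateOmniCancellationToken is used the same way in the tree-scanning example in the blocking collection post; the worker procedure is an assumption):

  var
    token: IOmniCancellationToken;
  begin
    token := CreateOmniCancellationToken;
    CreateTask(BackgroundWork, 'Worker') // BackgroundWork is a hypothetical task procedure
      .TerminateWhen(token)
      .Run;
    // ... later, from the controller side:
    token.Signal; // the task terminates
  end;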

Message dispatcher

IOmniTaskControl now implements a message-dispatching setter of the form OnMessage(msgID, handler). Use it to route specific message IDs to specific handlers when the global TOmniEventMonitor is not used.

An example from one of my applications:

  spmDatabaseConn := CreateTask(
      TSttdbPlaylistDatabaseWorker.Create(),
      'Playlist Monitor Database Connection')
    .SetParameters([serverAddress, serverPort, username, password])
    .SetTimer(15*1000, @TSttdbPlaylistDatabaseWorker.CheckDBVersion)
    .OnMessage(MSG_DB_ERROR, HandleError)
    .OnMessage(MSG_DB_STATUS, HandleDatabaseStatus)
    .OnMessage(MSG_DB_VERSION, HandleDatabaseVersion)
    .Run;

UserData[]

Implemented IOmniTaskControl.UserData[]. The application can store any values in this array, which can be accessed via an integer or a string index. This storage area can only be accessed from the task controller side. Access is not thread-safe, so you should use it only from one thread or provide your own protection mechanism.
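A small sketch of both indexing styles (the task variable and the stored values are arbitrary; remember that this must run on the controller side):

  task := CreateTask(worker, 'Worker').Run;
  task.UserData['StartedAt'] := Now; // string index
  task.UserData[0] := 42;            // integer index
  // later, still from the controller thread:
  startedAt := task.UserData['StartedAt'];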

Small changes

  • IOmniTask implements an Implementor property which points back to the worker instance (but only if the worker is TOmniWorker-based).
  • Refactored and enhanced TOmniValueContainer.
  • TOmniTaskFunction now takes a 'const' parameter:
    TOmniTaskFunction = reference to procedure(const task: IOmniTask).
  • Implemented TOmniValue.IsInteger.

Bugs fixed

  • TOmniEventMonitor.OnTaskUndeliveredMessage was missing the 'message' parameter.
  • Set package names and designtime/runtime type in D2009/D2010 packages.

New demos

  • 32_Queue: Stress test for the new TOmniBaseQueue and TOmniQueue.
  • 33_BlockingCollection: Stress test for the new TOmniBlockingCollection; also demonstrates the use of Environment to set process affinity.
  • 34_TreeScan: Parallel tree scan using TOmniBlockingCollection.
  • 35_ParallelFor: Parallel tree scan using Parallel.ForEach (Delphi 2009 and newer).
  • 36_ParallelAggregate: Parallel calculations using Parallel.ForEach.Aggregate (Delphi 2009 and newer).


Monday, February 22, 2010

Three steps to the blocking collection: [3] Blocking collection

About two months ago I started working on a Delphi clone of the .NET 4 BlockingCollection. The initial release was completed just before the end of 2009, and I started writing a series of articles on TOmniBlockingCollection in early January, but then I got stuck in the dynamic lock-free queue implementation. Instead of writing articles, I spent most of my free time working on that code.

Now it is (finally) time to complete the journey. Everything that had to be said about the infrastructure has been said, and I only have to show you the internal workings of the blocking collection itself.

[Step 1: Three steps to the blocking collection: [1] Inverse semaphore]

[Step 2: Dynamic lock-free queue – doing it right]

The blocking collection is exposed as an interface that lives in the OtlCollections unit.

  IOmniBlockingCollection = interface(IGpTraceable)
    ['{208EFA15-1F8F-4885-A509-B00191145D38}']
    procedure Add(const value: TOmniValue);
    procedure CompleteAdding;
    function  GetEnumerator: IOmniValueEnumerator;
    function  IsCompleted: boolean;
    function  Take(var value: TOmniValue): boolean;
    function  TryAdd(const value: TOmniValue): boolean;
    function  TryTake(var value: TOmniValue; timeout_ms: cardinal = 0): boolean;
  end; { IOmniBlockingCollection }

There’s also a class TOmniBlockingCollection which implements this interface. This class is public and can be used or reused in your code.

The blocking collection works in the following way:

  • Add adds a new value to the collection (which is internally implemented as a FIFO queue – first in, first out).
  • CompleteAdding tells the collection that all data is in the queue. From then on, calling Add will raise an exception.
  • TryAdd is the same as Add except that it doesn’t raise an exception; it returns False if the value can’t be added.
  • IsCompleted returns True after CompleteAdding has been called.
  • Take reads the next value from the collection. If there’s no data in the collection, Take will block until the next value is available. If, however, any other thread calls CompleteAdding while Take is blocked, Take will unblock and return False.
  • TryTake is the same as Take except that it has a timeout parameter specifying the maximum time the call is allowed to wait for the next value.
  • The enumerator calls Take in its MoveNext method and returns that value. The enumerator will therefore block when there is no data in the collection. The usual way to stop the enumeration is to call CompleteAdding, which unblocks all pending MoveNext calls. [For another approach, see the example at the end of this article.] A short usage sketch follows this list.
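For illustration, a minimal single-threaded sketch of this API (in real use the producer and consumer would run in different threads; ProcessValue is a hypothetical handler):

  var
    coll : IOmniBlockingCollection;
    value: TOmniValue;
  begin
    coll := TOmniBlockingCollection.Create(0); // 0 = no resource counting; see the constructor later in this post
    coll.Add(1);
    coll.Add(2);
    coll.CompleteAdding;       // no more data will be added
    while coll.Take(value) do  // returns False once the collection is completed and empty
      ProcessValue(value);
  end;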

The trivial parts

Most of the blocking collection code is fairly trivial.

Add just calls TryAdd and raises an exception if TryAdd fails.

procedure TOmniBlockingCollection.Add(const value: TOmniValue);
begin
  if not TryAdd(value) then
    raise ECollectionCompleted.Create('Adding to completed collection');
end; { TOmniBlockingCollection.Add }

CompleteAdding sets two “completed” flags – a boolean flag and a Windows event. The former is used for speed in non-blocking tests while the latter is used when TryTake has to block.

procedure TOmniBlockingCollection.CompleteAdding;
begin
  if not obcCompleted then begin
    obcCompleted := true;
    Win32Check(SetEvent(obcCompletedSignal));
  end;
end; { TOmniBlockingCollection.CompleteAdding }

Take calls TryTake with an INFINITE timeout.

function TOmniBlockingCollection.Take(var value: TOmniValue): boolean;
begin
  Result := TryTake(value, INFINITE);
end; { TOmniBlockingCollection.Take }

TryAdd checks if CompleteAdding has been called. If not, the value is stored in the dynamic queue.

There’s a potential problem hiding in TryAdd – between the time the completed flag is checked and the time the value is enqueued, another thread may call CompleteAdding. Strictly speaking, TryAdd should not succeed in that case. However, I cannot foresee a parallel algorithm where this could cause a problem.

function TOmniBlockingCollection.TryAdd(const value: TOmniValue): boolean;
begin
  // CompleteAdding and TryAdd are not synchronised
  Result := not obcCompleted;
  if Result then
    obcCollection.Enqueue(value);
end; { TOmniBlockingCollection.TryAdd }

Easy peasy.

The not so trivial part

And now for something completely different …

TryTake is a whole different beast. It must:

  • retrieve the data
  • observe IsCompleted
  • block when there’s no data and the collection is not yet completed
  • observe the timeout limitations

Not so easy.

In addition to obcCompletedSignal (the completed event) and obcCollection (the dynamic data queue), it also uses obcObserver (a queue change notification mechanism used inside the OTL) and obcResourceCount, an instance of TOmniResourceCount (the inverse semaphore introduced in Part 1). All of these are created in the constructor:

constructor TOmniBlockingCollection.Create(numProducersConsumers: integer);
begin
  inherited Create;
  if numProducersConsumers > 0 then
    obcResourceCount := TOmniResourceCount.Create(numProducersConsumers);
  obcCollection := TOmniQueue.Create;
  obcCompletedSignal := CreateEvent(nil, true, false, nil);
  obcObserver := CreateContainerWindowsEventObserver;
  obcSingleThreaded := (Environment.Process.Affinity.Count = 1);
  if obcSingleThreaded then
    obcCollection.ContainerSubject.Attach(obcObserver, coiNotifyOnAllInserts);
end; { TOmniBlockingCollection.Create }

TryTake is pretty long so I’ve split it into two parts. Let’s take a look at the non-blocking part first.

First, the code tries to retrieve data from the dynamic queue. If there’s data available, it is returned. End of story.

Otherwise, the completed flag is checked. If CompleteAdding has been called, TryTake returns immediately. It also returns immediately if the timeout is 0.

Otherwise, the code prepares for the blocking wait. A resource counter is allocated (the reasons for this will be provided later) and an observer is attached to the blocking collection. This observer will wake the blocked code when a new value is stored in the collection.

[In the code below you can see a small optimization – if the code is running on a single core, the observer is attached in the TOmniBlockingCollection constructor and detached in the destructor. Before this optimization was introduced, Attach and Detach spent far too much time in busy-wait code (on a single-core computer).]

After all that is set up, the code waits for a value (see the next code block), the observer is detached from the queue and the resource counter is released.

function TOmniBlockingCollection.TryTake(var value: TOmniValue;
  timeout_ms: cardinal): boolean;
var
  awaited    : DWORD;
  startTime  : int64;
  waitHandles: array [0..2] of THandle;
begin
  if obcCollection.TryDequeue(value) then
    Result := true
  else if IsCompleted or (timeout_ms = 0) then
    Result := false
  else begin
    if assigned(obcResourceCount) then
      obcResourceCount.Allocate;
    try
      if not obcSingleThreaded then
        obcCollection.ContainerSubject.Attach(obcObserver, coiNotifyOnAllInserts);
      try
        //wait for the value, see the next code block below
      finally
        if not obcSingleThreaded then
          obcCollection.ContainerSubject.Detach(obcObserver, coiNotifyOnAllInserts);
      end;
    finally
      if assigned(obcResourceCount) then
        obcResourceCount.Release;
    end;
  end;
end; { TOmniBlockingCollection.TryTake }

The blocking part starts by storing the current time (the millisecond-accurate TimeGetTime is used) and preparing the wait handles. Then it enters a loop which repeats until CompleteAdding has been called, the timeout has elapsed (tested by the Elapsed function, which I’m not showing here for the sake of simplicity; see the source) or a value has been dequeued.
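For completeness, here is a hedged reconstruction of those helpers as they could be written as local functions inside TryTake (the actual OTL source may differ in detail; startTime and timeout_ms come from the surrounding TryTake):

  function Elapsed: boolean;
  begin
    if timeout_ms = INFINITE then
      Result := false
    else
      Result := (startTime + timeout_ms) < DSiTimeGetTime64;
  end; { Elapsed }

  function TimeLeft_ms: DWORD;
  var
    intTime: int64;
  begin
    if timeout_ms = INFINITE then
      Result := INFINITE
    else begin
      intTime := startTime + timeout_ms - DSiTimeGetTime64;
      if intTime < 0 then
        Result := 0
      else
        Result := DWORD(intTime);
    end;
  end; { TimeLeft_ms }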

In the loop, the code tries again to dequeue a value from the dynamic queue and exits the loop if the dequeue succeeds. Otherwise, WaitForMultipleObjects is called. This wait waits for one of three conditions:

  • Completed event. If this event is signalled, CompleteAdding has been called and TryTake must exit.
  • Observer event. If this event is signalled, new value was enqueued into the dynamic queue and code must try to dequeue this value.
  • Resource count event. If this event is signalled, all resources are used and the code must exit (more on that later).

  startTime := DSiTimeGetTime64;
  waitHandles[0] := obcCompletedSignal;
  waitHandles[1] := obcObserver.GetEvent;
  if assigned(obcResourceCount) then
    waitHandles[2] := obcResourceCount.Handle;
  Result := false;
  while not (IsCompleted or Elapsed) do begin
    if obcCollection.TryDequeue(value) then begin
      Result := true;
      break; //while
    end;
    awaited := WaitForMultipleObjects(IFF(assigned(obcResourceCount), 3, 2),
      @waitHandles, false, TimeLeft_ms);
    if awaited <> WAIT_OBJECT_1 then begin
      if awaited = WAIT_OBJECT_2 then
        CompleteAdding;
      Result := false;
      break; //while
    end;
  end;

If a new value was enqueued into the dynamic queue, TryDequeue is called again. It is entirely possible that another thread calls that function first and removes the value, causing TryDequeue to fail and WaitForMultipleObjects to be called again. Such is life in the multithreaded world.

Enumerating the blocking collection

The TOmniBlockingCollection enumerator is slightly more powerful than the usual Delphi enumerator. In addition to the usual methods, it contains the function Take, which is required by the Parallel architecture (see Parallel.For and Parallel.ForEach.Aggregate for more information).

  IOmniValueEnumerator = interface ['{F60EBBD8-2F87-4ACD-A014-452F296F4699}']
    function  GetCurrent: TOmniValue;
    function  MoveNext: boolean;
    function  Take(var value: TOmniValue): boolean;
    property Current: TOmniValue read GetCurrent;
  end; { IOmniValueEnumerator }

  TOmniBlockingCollectionEnumerator = class(TInterfacedObject, IOmniValueEnumerator)
    constructor Create(collection: TOmniBlockingCollection);
    function  GetCurrent: TOmniValue; inline;
    function  MoveNext: boolean; inline;
    function  Take(var value: TOmniValue): boolean;
    property Current: TOmniValue read GetCurrent;
  end; { TOmniBlockingCollectionEnumerator }

The implementation is trivial.

constructor TOmniBlockingCollectionEnumerator.Create(collection: TOmniBlockingCollection);
begin
  obceCollection_ref := collection;
end; { TOmniBlockingCollectionEnumerator.Create }

function TOmniBlockingCollectionEnumerator.GetCurrent: TOmniValue;
begin
  Result := obceValue;
end; { TOmniBlockingCollectionEnumerator.GetCurrent }

function TOmniBlockingCollectionEnumerator.MoveNext: boolean;
begin
  Result := obceCollection_ref.Take(obceValue);
end; { TOmniBlockingCollectionEnumerator.MoveNext }

function TOmniBlockingCollectionEnumerator.Take(var value: TOmniValue): boolean;
begin
  Result := MoveNext;
  if Result then
    value := obceValue;
end; { TOmniBlockingCollectionEnumerator.Take }

Example

A not-so-simple example of using the blocking collection can be found in demo 34_TreeScan. It uses the blocking collection to scan a tree with multiple parallel threads. This demo works in Delphi 2007 and newer.

A better example of using the blocking collection is in demo 35_ParallelFor. It uses the same approach as demo 34 to scan the tree, except that the code is implemented as an anonymous method, which makes it much simpler than the D2007 version. Of course, this demo works only in Delphi 2009 and above.

This is the full parallel scanner from the 35_ParallelFor demo:

function TfrmParallelForDemo.ParaScan(rootNode: TNode; value: integer): TNode;
var
  cancelToken: IOmniCancellationToken;
  nodeQueue  : IOmniBlockingCollection;
  nodeResult : TNode;
  numTasks   : integer;
begin
  nodeResult := nil;
  cancelToken := CreateOmniCancellationToken;
  numTasks := Environment.Process.Affinity.Count;
  nodeQueue := TOmniBlockingCollection.Create(numTasks);
  nodeQueue.Add(rootNode);
  Parallel.ForEach(nodeQueue as IOmniValueEnumerable)
    .NumTasks(numTasks) // must be the same number of tasks as in nodeQueue to ensure stopping
    .CancelWith(cancelToken)
    .Execute(
      procedure (const elem: TOmniValue)
      var
        childNode: TNode;
        node     : TNode;
      begin
        node := TNode(elem.AsObject);
        if node.Value = value then begin
          nodeResult := node;
          nodeQueue.CompleteAdding;
          cancelToken.Signal;
        end
        else for childNode in node.Children do
          nodeQueue.TryAdd(childNode);
      end);
  Result := nodeResult;
end; { TfrmParallelForDemo.ParaScan }

The code first creates a cancellation token, which will be used to stop the Parallel.ForEach loop. The number of tasks is set to the number of cores accessible from the process, and a blocking collection is created. The resource count for this collection is initialized to the number of tasks (the parameter to TOmniBlockingCollection.Create). The root node of the tree is added to the blocking collection.

Then Parallel.ForEach is called. The IOmniValueEnumerable aspect of the blocking collection is passed to ForEach. Currently, this is the only way to provide ForEach with data. This interface just tells ForEach how to create an enumerator for each worker thread. [At the moment, each worker requires a separate enumerator. This may change in the future.]

  IOmniValueEnumerable = interface ['{50C1C176-C61F-41F5-AA0B-6FD215E5159F}']
    function GetEnumerator: IOmniValueEnumerator;
  end; { IOmniValueEnumerable }

The code also passes the cancellation token to the ForEach loop and starts the parallel execution (the call to Execute). In each parallel task, the following code is executed (copied from the full ParaScan example above):

      procedure (const elem: TOmniValue)
      var
        childNode: TNode;
        node     : TNode;
      begin
        node := TNode(elem.AsObject);
        if node.Value = value then begin
          nodeResult := node;
          nodeQueue.CompleteAdding;
          cancelToken.Signal;
        end
        else for childNode in node.Children do
          nodeQueue.TryAdd(childNode);
      end

The code is provided with one element from the blocking collection (ForEach takes care of that). If the Value field is the value we’re searching for, nodeResult is set, the blocking collection is put into the CompleteAdding state (so that enumerators in other tasks will terminate their blocking wait, if any) and ForEach is cancelled.

Otherwise (it’s not the value we’re looking for), all children of the current node are added to the blocking collection. TryAdd is used (and its return value ignored) because another thread may call CompleteAdding while the for childNode loop is executing.

That’s all! There is a blocking collection into which nodes are put (via the for childNode loop) and from which they are removed (via the ForEach infrastructure). If child nodes are not provided fast enough, the blocking collection will block in Take and one or more tasks may sleep for some time until new values appear. Only when the value is found are the blocking collection and the ForEach loop completed/cancelled.

This is very similar to the code that was my inspiration for writing the blocking collection:

var targetNode = …;
var bc = new BlockingCollection<Node>(startingNodes);
// since we expect GetConsumingEnumerable to block, limit parallelism to the number of
// procs, avoiding too much thread injection
var parOpts = new ParallelOptions() { MaxDegreeOfParallelism = Environment.ProcessorCount };
Parallel.ForEach(bc.GetConsumingEnumerable(), parOpts, (node, loop) =>
{
    if (node == targetNode)
    {
        Console.WriteLine("hooray!");
        bc.CompleteAdding();
        loop.Stop();
    }
    else
    {
        foreach (var neighbor in node.Neighbors) bc.Add(neighbor);
    }
});

However, this C# code exhibits a small problem. If the value is not found in the tree, the code never stops. Why? All tasks eventually block in the Take method (because the complete tree has been scanned) and nobody calls CompleteAdding and loop.Stop. Does the Delphi code contain the very same problem?

Definitely not! That’s exactly why the resource counter was added to the blocking collection!

If the blocking collection is initialized with a number of resources greater than zero, it allocates a resource counter in the constructor. This resource counter is allocated just before a thread blocks in TryTake and released right after. Each blocking wait in TryTake also waits for this resource counter to become signalled. If all threads try to execute the blocking wait, the resource counter drops to zero, signals itself and unblocks all TryTake calls!

This elegant solution has only one problem – the resource counter must be initialized to the number of threads that will be reading from the blocking collection. That’s why in the code above (ParaScan) the same number is passed to the blocking collection constructor (resource counter initialization) and to the ForEach.NumTasks method (number of parallel threads).

Download

TOmniBlockingCollection will be available in OmniThreadLibrary 1.05, which will be released in a few days.

For the impatient, there is the OTL 1.05 Release Candidate. The only changes between the 1.05 RC and the release will be possible bug fixes.


OmniThreadLibrary 1.05 Release Candidate

The next OTL release is coming soon. Brave souls are invited to download and test 1.05 RC. If you find any problems, make sure to report them in the OmniThreadLibrary forum or here in the comments.

OmniThreadLibrary-1.05RC.zip
1.05RC tag in SVN
list of changes


Thursday, February 18, 2010

Dynamic lock-free queue – doing it right

Some history required …

First there was a good idea with a somewhat patchy implementation: Three steps to the blocking collection: [2] Dynamically allocated queue.

Then there was a partial solution, depending on me being able to solve another problem. Still, it was a good solution: Releasing queue memory without the MREW lock.

In the end, the final (actually, the original) problem was also solved: Bypassing the ABA problem.

And now to the results …

This article describes a lock-free, (nearly) O(1) insert/remove, dynamically allocated queue that doesn’t require a garbage collector. It can be implemented on any hardware that supports an 8-byte compare-and-swap operation (in the Intel world, that means at least a Pentium). The code uses an 8-byte atomic move in some parts, but those can easily be changed into an 8-byte CAS if the platform doesn’t support such an operation. In the current implementation, the Move64 (8-byte move) function uses SSE2 instructions and therefore requires a Pentium 4. The code, however, can be conditionally compiled with CAS64 instead of Move64, enabling it to run on Pentiums 1 to 3 (see the notes in the code for more information). The code requires a memory manager that allows memory to be released in a different thread from the one where it was allocated. [Obviously, Windows on the Intel platform satisfies all conditions.]

Although the dynamic queue has been designed with the OmniThreadLibrary (OTL for short) in mind, there’s also a small sample implementation that doesn’t depend on the OTL: GpLockFreeQueue.pas. This implementation can store int64 elements only (or anything you can cast into 8 bytes) while the OTL implementation in OtlContainers stores TOmniValue data. [The latter is a kind of variant record used inside the OTL to store “anything” from a byte to a string/wide string/object/interface.] Because of that, the GpLockFreeQueue implementation is smaller, faster, but slightly more limited. Both are released under the BSD license.

Memory layout

Data is stored in slots. Each slot uses 16 bytes and contains a byte-sized tag, a word-sized offset and up to 13 bytes of data. The implementation in OtlContainers uses all of those 13 bytes to store a TOmniValue while the implementation in GpLockFreeQueue uses only 8 bytes and keeps the rest unused.

The following notation is used to represent a slot: [tag|offset|value].

In reality, the value field comes first in the record because it must be 4-aligned. The reason for that will be revealed in a moment. In GpLockFreeQueue, a slot is defined as:

  TGpLFQueueTaggedValue = packed record
    Value   : int64;
    Tag     : TGpLFQueueTag;
    Offset  : word;
    Stuffing: array [1..5] of byte;
  end; { TGpLFQueueTaggedValue }

Slots do not stand by themselves; they are allocated in blocks. The default block size is 64 KB (4096 slots), but it can be varied from 64 bytes (four slots) to 1 MB (65536 slots). In this article I’ll be using 5-slot blocks, as they are big enough to demonstrate all the nooks and crannies of the algorithm and small enough to fit in one line of text.

During the allocation, each block is formatted as follows:

[Header|0|4] [Sentinel|1|0] [Free|2|0] [Free|3|0] [EndOfList|4|0]

The first slot is marked as a Header and has the value field initialized to “number of slots in the block minus one”. [The highest value that can be stored in the header’s value field is 65535; therefore the maximum number of slots in a block is 65536.] This value is atomically decremented each time a slot is dequeued. When the number drops to zero, the block can be released. (More on that in: Releasing queue memory without the MREW lock.) InterlockedDecrement, which is used to decrement this value, requires its argument to be 4-aligned, and that’s the reason the value field is stored first in the slot.

The second slot is a Sentinel. Slots from the third onwards are tagged Free and are used to store data. The last slot is tagged EndOfList and is used to link two blocks. All slots have the offset field initialized to the sequence number of the slot – in the Header this value is 0, in the Sentinel 1, and so on, up to the EndOfList with the value set to 4 (number of slots in the block minus 1). This value is used in Dequeue to calculate the address of the header slot just before the header’s value is decremented.

In addition to the dynamically allocated (and released) memory blocks, the queue uses head and tail tagged pointers. Both are 8-byte values consisting of two 4-byte fields – slot and tag. The following notation is used to represent a tagged pointer: [slot|tag].

The slot field contains the address of the current head/tail slot while the tag field contains the tag of the current slot. The motivation behind this scheme is explained in the Bypassing the ABA problem post.

Tail and head pointers are modified using 8-byte CAS and Move commands and must therefore be 8-aligned.
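In Delphi terms, the tagged pointer could be declared along these lines (a sketch consistent with the obcHeadPointer.Slot/.Tag usage in the code below; the exact declaration in GpLockFreeQueue may differ):

  TGpLFQueueTaggedPointer = packed record
    Slot    : PGpLFQueueTaggedValue; // 4-byte pointer to the current slot (32-bit code)
    Tag     : TGpLFQueueTag;         // copy of the slot's tag, swapped together with Slot
    Stuffing: array [1..3] of byte;  // pad to 8 bytes; the record itself must be 8-aligned
  end; { TGpLFQueueTaggedPointer }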

By putting all that together, we get a snapshot of the queue state. This is the initial state of a queue with five-slot blocks:

Head:[B1:2|Free]
Tail:[B1:1|Sentinel]
B1:[Header|0|4] T:[Sentinel|1|0] H:[Free|2|0] [Free|3|0] [EndOfList|4|0]

The memory block begins at address B1 and contains five slots, initialized as described before. The tail pointer points to the second slot of block B1 (B1:1; I’m using the form address:offset), which is tagged Sentinel, and the head pointer points to the third slot (B1:2), the first Free slot. Here we see the sole reason for the Sentinel – it stands between the tail and the head when the queue is empty.

Enqueue

In theory, the enqueue operation is simple. The element is stored in the next available slot and queue head is advanced. In practice, however, multithreading makes things much more complicated.

To prevent thread conflicts, each enqueueing thread must first take ownership of the head. It does this by swapping the queue head tag from Free to Allocating, or from EndOfList to Extending. To prevent ABA problems, the head pointer and the head tag are swapped with the same head pointer and the new tag in one atomic 8-byte compare-and-swap.

Enqueue then does its work and at the end swaps (head pointer, tag) to (next head pointer, Free|EndOfList), which allows other threads to proceed with their enqueue operations.
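For reference, an 8-byte compare-and-swap of this kind can be built around the LOCK CMPXCHG8B instruction. This is a sketch matching the CAS64(slot, tag, newSlot, newTag, destination) calls in the code below, not necessarily the exact OTL implementation:

  function CAS64(const oldData: pointer; oldReference: cardinal;
    newData: pointer; newReference: cardinal; var destination): boolean;
  asm
    // register convention: EAX = oldData, EDX = oldReference, ECX = newData;
    // newReference and destination are passed on the stack
    push  edi
    push  ebx
    mov   ebx, ecx          // ECX:EBX must hold newReference:newData
    mov   ecx, newReference
    mov   edi, destination  // EDI -> 8-byte destination (must be 8-aligned)
    lock cmpxchg8b [edi]    // if destination = EDX:EAX then destination := ECX:EBX
    setz  al                // ZF set -> swap succeeded
    pop   ebx
    pop   edi
  end; { CAS64 }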

Let’s start with the empty list.

Head:[B1:2|Free]
Tail:[B1:1|Sentinel]
B1:[Header|0|4] T:[Sentinel|1|0] H:[Free|2|0] [Free|3|0] [EndOfList|4|0]

Enqueue first swaps [B1:2|Free] with [B1:2|Allocating].

Head:[B1:2|Allocating]
Tail:[B1:1|Sentinel]
B1:[Header|0|4] T:[Sentinel|1|0] H:[Free|2|0] [Free|3|0] [EndOfList|4|0]

The green colour indicates an atomic change.

Only the head tag has changed; the data in the B1 memory block is not modified. The head still points to a slot tagged Free (slot B1:2). This is fine, as enqueueing threads don’t take any interest in that slot tag.

Data is then stored in the slot and its tag is changed to Allocated. This again means nothing to the enqueuers, as the head pointer has not been updated yet. It also doesn’t allow the dequeue operation on this slot to proceed, because the head is adjacent to the tail, which points to a Sentinel – and in this case Dequeue treats the queue as empty (as we’ll see later).

Head:[B1:2|Allocating]
Tail:[B1:1|Sentinel]
B1:[Header|0|4] T:[Sentinel|1|0] H:[Allocated|2|42] [Free|3|0] [EndOfList|4|0]

Red colour marks “unsafe” modification.

At the end, the head is unlocked by storing the address of the next slot (the first free slot, B1:3) and that slot’s tag (Free).

Head:[B1:3|Free]
Tail:[B1:1|Sentinel]
B1:[Header|0|4] T:[Sentinel|1|0] [Allocated|2|42] H:[Free|3|0] [EndOfList|4|0]

Teal colour marks an atomic 8-byte move, used to store the new data into the head pointer. If the target platform doesn’t support such a move, an 8-byte CAS could be used instead.

After those changes, the head points to the next free slot and the data is stored in the queue.

Let’s assume that another Enqueue is called and stores number 17 in the queue. Nothing new happens here.

Head:[B1:4|EndOfList]
Tail:[B1:1|Sentinel]
B1:[Header|0|4] T:[Sentinel|1|0] [Allocated|2|42] [Allocated|3|17] H:[EndOfList|4|0]

The next Enqueue must do something new, as there are no free slots in the current block. To extend the queue, the thread first swaps the EndOfList tag with the Extending tag. By doing this, the thread takes ownership of the queue head.

Head:[B1:4|Extending]
Tail:[B1:1|Sentinel]
B1:[Header|0|4] T:[Sentinel|1|0] [Allocated|2|42] [Allocated|3|17] H:[EndOfList|4|0]

A new block gets allocated and initialized (see chapter on memory management, below).

Head:[B1:4|Extending]
Tail:[B1:1|Sentinel]
B1:[Header|0|4] T:[Sentinel|1|0] [Allocated|2|42] [Allocated|3|17] H:[EndOfList|4|0]
B2:[Header|0|4] [Sentinel|1|0] [Free|2|0] [Free|3|0] [EndOfList|4|0]

Data is stored in the first free slot of the block B2.

Head:[B1:4|Extending]
Tail:[B1:1|Sentinel]
B1:[Header|0|4] T:[Sentinel|1|0] [Allocated|2|42] [Allocated|3|17] H:[EndOfList|4|0]
B2:[Header|0|4] [Sentinel|1|0] [Allocated|2|57] [Free|3|0] [EndOfList|4|0]

The last slot of block B1 is modified to point to the second slot of the next block (the Sentinel), and the BlockPointer tag is stored into that slot.

Head:[B1:4|Extending]
Tail:[B1:1|Sentinel]
B1:[Header|0|4] T:[Sentinel|1|0] [Allocated|2|42] [Allocated|3|17] H:[BlockPointer|4|B2:1]
B2:[Header|0|4] [Sentinel|1|0] [Allocated|2|57] [Free|3|0] [EndOfList|4|0]

At the end, the head is updated to point to the first free slot (B2:3).

Head:[B2:3|Free]
Tail:[B1:1|Sentinel]
B1:[Header|0|4] T:[Sentinel|1|0] [Allocated|2|42] [Allocated|3|17] [BlockPointer|4|B2:1]
B2:[Header|0|4] [Sentinel|1|0] [Allocated|2|57] H:[Free|3|0] [EndOfList|4|0]

That completes the Enqueue. The list head is now unlocked.

The actual code is no more complicated than this description (code taken from GpLockFreeQueue).

procedure TGpLockFreeQueue.Enqueue(const value: int64);
var
  extension: PGpLFQueueTaggedValue;
  next     : PGpLFQueueTaggedValue;
  head     : PGpLFQueueTaggedValue;
begin
  repeat
    head := obcHeadPointer.Slot;
    if (obcHeadPointer.Tag = tagFree)
       and CAS64(head, Ord(tagFree), head, Ord(tagAllocating), obcHeadPointer^)
    then
      break //repeat
    else if (obcHeadPointer.Tag = tagEndOfList)
            and CAS64(head, Ord(tagEndOfList), head, Ord(tagExtending), obcHeadPointer^)
    then
      break //repeat
    else // very temporary condition, retry quickly
      asm pause; end;
  until false;
  if obcHeadPointer.Tag = tagAllocating then begin // enqueueing
    next := NextSlot(head);
    head.Value := value;
    head.Tag := tagAllocated;
    Move64(next, Ord(next.Tag), obcHeadPointer^); // release the lock
  end
  else begin // allocating memory
    extension := AllocateBlock; // returns pointer to the header
    Inc(extension, 2);          // move over header and sentinel to the first data slot
    extension.Tag := tagAllocated;
    extension.Value := value;
    Dec(extension);             // forward reference points to the sentinel
    head.Value := int64(extension);
    head.Tag := tagBlockPointer;
    Inc(extension, 2);          // get to the first free slot
    Move64(extension, Ord(extension.Tag), obcHeadPointer^); // release the lock
    PreallocateMemory;          // preallocate memory block
  end;
end; { TGpLockFreeQueue.Enqueue }

Dequeue

Enqueue is simple, but Dequeue is a whole new bag of problems. It has to handle the Sentinel slot, and because of that there are five possible scenarios:

  1. Skip the sentinel.
  2. Read the data (tail doesn’t catch the head).
  3. Read the data (tail does catch the head).
  4. The queue is empty.
  5. Follow the BlockPointer tag.

To prevent thread conflicts, the dequeueing thread takes ownership of the tail. It does this by swapping the tail tag from Allocated or Sentinel to Removing, or from BlockPointer to Destroying. Again, those changes are done atomically, by swapping both the tail pointer and the tail tag in one go.

Let’s walk through all five scenarios now.

1 – Skip the sentinel

Let’s start with a queue state where two slots are allocated and head points to the EndOfList slot.

Head:[B1:4|EndOfList]
Tail:[B1:1|Sentinel]
B1:[Header|0|4] T:[Sentinel|1|0] [Allocated|2|42] [Allocated|3|17] H:[EndOfList|4|0]

The code first locks the tail.

Head:[B1:4|EndOfList]
Tail:[B1:1|Removing]
B1:[Header|0|4] T:[Sentinel|1|0] [Allocated|2|42] [Allocated|3|17] H:[EndOfList|4|0]

As there is no data in the Sentinel slot, the tail is immediately updated to point to the next slot.

Head:[B1:4|EndOfList]
Tail:[B1:2|Allocated]
B1:[Header|0|4] [Sentinel|1|0] T:[Allocated|2|42] [Allocated|3|17] H:[EndOfList|4|0]

There’s no need to update the tag in slot 1, as no other thread can reach it again. Because the slot is now unreachable, the code decrements the count in B1’s Header slot (from 4 to 3).

Head:[B1:4|EndOfList]
Tail:[B1:2|Allocated]
B1:[Header|0|3] [Sentinel|1|0] T:[Allocated|2|42] [Allocated|3|17] H:[EndOfList|4|0]

Because the original tag was Sentinel, the code immediately retries from the beginning. The queue is now in scenario 2 (there is data and the tail is not immediately before the head).

2 - Read the data (tail doesn’t catch the head)

Again, the tail is locked.

Head:[B1:4|EndOfList]
Tail:[B1:2|Removing]
B1:[Header|0|3] [Sentinel|1|0] T:[Allocated|2|42] [Allocated|3|17] H:[EndOfList|4|0]

The code then reads the value from the slot (42) and advances the tail to the slot B1:3.

Head:[B1:4|EndOfList]
Tail:[B1:3|Allocated]
B1:[Header|0|3] [Sentinel|1|0] [Allocated|2|42] T:[Allocated|3|17] H:[EndOfList|4|0]

Again, there is no need to change the slot tag. The slot 2 is now unreachable and the Header count is decremented.

Head:[B1:4|EndOfList]
Tail:[B1:3|Allocated]
B1:[Header|0|2] [Sentinel|1|0] [Allocated|2|42] T:[Allocated|3|17] H:[EndOfList|4|0]

The code has retrieved the data and can now return from the Dequeue method.

3 - Read the data (tail does catch the head)

If Dequeue is now called for the second time, we have scenario 3 – there is data in the queue, but the head pointer is next to the tail pointer. Because of that, the tail cannot be incremented. Instead, the code replaces the tail slot tag with the Sentinel.

It is entirely possible that the head will change the very next moment, which would mean that the Sentinel was not really needed. Luckily, that doesn’t hurt much – the next Dequeue will skip the Sentinel, retry, and fetch the next element from the queue.

The code starts in a well-known manner, by taking ownership of the tail.

Head:[B1:4|EndOfList]
Tail:[B1:3|Removing]
B1:[Header|0|2] [Sentinel|1|0] [Allocated|2|42] T:[Allocated|3|17] H:[EndOfList|4|0]

The code then reads the value from the slot but, because the head was next to the tail when Dequeue was called, it doesn’t increment the tail and doesn’t decrement the Header counter. Instead, the Sentinel tag is put into the tail tag.

Head:[B1:4|EndOfList]
Tail:[B1:3|Sentinel]
B1:[Header|0|2] [Sentinel|1|0] [Allocated|2|42] T:[Allocated|3|17] H:[EndOfList|4|0]

It doesn’t matter that the slot tag is still Allocated as no-one will read it again.

4 - The queue is empty

If Dequeue were called now, it would return immediately with the status “empty”, because the tail tag is Sentinel and the tail has caught the head.

5 - Follow the BlockPointer tag

In the last scenario, the tail is pointing to a BlockPointer.

Head:[B2:3|Free]
Tail:[B1:4|EndOfList]
B1:[Header|0|1] [Sentinel|1|0] [Allocated|2|42] [Allocated|3|17] T:[BlockPointer|4|B2:1]
B2:[Header|0|4] [Sentinel|1|0] [Allocated|2|57] H:[Free|3|0] [EndOfList|4|0]

As expected, the code first takes ownership of the tail.

Head:[B2:3|Free]
Tail:[B1:4|Destroying]
B1:[Header|0|1] [Sentinel|1|0] [Allocated|2|42] [Allocated|3|17] T:[BlockPointer|4|B2:1]
B2:[Header|0|4] [Sentinel|1|0] [Allocated|2|57] H:[Free|3|0] [EndOfList|4|0]

We know that the first slot in the next block is a Sentinel. We also know that the head is not pointing to this slot, because that’s how Enqueue works (when a new block is allocated, the head points to the first slot after the Sentinel). Therefore, it is safe to update the tail to point to the Sentinel slot of block B2.

Head:[B2:3|Free]
Tail:[B2:1|Sentinel]
B1:[Header|0|1] [Sentinel|1|0] [Allocated|2|42] [Allocated|3|17] [BlockPointer|4|B2:1]
B2:[Header|0|4] T:[Sentinel|1|0] [Allocated|2|57] H:[Free|3|0] [EndOfList|4|0]

By doing the swap, the ownership of the tail is released.

The Header count is then decremented.

Head:[B2:3|Free]
Tail:[B2:1|Sentinel]
B1:[Header|0|0] [Sentinel|1|0] [Allocated|2|42] [Allocated|3|17] [BlockPointer|4|B2:1]
B2:[Header|0|4] T:[Sentinel|1|0] [Allocated|2|57] H:[Free|3|0] [EndOfList|4|0]

Because the count is now zero, the code destroys the B1 block. (Note that the Header count decrement is atomic, so only one thread can actually reach zero.) While the block is being destroyed, other threads may be calling Dequeue.

Head:[B2:3|Free]
Tail:[B2:1|Sentinel]
B2:[Header|0|4] T:[Sentinel|1|0] [Allocated|2|57] H:[Free|3|0] [EndOfList|4|0]

Because the tail tag was originally BlockPointer, the code retries immediately and continues with the scenario 1.

The actual code is tricky, because some code paths are shared between scenarios (code taken from GpLockFreeQueue).

function TGpLockFreeQueue.Dequeue(var value: int64): boolean;
var
  caughtTheHead: boolean;
  tail         : PGpLFQueueTaggedValue;
  header       : PGpLFQueueTaggedValue;
  next         : PGpLFQueueTaggedValue;
  tag          : TGpLFQueueTag;
begin
  tag := tagSentinel;
  Result := true;
  while Result and (tag = tagSentinel) do begin
    repeat
      tail := obcTailPointer.Slot;
      caughtTheHead := NextSlot(obcTailPointer.Slot) = obcHeadPointer.Slot;
      if (obcTailPointer.Tag = tagAllocated)
         and CAS64(tail, Ord(tagAllocated), tail, Ord(tagRemoving), obcTailPointer^) then
      begin
        tag := tagAllocated;
        break; //repeat
      end
      else if (obcTailPointer.Tag = tagSentinel) then begin
        if caughtTheHead then begin
          Result := false;
          break; //repeat
        end
        else if CAS64(tail, Ord(tagSentinel), tail, Ord(tagRemoving), obcTailPointer^) then begin
          tag := tagSentinel;
          break; //repeat
        end
      end
      else if (obcTailPointer.Tag = tagBlockPointer)
              and CAS64(tail, Ord(tagBlockPointer), tail, Ord(tagDestroying), obcTailPointer^) then
      begin
        tag := tagBlockPointer;
        break; //repeat
      end
      else
        asm pause; end;
    until false;
    if Result then begin // dequeueing
      header := tail;
      Dec(header, header.Offset);
      if tag in [tagSentinel, tagAllocated] then begin
        next := NextSlot(tail);
        if tag = tagAllocated then // sentinel doesn't contain any useful value
          value := tail.Value;
        if caughtTheHead then begin
          // release the lock; as this is the last element, don't move forward
          Move64(tail, Ord(tagSentinel), obcTailPointer^);
          header := nil; // do NOT decrement the counter; this slot will be retagged again
        end
        else
          Move64(next, Ord(next.Tag), obcTailPointer^); // release the lock
      end
      else begin // releasing memory
        next := PGpLFQueueTaggedValue(tail.Value); // next points to the sentinel
        Move64(next, Ord(tagSentinel), obcTailPointer^); // release the lock
        tag := tagSentinel; // retry
      end;
      if assigned(header) and (InterlockedDecrement(PInteger(header)^) = 0) then
        ReleaseBlock(header);
    end;
  end; //while Result and (tag = tagSentinel)
end; { TGpLockFreeQueue.Dequeue }

Memory management

In the dynamic queue described above, special consideration goes to memory allocation and deallocation because most of the time that will be the slowest part of the enqueue/dequeue.

Memory is always released after the queue tail is unlocked. That way, other threads may dequeue from the same queue while the thread is releasing the memory.

Allocation is trickier, because Enqueue only finds out that it needs memory after the head is locked. The trick here is to use one preallocated memory block which is reused inside Enqueue. This is much faster than calling the allocator. After the head is unlocked, Enqueue preallocates the next block of memory. This slows down the current thread a little, but it doesn’t block other threads from enqueueing into the same queue.

Dequeue also tries to help with that. If the preallocated block is not present when a block must be released, Dequeue stores the released block away for the next Enqueue to use.

Also, there's one such block preallocated when the queue is initially created.

If this explanation is unclear, look at the program flow below. It describes the code flow through an Enqueue that has to allocate a memory block and through a Dequeue that has to release one. Identifiers in parentheses refer to the methods listed below.

Enqueue:

  • lock the head
  • detect EndOfList
  • use the cached block if available, otherwise allocate a new block (AllocateBlock)
  • unlock the head
  • if there is no cached block, allocate new block and store it away (PreallocateMemory)

Dequeue:

  • lock the tail
  • process last slot in the block
  • unlock the tail
  • decrement the header count
  • when the header count has dropped to zero:
    • if the cached block is empty, store this one away (ReleaseBlock)
    • otherwise release the block

All manipulations of the cached block are done atomically. All allocations are optimistic – if the preallocated block is empty, a new memory block is allocated and partitioned, and only then does the code try to swap it into the preallocated-block variable. If the compare-and-swap fails at this point, another thread went through the same routine just slightly faster, and the allocated (and partitioned) block is thrown away. It looks like quite some work may be done in vain, but in reality the preallocated block is rarely thrown away.

I tested other, more complicated schemes (for example, a small 4-slot stack) but they invariably behaved worse than this simple approach.

function TGpLockFreeQueue.AllocateBlock: PGpLFQueueTaggedValue;
var
  cached: PGpLFQueueTaggedValue;
begin
  cached := obcCachedBlock;
  if assigned(cached) and CAS32(cached, nil, obcCachedBlock) then
    Result := cached
  else begin
    Result := AllocMem(obcBlockSize);
    PartitionMemory(Result);
  end;
end; { TGpLockFreeQueue.AllocateBlock }

procedure TGpLockFreeQueue.PreallocateMemory;
var
  memory: PGpLFQueueTaggedValue;
begin
  if not assigned(obcCachedBlock) then begin
    memory := AllocMem(obcBlockSize);
    PartitionMemory(memory);
    if not CAS32(nil, memory, obcCachedBlock) then
      FreeMem(memory);
  end;
end; { TGpLockFreeQueue.PreallocateMemory }

procedure TGpLockFreeQueue.ReleaseBlock(firstSlot: PGpLFQueueTaggedValue; forceFree: boolean);
begin
  if forceFree or assigned(obcCachedBlock) then
    FreeMem(firstSlot)
  else begin
    ZeroMemory(firstSlot, obcBlockSize);
    PartitionMemory(firstSlot);
    if not CAS32(nil, firstSlot, obcCachedBlock) then
      FreeMem(firstSlot);
  end;
end; { TGpLockFreeQueue.ReleaseBlock }

As you can see in the code fragments above, memory is also initialized (formatted into slots) at allocation time. This, too, helps the general performance.
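PartitionMemory itself is not shown above; here is a sketch of what it has to do, following the “Memory layout” section (the obcNumSlots field is an assumed name for the number of slots in a block):

  procedure TGpLockFreeQueue.PartitionMemory(memory: PGpLFQueueTaggedValue);
  var
    iSlot: integer;
    slot : PGpLFQueueTaggedValue;
  begin
    // memory comes from AllocMem and is therefore zero-filled
    slot := memory;
    slot.Tag := tagHeader;
    slot.Value := obcNumSlots - 1; // block is freed when this count drops to zero
    slot.Offset := 0;
    Inc(slot);
    slot.Tag := tagSentinel; // separates tail from head in an empty queue
    slot.Offset := 1;
    Inc(slot);
    for iSlot := 2 to obcNumSlots - 2 do begin
      slot.Tag := tagFree;
      slot.Offset := iSlot;
      Inc(slot);
    end;
    slot.Tag := tagEndOfList; // will link to the next block when the queue grows
    slot.Offset := obcNumSlots - 1;
  end; { TGpLockFreeQueue.PartitionMemory }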

Performance

Tests were again performed using the 32_Queue project in the Tests branch of the OTL tree.

The test framework sets up the following data path:

source queue –> N threads –> channel queue –> M threads –> destination queue

The source queue is filled with the numbers from 1 to 1,000,000. Then 1 to 8 threads are set up to read from the source queue and write into the channel queue, and another 1 to 8 threads are set up to read from the channel queue and write to the destination queue. The application then starts the clock and starts all threads. When all numbers have been moved to the destination queue, the clock is stopped and the contents of the destination queue are verified. Thread creation time is not included in the measured time.

All in all, this results in 2 million reads and 2 million writes distributed over three queues. The tests are very brutal, as all threads are just hammering on the queues, doing nothing else. The table below contains the average, minimum and maximum times of 5 runs on a 2.67 GHz computer with two 4-core CPUs. Data from the current implementation (“new code”) is compared to the original implementation (“old code”). The better average in each row is marked with an asterisk.

               New code                       Old code
               average [min-max]    Mops/s    average [min-max]    Mops/s
  N = 1, M = 1  590 [559-682] *      6.78      707 [566-834]        5.66
  N = 2, M = 2  838 [758-910] *      4.77      996 [950-1031]       4.02
  N = 3, M = 3 1095 [1054-1173]      3.65     1065 [1055-1074] *    3.76
  N = 4, M = 4 1439 [1294-1535]      2.78     1313 [1247-1358] *    3.04
  N = 8, M = 8 1674 [1303-2217]      2.39     1520 [1482-1574] *    2.63
  N = 1, M = 7 1619 [1528-1822] *    2.47     3880 [3559-4152]      1.03
  N = 7, M = 1 1525 [1262-1724]      2.62     1314 [1299-1358] *    3.04

  (all times in milliseconds; Mops/s = millions of queue operations per second)

The new implementation is faster when fewer threads are used and slightly slower when the number of threads increases. The best thing is that there is no weird speed drop in the N = 1, M = 7 case. The small slowdown with a higher number of threads doesn’t bother me much, as this test case really stresses the queue. In all practical applications there should be much more code that does real work, and the queue load would drop rapidly.

If your code depends on accessing a shared queue from many threads that enqueue/dequeue most of the time, there’s a simple solution – change the code! I believe that multithreaded code should not fight over each piece of data, but cooperate. A possible solution is to split the data into packets and schedule the packets to the shared queue. Each thread would then dequeue one packet and process all the data stored within.

Wrapup

The code will be released in OmniThreadLibrary 1.05 (but you can use it already if you fetch the HEAD from the SVN). It has passed a very rigorous stress test and I believe it is working. If you find any problems, please let me know. I’m also interested in any ports to different languages (a C version would be nice).


Wednesday, February 10, 2010

Bypassing the ABA problem

On Saturday my wife and I visited a classical music concert. Although I like this kind of music, the particular combination of instruments (harp and violin) didn’t really interest me that much, especially when they played modernist Slovenian composers. [Debussy, on the other hand, was superb.]

Anyway, I got submerged in the music, half of my brain switched off, and then I got all sorts of weird programming ideas. The first was how to solve the ABA problem in the initial dynamic queue implementation. [A total failure, that idea; it didn’t work at all.] The second, however, proved to be very useful, as it solved the memory release problem (provided that ABA gets fixed, of course).

The new memory release scheme brought with it a new strength of will. If I had solved that one then maybe, just maybe, I could also solve the ABA problem, I thought to myself, and returned to the code. And then it dawned on me …

The problem with the initial implementation was that the head/tail pointer and the corresponding tag were accessed asynchronously. In a multithreaded environment, that is always a problem. Somehow I had to modify them at the same time, but that didn’t look feasible, as the tag lived in the dynamically allocated block while the head/tail pointer was stored in the object itself. I couldn’t put the head/tail into the block, but I could put a tag next to the head/tail pointer! [A copy of the tag, actually, as I still needed the tags stored in the data block.] Then I could use an 8-byte compare-and-swap to change both the pointer and the tag at the same time!

There was one problem, though. In the initial implementation, the tail was allowed to catch the head. If that happened with the new scheme, both head and tail pointers would be the same (and pointing to a tagFree slot), but the first enqueue operation would only modify the head tag, although the tail tag would in reality also change! It seemed like I was just pushing the ABA problem from place to place :(

Still, there is a simple (at least for some values of that word) solution to such problems – introduce a sentinel. This is a special element signifying that some pointer (the tail, in my case) has reached the end of the list. A good idea, but could it be made to work?

I fired up my trusty spreadsheet (very good stuff for simulations) and in a few hours I had the basic plan laid out.

[image: the spreadsheet simulation]

[Yes, that’s the picture from yesterday’s teaser.]

It was really simple to convert this into code. After fixing a few bugs, I had the new queue running, faster than ever before!

I’ll put together a long article describing all the tricks inside the new dynamic queue, but that will take some time, sorry. In the meantime, you can check out the current OtlContainers and read the pseudo-code documentation.

TOmniQueue
===============

tags:
  tagFree
  tagAllocating
  tagAllocated
  tagRemoving
  tagEndOfList
  tagExtending
  tagBlockPointer
  tagDestroying
  tagHeader
  tagSentinel

header contains:
  head
    slot = 4 bytes
    tag  = 4 bytes
  tail
    slot = 4 bytes
    tag  = 4 bytes
all are 4-aligned

slot contains:
  TOmniValue = 13 bytes
  tag        = 1 byte
  offset     = 2 bytes
TOmniValues are 4-aligned

block is initialized to:
[tagHeader, num slots - 1, 0] [tagSentinel, 0, 1] [tagFree, 0, 2] .. [tagFree, 0, num slots - 2] [tagEndOfList, 0, num slots - 1]

Enqueue:
  repeat
      tail = header.tail.slot
      old_tag = header.tail.tag
      if header.tail.CAS(tail, tagFree, tail, tagAllocating) then
          tail.tag = tagAllocating
          break
      else if header.tail.CAS(tail, tagEndOfList, tail, tagExtending) then
          tail.tag = tagExtending
          break
      else
          yield
  forever
  if old_tag = tagFree then
      store <value, tagAllocated> into slot
      header.tail.CAS(tail, tagAllocating, NextSlot(tail), NextSlot(tail).tag)
  else
      allocate block // from cache, if possible
      next = second data slot in the new block
      set next to <tagAllocated, value>
      set last slot in the original block to <new block address, tagBlockPointer>
      header.tail.CAS(tail, tagExtending, next, next.tag)
      // queue is now unlocked
      preallocate block

Dequeue:
  repeat
      if header.head.tag = tagFree then
          return false
      head = header.head.slot
      old_tag = header.head.tag
      caughtTheTail := NextSlot(header.head.slot) = header.tail.slot;
      if header.head.CAS(head, tagAllocated, head, tagRemoving) then
          head.tag = tagRemoving
          break
      else if header.head.Tag = tagSentinel then
          if caughtTheTail then
              return false
          else if header.head.CAS(head, tagSentinel, head, tagRemoving) then
              head.tag = tagRemoving
              break
      else if header.head.CAS(head, tagBlockPointer, head, tagDestroying) then
          head.tag = tagDestroying
          break
      else
          yield
  forever
  firstSlot = head - head.Offset // point to first slot
  if old_tag in [tagSentinel, tagAllocated] then
      next = NextSlot(head)
      if tag = tagAllocated then
          fetch stored value
      if caughtTheTail then
          header.head.CAS(head, tagRemoving, head, tagSentinel)
          firstSlot = nil // do not decrement the header counter
      else
          header.head.CAS(head, tagRemoving, next, next.tag)
  else
      next = head.value // points to the next block's sentinel
      header.head.CAS(head, tagDestroying, next, tagSentinel)
      old_tag = tagSentinel // force retry
  // queue is now unlocked
  if assigned(firstSlot) and (InterlockedDecrement(firstSlot.value) = 0) then
      release block
  if old_tag = tagSentinel
      retry from beginning


Technical problems

For the last two days, The Delphi Geek was down due to a faulty hard drive and a RAID that didn’t want to rebuild itself when a new drive was inserted :(

Data has been restored from the latest backup, the last two articles were reposted, and now everything should be in order. If you find any problems, please notify me in the comments.

I apologize for the inconvenience.

Labels:

Sunday, February 07, 2010

The ABA problem


I have discovered a truly remarkable solution which this margin is too small to contain.


Jokes aside, I’m running the stress test suite now and the results look good. The queue is even faster than before. If the test survives 12 hours, I’ll check the code into the SVN. (And then I’ll write the article. Promise.)

Labels:

Releasing queue memory without the MREW lock

I know how to implement a no-wait release in my dynamic queue if the ABA problem somehow gets solved. (I also had some ideas on how to solve the ABA problem if the tail pointer never catches the head one. But that’s still very much in the design phase.)

Each block gets a header element (with a tagHeader tag). Each slot in the block uses the previously unused bytes (stuffing) to store its position (index) inside the block.

[Header|0|1023] [Free|1|0] … [Free|1022|0] [EndOfList|1023|0]

The number of all not-yet-released slots is stored in the header’s value field.

The second part of the Dequeue code is changed:

if tag = tagAllocated then
    get value
    increment tail
else
    set tail to new block's slot 1
    get value
// new code that executes in all code paths:
interlocked decrement number of not-yet-released slots in the header
if the decrement resulted in value 0, release the block

So what is going on here?

  • The additional index stored in the previously unused bytes is used to quickly jump to the header slot.
  • Decrement-and-test is executed last in the Dequeue code and we know that the code won’t reference the current block anymore. Therefore it is safe to release the block at this point (see the sketch after this list).
  • Every change (tagAllocated –> tagRemoving, tagBlockPointer –> tagDestroying) decrements this counter. Only when all tags are set to tagRemoving/tagDestroying is the block destroyed.
  • We know that at this point no other thread may be referencing the block (because it has already decremented the counter – otherwise the counter wouldn’t be 0 yet).
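
To make that concrete, here is a rough Delphi sketch of the decrement-and-release step. The types and field layout are illustrative only (the real 16-byte TOmniTaggedValue stores a 13-byte TOmniValue); InterlockedDecrement comes from the Windows unit:

uses
  Windows;

type
  POmniTaggedValue = ^TOmniTaggedValue;
  TOmniTaggedValue = record // illustrative layout, not the real declaration
    Tag   : byte;    // tagFree, tagAllocated, tagHeader, ...
    Offset: word;    // index of this slot inside its block
    Value : integer; // in the header slot: count of not-yet-released slots
  end;

procedure SlotProcessed(slot: POmniTaggedValue);
var
  header: POmniTaggedValue;
begin
  // jump back to the header slot (index 0) using the stored index
  header := slot;
  Dec(header, slot.Offset);
  // every processed slot decrements the not-yet-released counter; the thread
  // that brings it to zero is the last user and may free the whole block
  if InterlockedDecrement(header.Value) = 0 then
    FreeMem(header);
end;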

I tested this approach by putting the initial tag switching (tagFree –> tagAllocating etc.) into a critical section, thus bypassing the ABA problem. It works, but the critical section really killed the performance when the number of threads got high (the N = 4, M = 4 case worked well, N = 8, M = 8 did not).

Labels:

Thursday, February 04, 2010

Three steps to the blocking collection: [2] Dynamically allocated queue

[Step one: Inverse semaphore.]

When I started thinking about the blocking collection internals (see step one), two facts became fairly obvious:

  • The underlying data storage should be some kind of a queue. Blocking collection only needs the data storage to support enqueue (Add) and dequeue (Take).
  • The underlying data storage should be allocated dynamically. Users will be using blocking collection on structures for which the size cannot be determined quickly (trees, for example) and therefore the code cannot preallocate »just big enough« data storage.

I was all set on using a locking queue for the data storage, but then I had an idea about how to implement a dynamically allocated queue with microlocking. Instead of locking the whole queue, each thread would lock only one element, and only for such a short time that other competing threads would just busy-wait (spin in a tight loop).

Then the usual thing happened. I ran into the ABA problem. And I solved it – in a way. The queue works, behaves well and is extremely useful. It’s just not as perfect as I thought it would be.

So here it is – a microlocking, (mostly) O(1) insert/remove, dynamically allocated queue with a garbage collector. All yours for a measly 16 bytes per unit of data. Hey, you have to pay the price at some point!

[I “discovered” this approach all by myself. That doesn’t mean that this is an original work; most probably this is just a variation of some well known method. If you know of any similar approach, practical or only theoretical, please post the link in comments.]

Tagged elements

The basic queue element is made of two parts, a tag and a value. As this implementation is to be used in the OmniThreadLibrary framework, the value is represented by a TOmniValue record. This record can handle almost anything from a byte to an int64, and can also store interfaces. The only downside is that it uses 13 bytes of memory.

As the tag uses only one byte and 1+13 = 14, which is not an elegant value, a queue element also contains two unused bytes. That rounds its size to a pretty 16 bytes. [I’m joking, of course. There is a very good reason why the size must be divisible by 4. I’ll come back to that later.]

type
  TOmniQueueTag = (tagFree, tagAllocating, tagAllocated, tagRemoving,
    tagEndOfList, tagExtending, tagBlockPointer, tagDestroying);

  TOmniTaggedValue = packed record
    Tag     : TOmniQueueTag;
    Stuffing: word;
    Value   : TOmniValue;
  end;

In the following exposition, I’ll use the shorthand [tag|value] to represent an instance of the TOmniTaggedValue record.

Queue data is managed in blocks. The size of a block is 64 KB. Divide this by 16 and you’ll find that a block contains 4096 elements. Upon allocation, each block is formatted as

[Free|0] [Free|0] … [Free|0] [EndOfList|0]

In other words, the block is mostly initialized to zero (as Ord(tagFree) = 0). When the queue object is created, one block is allocated and both tail and head pointers point to the first element. [In the diagrams below, H: marks the head – the enqueue end – and T: marks the tail – the dequeue end.]

H:T:[Free|0] [Free|0] … [Free|0] [EndOfList|0]
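
In code, allocating and formatting such a block could look roughly like this (a sketch built on the TOmniTaggedValue record above; the real implementation also reuses cached blocks):

const
  CBlockSize = 65536; // 64 KB
  CNumSlots  = CBlockSize div SizeOf(TOmniTaggedValue); // 4096 slots

type
  TOmniBlock = array [0..CNumSlots - 1] of TOmniTaggedValue;
  POmniBlock = ^TOmniBlock;

function AllocateBlock: POmniBlock;
begin
  GetMem(Result, CBlockSize);
  // Ord(tagFree) = 0, so zero-filling marks every slot as [Free|0]
  FillChar(Result^, CBlockSize, 0);
  // the last slot terminates the block
  Result^[CNumSlots - 1].Tag := tagEndOfList;
end;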


Enqueue

The first thing Enqueue does is to lock the head element. To do this, it first checks whether the head points to a tagFree or tagEndOfList tag. If that’s not the case, another thread has just locked this element and the current thread must wait a little and retry.

Then it (atomically!) swaps the current tag value with either tagAllocating (if the previous value was tagFree) or tagExtending (if it was tagEndOfList). If this atomic swap fails, another thread has overtaken this one and the thread has to retry from the beginning.

But let’s assume that the tag was properly swapped. We now have the following situation:

H:T:[Allocating|0] [Free|0] … [Free|0] [EndOfList|0]

The thread then increments the head pointer to the next slot …

T:[Allocating|0] H:[Free|0] … [Free|0] [EndOfList|0]

… and stores [Allocated|value] in the slot it previously locked.

T:[Allocated|value] H:[Free|0] … [Free|0] [EndOfList|0]

That completes the Enqueue.

In pseudocode:

repeat
    fetch tag from current head
    if tag = tagFree and CAS(tag, tagAllocating) then
        break
    if tag = tagEndOfList and CAS(tag, tagExtending) then
        break
    yield 
forever 
if tag = tagFree then 
    increment head 
    store (tagAllocated, value) into locked slot 
else 
    // ignore this for a moment

Let’s think about possible problems.


  1. Two (or more) threads can simultaneously find out that head^.tag = tagFree and try to swap in tagAllocating. As this is implemented using atomic compare-and-swap, one thread will succeed and another will fail and retry. No problem here.

  2. The thread increments the head pointer. At that moment it is suspended and another thread calls the Enqueue. The second thread finds that the head points to a free element and continues with execution. It’s possible that the second thread will finish the operation before the first thread is resumed and that for some short time the queue storage would look like this:
        T:[Allocating|0] [Allocated|value] H:[Free|0] … [Free|0] [EndOfList|0]
    Again, no problem here. Dequeue will take care of this situation.

Another interesting situation occurs when the head is pointing to the last element in the block, the one with the EndOfList tag. In this case, a new block is allocated and the current element is changed so that it points to the new block.

if tag = tagFree then
    // we covered that already
else // tag = tagEndOfList
    allocate and initialize new block
    set head to new block's slot 1
    store (tagAllocated, value) into new block's slot 0
    store (tagBlockPointer, pointer to new block) into locked slot

After that, memory is laid out as follows:

T:[Allocated|value] [Allocated|value] … [Allocated|value] [BlockPointer|B2]
B2:[Allocated|value] H:[Free|0] … [Free|0] [EndOfList|0]

Compare and Swap

In the pseudo-code above I’ve used the CAS method as if it were something that everybody in this world knows and loves, but it probably deserves some explanation.

CAS, or Compare-and-Swap, is an atomic function that compares some memory location with a value and puts in a new value if the memory location was equal to that value. Otherwise, it does nothing. And the best thing is that all of this executes atomically. In other words – if two threads are attempting to CAS the same destination, only one of them will succeed.

In plain Delphi, CAS could be written as

function CAS32(const oldValue, newValue: cardinal; var destination): boolean;
begin
  EnterCriticalSection(cs); // cs is some global critical section
  Result := (PCardinal(@destination)^ = oldValue);
  if Result then
    PCardinal(@destination)^ := newValue;
  LeaveCriticalSection(cs);
end;

However, that is quite slow, so in reality we go down to the hardware and use the lock cmpxchg instruction that was designed exactly for this purpose.

function CAS32(const oldValue, newValue: cardinal; var destination): boolean;
asm
  lock cmpxchg dword ptr [destination], newValue
  setz al
end;

The Win32 function InterlockedCompareExchange implements the same behaviour, except that it is slower than the assembler version.
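
For completeness, a CAS built on that API could look like this (a sketch; InterlockedCompareExchange lives in the Windows unit and returns the value it found at the destination, so the swap succeeded exactly when that value equals oldValue):

function CAS32API(const oldValue, newValue: cardinal; var destination): boolean;
begin
  // the swap happened only if the previous value was equal to oldValue
  Result := cardinal(InterlockedCompareExchange(
    PInteger(@destination)^, integer(newValue), integer(oldValue))) = oldValue;
end;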

The Tag field of the TOmniTaggedValue record uses only one byte of storage. The CAS32 function requires four bytes to be compared and swapped. [Even more, those 4 bytes must be 4-aligned. That’s why SizeOf(TOmniTaggedValue) is 16 – so that each Tag falls on a memory location whose address is evenly divisible by 4.]

Therefore, TOmniTaggedValue record implements the CASTag function which does some bit-fiddling to work around the problem.

function TOmniTaggedValue.CASTag(oldTag, newTag: TOmniQueueTag): boolean;
var
  newValue: DWORD;
  oldValue: DWORD;
begin
  oldValue := PDWORD(@Tag)^ AND $FFFFFF00 OR DWORD(ORD(oldTag));
  newValue := oldValue AND $FFFFFF00 OR DWORD(Ord(newTag));
  Result := CAS32(oldValue, newValue, Tag);
end; { TOmniTaggedValue.CASTag }

First, we need an “old” 4-byte value. It is constructed by taking the oldTag parameter and OR-ing it with bytes 2, 3, and 4 of the record.

Then we need a “new” 4-byte value, which is constructed in a similar way.

Only then can we call the CAS32 function to compare-and-swap “old” value with the “new” one.

Dequeue

Dequeue is not much different from the Enqueue. First it locks the tail element by swapping it from tagAllocated to tagRemoving (a normal element) or from tagBlockPointer to tagDestroying (a pointer to the next block). If the tag is tagFree, then the queue is empty and Dequeue can return. In all other cases, it will loop and retry.

If tagAllocated was found, Dequeue can remove the current element in two easy steps. First it increments the tail pointer and with that unlocks the queue tail. Only then does it fetch the value from the slot. The tag is not modified at all.

In pseudocode:

  repeat
      fetch tag from current tail
      if tag = tagFree then
          return Empty
      if tag = tagAllocated and CAS(tag, tagRemoving) then
          break
      if tag = tagBlockPointer and CAS(tag, tagDestroying) then
          break
      yield
  forever
  if tag = tagAllocated then
      increment tail
      get value
  else
      // ignore this for a moment

Let’s look at a simple example. Assume that there are two elements in the queue originally and that Dequeue was just called.

T:[Allocated|value1] [Allocated|value2] H:[Free|0] … [EndOfList|0]

Dequeue first swaps the tail tag.

T:[Removing|value1] [Allocated|value2] H:[Free|0] … [EndOfList|0]

Then it increments the tail pointer.

[Removing|value1] T:[Allocated|value2] H:[Free|0] … [EndOfList|0]

At this moment, another thread may drop in and start dequeueing.

[Removing|value1] T:[Removing|value2] H:[Free|0] … [EndOfList|0]

It is entirely possible that the second thread will finish before the first one. The queue is empty now although the first thread has not yet completed its dequeue.

[Removing|value1] [Removing|value2] H:T:[Free|0] … [EndOfList|0]

First thread then continues execution, fetches the value from the slot and exits.

End-of-block pointer handling is only slightly more complicated.

  if tag = tagAllocated then 
      // we covered that already
  else
      // we know that the first slot in new block is allocated 
      set tail to new block's slot 1
      get value

Assume the following situation:

[Removing|value] [Removing|value] … [Removing|value] T:[BlockPointer|B2]
B2:[Allocated|value] H:[Free|0] … [Free|0] [EndOfList|0]

Dequeue first swaps tagBlockPointer with tagDestroying.

[Removing|value] [Removing|value] … [Removing|value] T:[Destroying|B2]
B2:[Allocated|value] H:[Free|0] … [Free|0] [EndOfList|0]

Then it follows the pointer to the next block. It knows that the first slot in this block will be allocated (because that’s how Enqueue is implemented) and moves the tail pointer directly to the second slot. By doing this, the tail pointer is released. This is entirely safe to do, as no other thread could have locked the tail slot in the meantime because it contains the tagDestroying tag.

[Removing|value] [Removing|value] … [Removing|value] [Destroying|B2]
B2:[Allocated|value] H:T:[Free|0] … [Free|0] [EndOfList|0]

Last, the Dequeue fetches the value and exits.

That’s all – the tail pointer was safely moved to the new block and the element was fetched (and marked as such).

But … is that really it? A careful reader may have noticed that something was not yet done. The first block is still allocated although no thread is referencing it anymore. Somebody has to release it – but who?

ABA Strikes

The first idea is to just release the block at this point. After all, no other dequeuers are doing anything with this block, as all tags are known to be marked as tagRemoving or tagDestroying. So we can safely release the memory, no?

Actually, we can’t. And the reason for that is the ABA problem – and a tough one. It took me many days to find the reason behind the constant crashes I was experiencing when testing that initial approach.

Assume the following situation:

B1:[Removing|value] T:[Allocated|value] … [BlockPointer|B2]
B2:[Allocated|value] H:[Free|0] … [EndOfList|0]

Thread 1 starts executing the Dequeue method. It reads the tag from the tail pointer and is suspended before it can CAS the tagRemoving tag into the tail slot.

      fetch tag from current tail
      if tag = tagFree then // <- here we stop
          return Empty
      if tag = tagAllocated and CAS(tag, tagRemoving) then
          break

Now the fun begins. We have a suspended thread with a remembered tail pointer, pointing to the second slot of block B1. I’ll mark this pointer with S: (for Suspended).

B1:[Removing|value] S:T:[Allocated|value] … [BlockPointer|B2]
B2:[Allocated|value] H:[Free|0] … [EndOfList|0]

Another thread takes over and initiates the Dequeue. As the suspended thread was not yet able to change the tag, the second thread succeeds in dequeueing from the second slot and then from the third one and so on, up to the end of the block.

B1:[Removing|value] S:[Removing|value] … T:[BlockPointer|B2]
B2:[Allocated|value] H:[Free|0] … [EndOfList|0]

During the next dequeue, the second thread destroys block B1.

B1:[Removing|value] S:[Removing|value] … [Destroying|B2]
B2:[Removing|value] T:H:[Free|0] … [EndOfList|0]

Then a third thread writes into all remaining elements of block B2.

B1:[Removing|value] S:[Removing|value] … [Destroying|B2]
B2:[Removing|value] T:[Allocated|value] … H:[EndOfList|0]

During the next write, a memory block is allocated. It may happen (with a high probability, because the FastMM memory manager tries to reuse recently released memory blocks) that this memory block will be located at address B1. The block is emptied during the allocation, but the suspended thread’s copy of the tail pointer still points into it.

B1:[Allocated|value] H:S:[Free|0] … [EndOfList|0]
B2:[Removing|value] T:[Allocated|value] … [BlockPointer|B1]

Then another value gets enqueued.

B1:[Allocated|value] S:[Allocated|value] H: … [EndOfList|0]
B2:[Removing|value] T:[Allocated|value] … [BlockPointer|B1]

At that point, the original thread is resumed. It continues the execution with

  if tag = tagAllocated and CAS(tag, tagRemoving) then 
      break 
  if tag = tagAllocated then
      increment tail
      get value

Tag is still tagAllocated (well, it is again set to tagAllocated, but our poor thread doesn’t know that) so it swaps it with tagRemoving. That is weird, as we now have a tagRemoving slot in the middle of tagAllocated ones, but maybe that’s something we could live with. The biggest problem lies in the next line, which sets the tail pointer to the next slot. And by that I don’t mean the slot relative to the current tail but relative to the stored tail pointer! In other words, the third slot of block B1.

B1:[Allocated|value] [Removing|value] T:H: … [EndOfList|0]
B2:[Removing|value] [Allocated|value] … [BlockPointer|B1]

And now we have a total mess of a memory layout. From this point onwards, nothing works.

Even worse, that is not the only problem. For instance, the B1 block may have been reallocated by another thread, for other purposes, and the CAS may still have succeeded if the correct bytes were found at that location. Fat chance, I know, but as Pratchett likes to say, million-to-one chances crop up nine times out of ten.

The same scenario can happen during the Enqueue.

    fetch tag from current head
    //thread pauses here
    if tag = tagFree and CAS(tag, tagAllocating) then

The problem is more likely to occur at this point because CAS will expect the source to be $00000000. It is entirely possible that another thread allocates this block and clears it for further use, and just a moment later our suspended thread kicks in and a) corrupts this block by CAS-ing tagAllocating in and b) points its head pointer into that block. Utter disaster.

Garbage Collector

There is just one thing that can be done – never release a memory block while any thread is using it. In a way, we need a garbage collector.

It is hard to answer the question: “Is any thread using this memory block?” [Not impossible, I must add. Just very hard. And any solution would be totally impractical.] We have to be satisfied with less. Another way to look at the problem is: “When is it safe to release a memory block?” That, at least, we can answer: “When Enqueue and Dequeue are not executing. At all. In any thread.” That is also the solution I’ve implemented.

We must allow many Enqueue/Dequeue paths to execute at the same time, but we want only one thread to be releasing memory, and during this time no Enqueue/Dequeue must execute. Does this remind you of anything? Of course – a Multiple-Readers-Exclusive-Writer (MREW) lock!

Enqueue/Dequeue acquire a read lock during execution. The garbage collector acquires the write lock, releases the memory and then releases the lock. Simple.

The garbage collector is very simple and is implemented in place (as opposed to being implemented in a separate thread). The thread that found the tagBlockPointer is responsible for freeing the memory block.

Enqueue is simply wrapped in the read lock.

acquire read access to GC
// do the Enqueue
release read access to GC

Dequeue is slightly more complicated. If a tagBlockPointer is found, the code releases the read lock, acquires the write lock and releases the memory block. In other words, Dequeue switches from dequeueing mode (by releasing the read lock) into garbage collecting mode (by acquiring the write lock).

acquire read access to GC
// do the Dequeue
release read access to GC
if block has to be released
    acquire write access to GC
    release the block
    release write access to GC
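
A Delphi rendering of that switch might look like the sketch below. InternalDequeue and ReleaseBlock are hypothetical stand-ins for the microlocked dequeue and the block disposal described above; only EnterReader/LeaveReader/EnterWriter/LeaveWriter correspond to the actual methods shown later.

function TOmniBaseQueue.Dequeue(var value: TOmniValue): boolean;
var
  blockToRelease: pointer; // set when a tagBlockPointer slot was consumed
begin
  EnterReader;
  try
    // hypothetical: the microlocked, tag-swapping dequeue described above
    Result := InternalDequeue(value, blockToRelease);
  finally
    LeaveReader;
  end;
  if assigned(blockToRelease) then begin
    // switch into garbage collecting mode; waits for all readers to leave
    EnterWriter;
    try
      ReleaseBlock(blockToRelease); // hypothetical block disposal
    finally
      LeaveWriter;
    end;
  end;
end;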

The MREW implementation is very simple and could theoretically lead to starvation. However, practical tests confirmed that this does not happen.

A single number is used for locking. If it is greater than zero, there are readers active. Each reader increments the number on enter and decrements it on exit.

The writer waits until this number is 0 (no readers) and then decrements it to –1. When exiting, it just sets the number back to 0.

Of course, all those increments and decrements are done atomically.

procedure TOmniBaseQueue.EnterReader;
var
  value: integer;
begin
  repeat
    value := obcRemoveCount.Value;
    if value >= 0 then
      if obcRemoveCount.CAS(value, value + 1) then
        break
      else
        DSiYield; // let the GC do its work
  until false;
end; { TOmniBaseQueue.EnterReader }

procedure TOmniBaseQueue.EnterWriter;
begin
  while not ((obcRemoveCount.Value = 0) and (obcRemoveCount.CAS(0, -1))) do
    asm pause; end; // retry after slight pause
end; { TOmniBaseQueue.EnterWriter }

procedure TOmniBaseQueue.LeaveReader;
begin
  obcRemoveCount.Decrement;
end; { TOmniBaseQueue.LeaveReader }

procedure TOmniBaseQueue.LeaveWriter;
begin
  obcRemoveCount.Value := 0;
end; { TOmniBaseQueue.LeaveWriter }

This implementation of the queue has passed a 24-hour stress test in which millions of messages were enqueued and dequeued every second and in which 1 to 8 threads functioned as writers and another 1 to 8 as readers. Every four million messages the threads were stopped and the contents of the queues were checked for validity. No problems were found.

What About Performance?

To test the workings of the queue and to measure its performance I wrote a simple test, located in folder 32_Queue in the Tests branch of the OTL tree.

The test framework sets up the following data path:

source queue –> N threads –> channel queue –> M threads –> destination queue

The source queue is filled with numbers from 1 to 1,000,000. Then 1 to 8 threads are set up to read from the source queue and write into the channel queue, and another 1 to 8 threads are set up to read from the channel queue and write to the destination queue. The application then starts the clock and starts all threads. When all numbers have been moved to the destination queue, the clock is stopped and the contents of the destination queue are verified. Thread creation time is not included in the measured time.
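
Each forwarding thread runs a trivial loop like the sketch below (illustrative only – the actual test uses OTL tasks and its own shutdown logic; CSentinel and ForwardLoop are made-up names):

const
  CSentinel = -1; // hypothetical value that tells a forwarding thread to stop

procedure ForwardLoop(input, output: TOmniQueue);
var
  value: TOmniValue;
begin
  repeat
    if input.TryDequeue(value) then begin
      if value.AsInteger = CSentinel then
        break;          // stop request
      output.Enqueue(value);
    end
    else
      Sleep(0);         // queue momentarily empty; yield the time slice
  until false;
end;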

All in all, this results in 2 million reads and 2 million writes distributed over three queues. The tests are very brutal, as all threads are just hammering on the queues, doing nothing else. The table below contains the average, minimum and maximum time of 5 runs on a 2.67 GHz computer with two 4-core CPUs.

               average [min-max] (ms)   millions of queue operations per second
N = 1, M = 1    707 [566-834]           5.66
N = 2, M = 2    996 [950-1031]          4.02
N = 3, M = 3   1065 [1055-1074]         3.76
N = 4, M = 4   1313 [1247-1358]         3.04
N = 8, M = 8   1520 [1482-1574]         2.63
N = 1, M = 7   3880 [3559-4152]         1.03
N = 7, M = 1   1314 [1299-1358]         3.04

The queue performs well even when there are twice as many threads as cores in the computer. The only anomalous data is in the N = 1, M = 7 row, where there were only eight threads but the performance was quite low. It looks like the single writer was not able to put enough data into the channel queue for seven readers to read, and that caused excessive looping in the MREW lock. But I have no proof of that.

What Is It Good For?

Absolutely nothing… everything! Of course, this queue was designed as backing storage for the blocking collection, but it is also useful in any multi-threaded (or even single-threaded) scenario. Just don’t use it in situations where you can’t control the growth of the data.

For example, all internal messaging queues in the OTL will still use bounded (fixed-size) queues. That way, a message recipient that blocks at some point only causes the queue to fill up, which then triggers an exception. If a dynamic queue were used for messaging, it could fill up all the virtual memory on the computer and only then crash the program (or some other thread would run out of memory and you’d have no idea where the cause of the problem lay).

Use it sparingly, use it wisely.

Labels:

Monday, February 01, 2010

Parallel.ForEach.Aggregate

Totally out-of-band posting (I’m still working on the second part of the “blocking collection” trilogy), posted just because I’m happy that the code works.

This code.

procedure TfrmParallelAggregateDemo.btnCountParallelClick(Sender: TObject);
var
  numPrimes: integer;
begin
  numPrimes :=
    Parallel.ForEach(1, inpMaxPrime.Value)
      .Aggregate(
        procedure (var aggregate: int64; value: int64)
        begin
          aggregate := aggregate + value;
        end)
      .Execute(
        function (const value: TOmniValue): TOmniValue
        begin
          if IsPrime(value) then
            Result := 1
          else
            Result := 0;
        end);
  Log('%d primes from 1 to %d', [numPrimes, inpMaxPrime.Value]);
end;
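
(IsPrime is a helper from the demo. A minimal version might look like this – not necessarily the demo’s exact implementation:)

function IsPrime(value: integer): boolean;
var
  i: integer;
begin
  Result := false;
  if value < 2 then
    Exit;
  for i := 2 to Trunc(Sqrt(value)) do
    if (value mod i) = 0 then
      Exit; // found a divisor; not a prime
  Result := true;
end;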

Anonymous methods are simply great. They make the code unreadable, but they are oh so useful!

Everything is in the trunk. Check it out and enjoy.

Labels: