How to parallelise a for..downto loop in Delphi that runs over a list, deleting items as you go?


Take for example the following code:

for i := (myStringList.Count - 1) DownTo 0 do begin
  dataList := SplitString(myStringList[i], #9);
  x := StrToFloat(dataList[0]);
  y := StrToFloat(dataList[1]);
  z := StrToFloat(dataList[2]);
  //Do something with these variables
  myOutputRecordArray[i] := {SomeFunctionOf}(x,y,z);
  //Free Used List Item 
  myStringList.Delete(i);
end;
//Free Memory
myStringList.Free;

How would you parallelise this using, for example, the OmniThreadLibrary? Is it possible? Or does it need to be restructured?

I'm calling myStringList.Delete(i) at each iteration because the string list is large, and freeing each item as soon as it has been used is important to keep memory usage down.


There are 3 answers

Graymatter

You can cheat: setting the string value to an empty string will free most of the memory, and it is thread-safe. At the end of the processing you can then clear the list.

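// Assumed units: OtlParallel (Parallel.ForEach), System.StrUtils (SplitString),
// System.Types (TStringDynArray) and System.SysUtils (StrToFloat).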
Parallel.ForEach(0, myStringList.Count - 1).Execute(
  procedure (const index: integer)
  var
    dataList: TStringDynArray;
    x, y, z: Single;
  begin
    dataList := SplitString(myStringList[index], #9);
    x := StrToFloat(dataList[0]);
    y := StrToFloat(dataList[1]);
    z := StrToFloat(dataList[2]);
    //Do something with these variables
    myOutputRecordArray[index] := {SomeFunctionOf}(x,y,z);
    //Free Used List Item
    myStringList[index] := '';
  end);
myStringList.Clear;

This code is safe because we never write to a shared object from multiple threads. You need to make sure that every variable that would normally be local is declared inside the threaded block.

David Heffernan

I'm not going to attempt to show how to do what you originally asked, because it is a bad idea that will not lead to improved performance, even assuming that you deal with the many and various data races in your proposed parallel implementation.

The bottleneck here is the disk I/O. Reading the entire file into memory, and then processing the contents is the design choice that is leading to your memory problems. The correct way to solve this problem is to use a pipeline.

Step 1 of the pipeline takes as input the file on disk. The code here reads chunks of the file and then breaks those chunks into lines. These lines are the output of this step. The entire file is never in memory at one time. You'll have to tune the size of the chunks that you read.

Step 2 takes as input the strings that step 1 produced. Step 2 consumes those strings and produces vectors, which are added to your vector list.

Step 2 will be faster than step 1 because I/O is so expensive, so there's nothing to be gained by trying to optimise either step with parallel algorithms. Even on a uniprocessor machine this pipelined implementation could be faster than a non-pipelined one.
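
A rough sketch of such a pipeline, assuming OmniThreadLibrary's Parallel.Pipeline with two IOmniBlockingCollection stages (units OtlParallel, OtlCollections, OtlCommon); BuildResults, the TList<Double> result type and the x + y + z expression are placeholders rather than the asker's actual function:

uses
  System.Classes, System.SysUtils, System.StrUtils, System.Types,
  System.Generics.Collections,
  OtlCommon, OtlCollections, OtlParallel;

procedure BuildResults(const fileName: string; results: TList<Double>);
var
  pipeline: IOmniPipeline;
  value: TOmniValue;
begin
  pipeline := Parallel.Pipeline
    // Stage 1: stream the file line by line; the whole file is never in memory.
    .Stage(
      procedure (const input, output: IOmniBlockingCollection)
      var
        reader: TStreamReader;
      begin
        reader := TStreamReader.Create(fileName);
        try
          while not reader.EndOfStream do
            output.Add(reader.ReadLine);
        finally
          reader.Free;
        end;
      end)
    // Stage 2: parse each line and compute a result from it.
    .Stage(
      procedure (const input, output: IOmniBlockingCollection)
      var
        line: TOmniValue;
        parts: TStringDynArray;
        x, y, z: Double;
      begin
        for line in input do begin
          parts := SplitString(line.AsString, #9);
          x := StrToFloat(parts[0]);
          y := StrToFloat(parts[1]);
          z := StrToFloat(parts[2]);
          output.Add(x + y + z); // placeholder for {SomeFunctionOf}(x,y,z)
        end;
      end)
    .Run;

  // Collect results on the calling thread; the loop ends when stage 2 completes.
  for value in pipeline.Output do
    results.Add(value.AsDouble);
end;

Only the lines currently queued between the stages are held in memory at any time, and the parsing in stage 2 overlaps with the disk reads in stage 1.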

Mason Wheeler

Simple answer: You wouldn't.

More involved answer: the last thing you want to do in a parallelised operation is modify shared state, such as this Delete call. Individual tasks are not guaranteed to finish in order; in fact it is highly likely that at least one will not, and that probability approaches 100% very quickly as you add more tasks to the total workload. Trying to do something like that is playing with fire.

You can either free the items as you go and run serially, or run in parallel, finish faster, and free the whole list at the end. I don't think there is any way to have it both ways.
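
A minimal sketch of the second option, reusing the Parallel.ForEach pattern from Graymatter's answer; ProcessLine is a hypothetical stand-in for whatever work is done per line:

Parallel.ForEach(0, myStringList.Count - 1).Execute(
  procedure (const index: integer)
  begin
    // ProcessLine is a placeholder for the per-line parsing and computation.
    myOutputRecordArray[index] := ProcessLine(myStringList[index]);
  end);
// Execute blocks until every task has finished, so the list can then be freed in one go.
myStringList.Free;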