Parallel.For causes massive lag spikes after 2-3 minutes


Edit:
I have noticed these lag spikes only occur while debugging in Visual Studio. If I run the .exe outside of Visual Studio, the program doesn't use more than 3% of the CPU. Can anyone tell me why this is happening?


I have encountered a problem with parallel processing. I'm using Parallel.For to check a large number of proxies (by making web requests). This is my function:

private ConcurrentBag<string> TotalProxies = new ConcurrentBag<string>();
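// NB: WorkingProxies, workingp, checkedp and Stop are fields assumed to be declared elsewhere in the form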
private void CheckProxies()
{
    ParallelOptions pOptions = new ParallelOptions();
    pOptions.MaxDegreeOfParallelism = 100;
    int max = TotalProxies.Count;
    Invoke(new Action(() => { lbl_Status.Text = "Checking"; }));
    Parallel.For(0, max, pOptions, (index, loopstate) =>
    {
        string Proxy = TotalProxies.ElementAt(index);
        if (WebEngine.IsProxyWorking(Proxy))
        {
            WorkingProxies.Add(Proxy);
            workingp++;
            Invoke(new Action(() =>
            {
                lstv_Working.Items.Add(Proxy);
                lbl_Working.Text = workingp.ToString();
            }));
        }
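        // NB: workingp++ above and checkedp++ below are plain increments on shared
        // fields; parallel iterations can race here (Interlocked.Increment would be safer)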
        checkedp++;
        Invoke(new Action(() => { lbl_Checked.Text = checkedp.ToString(); }));

        if (Stop)
            loopstate.Stop();
    });
    Invoke(new Action(() => {
        lbl_Status.Text = "Idle";
    }));
}

My problem is the following:
The program works fine for the first 0-2000 requests, where the CPU usage is around 3-5%. Then, after 2-3 minutes, massive and frequent lag spikes start, causing the CPU usage to jump up to 100%. I have no idea why this is happening, since it worked fine until then. I hope someone can help me understand what causes this.
Here you can see my problem: [screenshot: lag spikes]


There are 2 answers

Me.Name (Best Answer)

As promised, an example with async/await, although after seeing your update I'm not sure it will make a difference. But since it won't fit inside a comment, I've posted it here ;)

private ConcurrentBag<string> TotalProxies = new ConcurrentBag<string>();
private async Task CheckProxies()
{
    lbl_Status.Text = "Checking"; // NB: invoking is omitted, assuming CheckProxies is called from the UI thread itself
    var tasks = TotalProxies.Select(CheckProxy);
    await Task.WhenAll(tasks);
    lbl_Status.Text = "Idle";
}

private async Task<bool> CheckProxy(string p)
{   
    bool working = await Task.Run(() => WebEngine.IsProxyWorking(p)); // would be better if IsProxyWorking itself used async methods and returned a Task, so Task.Run wouldn't be needed (a possible async variant is sketched below). Don't know if it's possible to alter that function?
    if(working)
    {
        WorkingProxies.Add(p);
        workingp++; //Interlocked.Increment is not necessary because after the await we're back in the main thread
        lstv_Working.Items.Add(p);  //are these items cleared on a new run? 
        lbl_Working.Text = workingp.ToString();
    }
    checkedp++;
    lbl_Checked.Text = checkedp.ToString(); 
    return working;
}
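
To illustrate the comment above: if WebEngine could be changed, a rough sketch of an async check might look like this (hypothetical; the test URL, the 10 second timeout and the "host:port" proxy format are all assumptions, and it requires System.Net and System.Net.Http):

// Hypothetical async variant of WebEngine.IsProxyWorking
private static async Task<bool> IsProxyWorkingAsync(string proxy)
{
    var handler = new HttpClientHandler
    {
        Proxy = new WebProxy("http://" + proxy), // WebProxy needs a scheme to parse "host:port"
        UseProxy = true
    };
    using (var client = new HttpClient(handler))
    {
        client.Timeout = TimeSpan.FromSeconds(10); // assumed timeout
        try
        {
            using (var response = await client.GetAsync("http://www.example.com/")) // assumed test URL
                return response.IsSuccessStatusCode;
        }
        catch (HttpRequestException) { return false; }  // could not connect through the proxy
        catch (TaskCanceledException) { return false; } // request timed out
    }
}

With that, CheckProxy could await IsProxyWorkingAsync(p) directly and drop the Task.Run.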

Note: since I couldn't test the actual code, I'm not sure about the efficiency. Your current code might perform better. But if the IsProxyWorking method could use actual async web calls (I believe that code was previously included in your post), I believe the processing could really improve.
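
Also, if starting a task for every proxy at once turns out to be too aggressive, a SemaphoreSlim can cap how many checks run concurrently. A minimal sketch (the limit of 20 is an arbitrary assumption; requires System.Threading):

private readonly SemaphoreSlim throttle = new SemaphoreSlim(20); // at most 20 checks in flight

private async Task<bool> CheckProxyThrottled(string p)
{
    await throttle.WaitAsync();
    try
    {
        return await CheckProxy(p); // CheckProxy from the code above
    }
    finally
    {
        throttle.Release();
    }
}

Then CheckProxies would use TotalProxies.Select(CheckProxyThrottled) instead of TotalProxies.Select(CheckProxy).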

Rodrigo Vedovato

I don't know if this is directly related to your issue, but setting MaxDegreeOfParallelism to 100 is not good. You're basically telling your application to perform 100 tasks at the same time! According to MSDN:

Generally, you do not need to modify this setting. However, you may choose to set it explicitly in advanced usage scenarios such as these:

  • When you know that a particular algorithm you're using won't scale beyond a certain number of cores. You can set the property to avoid wasting cycles on additional cores.

  • When you're running multiple algorithms concurrently and want to manually define how much of the system each algorithm can utilize. You can set a ParallelOptions.MaxDegreeOfParallelism value for each.

  • When the thread pool's heuristics is unable to determine the right number of threads to use and could end up injecting too many threads. For example, in long-running loop body iterations, the thread pool might not be able to tell the difference between reasonable progress or livelock or deadlock, and might not be able to reclaim threads that were added to improve performance. In this case, you can set the property to ensure that you don't use more than a reasonable number of threads.

I would try to remove this value and see how your application behaves!
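
For example, starting from the loop in your question, you could leave the option at its default (-1, which lets the scheduler decide) or cap it near the core count (using Environment.ProcessorCount is my suggestion, not code from your post):

ParallelOptions pOptions = new ParallelOptions();
// default MaxDegreeOfParallelism is -1: the scheduler picks the number of workers
// pOptions.MaxDegreeOfParallelism = Environment.ProcessorCount; // optional explicit cap

Parallel.For(0, max, pOptions, (index, loopstate) =>
{
    // ... same body as in your question ...
});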