AngryHacker

Reputation: 61606

What does setting Thread.Priority = Lowest really mean?

In an effort to speed up the startup of my resource-hungry app, I've moved various startup tasks to background threads and marked those threads with `Thread.Priority = Lowest`.

However, those low-priority threads still execute pretty much in parallel with the application (as it loads its UI), as evidenced by the timeline in ANTS Profiler. My understanding was that Lowest meant the CPU would handle all higher-priority threads first, and only then get to the lower-priority threads.

Is my understanding flawed?

Upvotes: 3

Views: 6021

Answers (4)

Erik Aronesty

Reputation: 12877

Most likely the background threads are blocking the main task because they are acquiring some non-CPU resource. CPU time is scheduled between threads by priority; contention for I/O and other shared resources is not. For example, if you're loading records from a DB, you could wind up blocking a lot of processing in the main task.

You can get the best of both worlds with the following:

  • The main task sets a signal letting the background threads know it's done
  • The background threads run in "nice" mode until that signal is received
  • Nice mode means deliberately closing transactions and I/O handles, sleeping, then waking and resuming the long-running work every ~0.1 seconds... explicitly and frequently yielding to the main thread. Deliberately inefficient.
  • When nice mode is done, the background threads resume at full speed (see the sketch below)
  • The trick is to tune nice mode so that it has no visual impact, while still allowing the background tasks to make steady progress
  • You can go even further and auto-tune the background yield times based on the running speed of the main task's operations
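
Here is a minimal C# sketch of that scheme, assuming a `ManualResetEventSlim` as the signal; `LoadUi`, `GetStartupTasks`, and the 100 ms sleep are placeholders you would replace and tune for your own app:

    using System;
    using System.Threading;

    class StartupCoordinator
    {
        // The "I'm done" signal the main task sets once the UI is up.
        static readonly ManualResetEventSlim UiReady = new ManualResetEventSlim(false);

        static void Main()
        {
            var worker = new Thread(BackgroundWork)
            {
                IsBackground = true,
                Priority = ThreadPriority.Lowest
            };
            worker.Start();

            LoadUi();       // placeholder for whatever builds the main window
            UiReady.Set();  // nice mode is over; background work may run flat out

            worker.Join();  // sketch only: keep the process alive until the work finishes
        }

        static void BackgroundWork()
        {
            foreach (var step in GetStartupTasks())   // placeholder task list
            {
                step();   // do one small unit of work, then release handles

                // "Nice mode": while the UI is still loading, yield deliberately
                // so we don't compete for the disk, the DB, or the CPU.
                if (!UiReady.IsSet)
                    Thread.Sleep(100);   // tune so there is no visual impact
            }
        }

        static void LoadUi() { /* build UI here */ }
        static Action[] GetStartupTasks() => new Action[0];
    }

The sleep is the knob: shorter means the background work finishes sooner, longer means less interference with the loading UI.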

Upvotes: 0

jasonh

Reputation: 30293

Is it possible to re-engineer your app so that the threads you want to wait for the UI aren't created or started at all until after the UI is loaded? That would do exactly what you wish, since the work can't run before the UI is up, whereas lowering the priority only makes the threads execute less often; they still execute.
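
If the app happens to be WinForms (the question doesn't say), one way to express this is to start the worker from the form's `Shown` event, which fires only after the window has been displayed; `MainForm` and `RunStartupTasks` are hypothetical names for illustration:

    using System;
    using System.Threading;
    using System.Windows.Forms;

    public class MainForm : Form
    {
        public MainForm()
        {
            // The constructor only builds the UI. The resource-hungry startup
            // work is not even started until Shown fires, i.e. after the
            // window has been drawn once.
            Shown += delegate
            {
                var worker = new Thread(RunStartupTasks) { IsBackground = true };
                worker.Start();
            };
        }

        // Placeholder for the startup work that was moved off the UI thread.
        void RunStartupTasks() { /* warm caches, load reference data, ... */ }
    }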

Upvotes: 2

snarf

Reputation: 2852

The threads may be scheduled with the lowest priority, but they don't wait at the back of the line. They will probably still get enough CPU time slices to gobble up certain resources that are the real bottlenecks, like hard drive access. It really all depends on exactly what you are doing.

Is the initialization computation-intensive, or is it network- and hard-drive-intensive? A multithreaded approach is most effective when different tasks use different resources, or when it lets computationally intensive operations run without blocking other operations.

A single-threaded approach could feasibly order the tasks so that the application appears to load faster, whereas the multithreaded approach may mean everyone gets their hands in at the same time, possibly even getting in each other's way.
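
As a rough illustration of that ordering idea (the step names are made up), a single low-priority worker can run the startup steps strictly one after another, so they never compete with each other for the disk:

    using System;
    using System.Threading;

    static class OrderedStartup
    {
        // Hypothetical startup steps, ordered so the disk-heavy ones run one
        // at a time instead of all hitting the drive at once.
        static readonly Action[] Steps =
        {
            () => Console.WriteLine("read settings file"),
            () => Console.WriteLine("open database connection"),
            () => Console.WriteLine("prefetch images"),
        };

        public static void Run()
        {
            // A single worker runs the steps strictly in order; the UI thread
            // stays free, and the steps never fight each other for I/O.
            var worker = new Thread(() =>
            {
                foreach (var step in Steps)
                    step();
            })
            {
                IsBackground = true,
                Priority = ThreadPriority.Lowest
            };
            worker.Start();
        }
    }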

Upvotes: 5

Guffa

Reputation: 700212

Lowering the priority doesn't mean that the thread will always be the last one picked for a time slice. If a lower-priority thread hasn't had a time slice for a while, it becomes more likely to get one. That way lower-priority threads run slower, but they don't stop completely.

Also, if the main thread is waiting for something, for example for the disk drive to return data, the other threads can run in that gap. If the main thread does a lot of disk I/O, there will be a lot of gaps for other threads to run in.

If the CPU has more than one core, the load is also spread more evenly between threads: no matter how high a thread's priority is, it can still only run on one core at a time.
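
A small console experiment makes the point; it is illustrative only and the exact counts will vary by machine, but the `Lowest`-priority thread still racks up a large count because priority only changes how often it is scheduled:

    using System;
    using System.Threading;

    class PriorityDemo
    {
        static long normalCount, lowestCount;

        static void Main()
        {
            var normal = new Thread(() => Spin(ref normalCount));
            var lowest = new Thread(() => Spin(ref lowestCount))
            {
                Priority = ThreadPriority.Lowest
            };

            normal.Start();
            lowest.Start();
            normal.Join();
            lowest.Join();

            // Even the Lowest-priority thread makes steady progress: priority
            // only changes how often it is scheduled, it never parks the thread.
            Console.WriteLine($"Normal: {normalCount}");
            Console.WriteLine($"Lowest: {lowestCount}");
        }

        static void Spin(ref long counter)
        {
            var end = DateTime.UtcNow.AddSeconds(2);   // spin for ~2 seconds
            while (DateTime.UtcNow < end)
                Interlocked.Increment(ref counter);
        }
    }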

Upvotes: 3
