Reputation: 52987
On my runtime, I have the following Rust code:
pub fn reduce(heap: &Heap, prog: &Program, tids: &[usize], root: u64, debug: bool) -> Ptr {
    // Halting flag
    let stop = &AtomicBool::new(false);
    let barr = &Barrier::new(tids.len());
    let locs = &tids.iter().map(|_| AtomicU64::new(u64::MAX)).collect::<Vec<AtomicU64>>();
    // Spawn a thread for each worker
    std::thread::scope(|s| {
        for tid in tids {
            s.spawn(move || {
                reducer(heap, prog, tids, stop, barr, locs, root, *tid, debug);
            });
        }
    });
    // Return whnf term ptr
    return load_ptr(heap, root);
}
This spawns one thread per worker in order to perform a parallel computation. The problem is that reduce is called thousands of times, and the overhead of spawning threads is considerable. When I implemented the same thing in C, I just kept the threads alive and sent them a message to wake them up. With the std::thread::scope idiom in Rust, I'm not sure how to do that. Is it possible to keep the threads alive after the first call to reduce, by modifying just that one function? That is, without changing anything else in my code?
Upvotes: 0
Views: 941
Reputation: 584
Threads spawned with the std::thread::scope API cannot outlive the calling function. For long-running threads you'll need to spawn them with std::thread::spawn.
Once you've made that change, rustc will be very upset with you about lifetime errors, because you'd be sending non-'static references into the spawned threads. Fixing those errors will require some restructuring. That path is long and full of learning.
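If you do want the C-style approach of long-lived workers woken by messages, the usual shape in safe Rust is a small channel-based pool built on std::thread::spawn, with shared state behind Arc so it satisfies the 'static bound. A minimal sketch, where Pool, Job, and the u64 payload are illustrative placeholders rather than your runtime's types:

use std::sync::{mpsc, Arc};
use std::thread;

// Placeholder job type: the workers just run a shared closure.
type Job = Arc<dyn Fn(usize) + Send + Sync + 'static>;

struct Pool {
    senders: Vec<mpsc::Sender<Job>>,
    done_rx: mpsc::Receiver<usize>,
}

impl Pool {
    fn new(n: usize) -> Pool {
        let (done_tx, done_rx) = mpsc::channel();
        let senders = (0..n)
            .map(|tid| {
                let (tx, rx) = mpsc::channel::<Job>();
                let done_tx = done_tx.clone();
                // Each worker lives for the whole program: it blocks on its
                // channel until a job arrives, runs it, then reports back.
                thread::spawn(move || {
                    for job in rx {
                        job(tid);
                        let _ = done_tx.send(tid);
                    }
                });
                tx
            })
            .collect();
        Pool { senders, done_rx }
    }

    // Hand the same job to every worker and block until all have finished.
    fn run(&self, job: Job) {
        for tx in &self.senders {
            tx.send(job.clone()).unwrap();
        }
        for _ in &self.senders {
            self.done_rx.recv().unwrap();
        }
    }
}

fn main() {
    let pool = Pool::new(4);
    // Shared state must be 'static here, e.g. wrapped in an Arc.
    let data = Arc::new(vec![10u64, 20, 30, 40]);
    for _ in 0..3 {
        let data = data.clone();
        pool.run(Arc::new(move |tid: usize| {
            println!("worker {} sees {}", tid, data[tid]);
        }));
    }
}

Each call then becomes pool.run(...) instead of a fresh thread::scope, at the cost of making the shared data 'static (for example by wrapping it in an Arc).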
If you want something that works really well and is simple, consider using the excellent rayon crate instead. Rayon maintains its own global thread pool, so you don't have to.
Using rayon will look something like this:
use rayon::prelude::*;

tids.par_iter().for_each(|tid| {
    reducer(heap, prog, tids, stop, barr, locs, root, *tid, debug);
});
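For context, here's a sketch of how the whole reduce body might look with rayon, assuming Heap and Program are Sync (they should already be, since the scoped version shares them across threads). One thing to verify: if reducer blocks on the Barrier, all tids.len() closures must actually run concurrently, which a shared thread pool does not guarantee the way dedicated per-call threads do.

use rayon::prelude::*;
use std::sync::Barrier;
use std::sync::atomic::{AtomicBool, AtomicU64};

pub fn reduce(heap: &Heap, prog: &Program, tids: &[usize], root: u64, debug: bool) -> Ptr {
    let stop = &AtomicBool::new(false);
    let barr = &Barrier::new(tids.len());
    let locs = &tids.iter().map(|_| AtomicU64::new(u64::MAX)).collect::<Vec<AtomicU64>>();
    // No threads are spawned here: the closures run on rayon's global pool,
    // and par_iter keeps the borrows of heap, prog, etc. alive only for this call.
    tids.par_iter().for_each(|tid| {
        reducer(heap, prog, tids, stop, barr, locs, root, *tid, debug);
    });
    load_ptr(heap, root)
}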
Upvotes: 2