r1d1

Reputation: 469

Is it normal that there is no data-access synchronization when training the neural network with several threads?

I looked at the classic word2vec sources, and, if I understood correctly, there is no data-access synchronization when the neural network is trained by several threads (no synchronization of the matrices syn0, syn1, syn1neg). Is this normal practice for training, or is it a bug?

Upvotes: 0

Views: 29

Answers (1)

gojomo

Reputation: 54173

Perhaps counterintuitively, it's normal. A pioneering work on this was the 'Hogwild' paper in 2011:

https://papers.nips.cc/paper/4390-hogwild-a-lock-free-approach-to-parallelizing-stochastic-gradient-descent

Its abstract:

Stochastic Gradient Descent (SGD) is a popular algorithm that can achieve state-of-the-art performance on a variety of machine learning tasks. Several researchers have recently proposed schemes to parallelize SGD, but all require performance-destroying memory locking and synchronization. This work aims to show using novel theoretical analysis, algorithms, and implementation that SGD can be implemented without any locking. We present an update scheme called Hogwild which allows processors access to shared memory with the possibility of overwriting each other's work. We show that when the associated optimization problem is sparse, meaning most gradient updates only modify small parts of the decision variable, then Hogwild achieves a nearly optimal rate of convergence. We demonstrate experimentally that Hogwild outperforms alternative schemes that use locking by an order of magnitude.

It turns out SGD is slowed more by synchronized access than by threads occasionally overwriting each other's work... and some results even hint that, in practice, the extra "interference" can be a net benefit for optimization progress.
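
For illustration only, here is a minimal C/pthreads sketch of that lock-free pattern. It uses a toy least-squares objective rather than word2vec's actual skip-gram loss, and the names (train_thread, weights, target) and constants are invented for the example, not taken from word2vec.c. Each thread applies SGD steps to the same shared weight array with no mutex, analogous to how word2vec's training threads update syn0/syn1/syn1neg:

    /* Hogwild-style sketch: several threads run SGD on shared weights
     * without any locking. Toy objective: 0.5 * sum_i (w_i - t_i)^2.   */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define DIM     8
    #define THREADS 4
    #define STEPS   100000

    static float weights[DIM];                 /* shared parameters, unprotected */
    static const float target[DIM] = {1, 2, 3, 4, 5, 6, 7, 8};
    static const float lr = 0.01f;

    static void *train_thread(void *arg) {
        unsigned int seed = (unsigned int)(size_t)arg;
        for (long step = 0; step < STEPS; step++) {
            /* Sparse update: each step touches one random coordinate,
             * the regime where the Hogwild analysis applies.          */
            int i = rand_r(&seed) % DIM;
            float grad = weights[i] - target[i];  /* d/dw_i of the toy loss */
            weights[i] -= lr * grad;              /* unsynchronized write   */
        }
        return NULL;
    }

    int main(void) {
        pthread_t tids[THREADS];
        for (size_t t = 0; t < THREADS; t++)
            pthread_create(&tids[t], NULL, train_thread, (void *)(t + 1));
        for (size_t t = 0; t < THREADS; t++)
            pthread_join(tids[t], NULL);

        for (int i = 0; i < DIM; i++)
            printf("w[%d] = %.4f (target %.1f)\n", i, weights[i], target[i]);
        return 0;
    }

Built with something like gcc -O2 -pthread, the weights still converge to their targets despite occasional lost updates, because each update only touches a small part of the parameter vector, which is exactly the sparsity condition the Hogwild paper relies on.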

Upvotes: 1
