Reputation: 5432
I have a DNN using TensorFlow and it's working just fine. My question is about the weight initialization; here is the part of the code where it happens:
import tensorflow as tf

def train(numberOfFeatures, numberOFclasses):
    # session definition
    sess = tf.InteractiveSession()

    # input placeholders
    with tf.name_scope('input'):
        x = tf.placeholder(tf.float32, [None, numberOfFeatures], name='Features_values')
        y_ = tf.placeholder(tf.float32, [None, numberOFclasses], name='predictions')

    # weight initialization
    def weight_variable(shape):
        return tf.Variable(tf.truncated_normal(shape, stddev=0.1))

    def bias_variable(shape):
        return tf.Variable(tf.constant(0.1, shape=shape))

    # define variable summaries
    def variable_summaries(var):
        with tf.name_scope('summaries'):
            mean = tf.reduce_mean(var)
            tf.summary.scalar('mean', mean)
            with tf.name_scope('stddev'):
                stddev = tf.sqrt(tf.reduce_mean(tf.square(var - mean)))
            tf.summary.scalar('stddev', stddev)
            tf.summary.scalar('max', tf.reduce_max(var))
            tf.summary.scalar('min', tf.reduce_min(var))
            tf.summary.histogram('histogram', var)
Looking at the documentation of tf.truncated_normal, I expected to get values around -0.1 and +0.1, but that's not the case, as you can see below.
So my question is: what am I missing here?
Thanks in advance!
Upvotes: 1
Views: 1014
Reputation: 2019
According to the documentation:
values whose magnitude is more than 2 standard deviations from the mean are dropped and re-picked.
When you use tf.truncated_normal(shape, stddev=0.1), your standard deviation is 0.1 and the mean is the default 0. Therefore, you only get numbers between -0.2 and +0.2 (since 0.2 is two standard deviations).
If you're wondering why your histogram image seems to show samples just above +0.2 and below -0.2, the reason has to do with how TensorBoard creates histograms from your data:
TensorFlow [...] doesn't create integer bins. [...] Instead, the bins are exponentially distributed, with many bins close to 0 and comparatively few bins for very large numbers. [...] Visualizing exponentially-distributed bins is tricky; [...] Instead, the histograms resample the data into uniform bins. This can lead to unfortunate artifacts in some cases.
Therefore, these histograms are good indications of the rough distribution of your data, but sometimes you may want to create your own visualizations or metrics.
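For example, if you want bins you control, you can pull the values out of the graph and plot them yourself. This is a minimal sketch, assuming NumPy and Matplotlib are available; the bin edges are an arbitrary choice around the expected range:

import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf

with tf.Session() as sess:
    values = sess.run(tf.truncated_normal([100000], stddev=0.1))

# Uniform, hand-chosen bins, so the plot has no resampling artifacts.
plt.hist(values, bins=np.linspace(-0.25, 0.25, 51))
plt.xlabel('value')
plt.ylabel('count')
plt.show()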
Upvotes: 3