dooder

Reputation: 539

TF-slim layers count

Would the code below represent one or two layers? I'm confused because isn't there also supposed to be an input layer in a neural net?

input_layer = slim.fully_connected(input, 6000, activation_fn=tf.nn.relu)
output = slim.fully_connected(input_layer, num_output)

Does that contain a hidden layer? I'm just trying to be able to visualize the net. Thanks in advance!

Upvotes: 2

Views: 480

Answers (2)

Thomas Wagenaar

Reputation: 6769

From tensorflow-slim:

Furthermore, TF-Slim's slim.stack operator allows a caller to repeatedly apply the same operation with different arguments to create a stack or tower of layers. slim.stack also creates a new tf.variable_scope for each operation created. For example, a simple way to create a Multi-Layer Perceptron (MLP):

# Verbose way:
x = slim.fully_connected(x, 32, scope='fc/fc_1')
x = slim.fully_connected(x, 64, scope='fc/fc_2')
x = slim.fully_connected(x, 128, scope='fc/fc_3')

# Equivalent, TF-Slim way using slim.stack:
slim.stack(x, slim.fully_connected, [32, 64, 128], scope='fc')

So the network built here is a [32, 64, 128] network: three stacked fully-connected layers with output sizes 32, 64, and 128. The 64-unit layer is a hidden layer sitting between the first and last fully-connected layers.
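As a sanity check on the shapes, the three stacked layers from the snippet above can be sketched in plain NumPy (the input width of 16 and the tiny random weights are arbitrary placeholders, not anything from TF-Slim):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)
x = np.asarray(rng.standard_normal((1, 16)))  # batch of 1, 16 input features (placeholder)

# Three fully-connected layers, mirroring
# slim.stack(x, slim.fully_connected, [32, 64, 128], scope='fc').
# slim.fully_connected applies ReLU by default, so each step is relu(x @ W + b).
for n_out in [32, 64, 128]:
    W = np.asarray(rng.standard_normal((x.shape[1], n_out))) * 0.01
    b = np.zeros(n_out)
    x = relu(x @ W + b)

print(x.shape)  # final layer has 128 units
```

Each iteration consumes the previous layer's width as its input dimension, which is exactly what stacking fully-connected layers means.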

Upvotes: 0

Sam P

Reputation: 1841

[Image: diagram of a feed-forward neural network with an 'Input' layer, a 'Hidden' layer, and an 'Output' layer]

You have a neural network with one hidden layer. In your code, input corresponds to the 'Input' layer in the above image. input_layer is what the image calls 'Hidden'. output is what the image calls 'Output'.

Remember that the "input layer" of a neural network isn't a traditional fully-connected layer: it's just the raw data, with no weights and no activation. "Layer" is a bit of a misnomer there. The neurons drawn in the input layer of the picture above are not the same kind of unit as the neurons in the hidden or output layers.
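This can be made concrete with a plain NumPy sketch of the question's two slim.fully_connected calls (the input width of 10, the random weights, and num_output = 3 are illustrative assumptions, not values from the question):

```python
import numpy as np

rng = np.random.default_rng(42)
inputs = np.asarray(rng.standard_normal((4, 10)))  # raw data: the "input layer" owns no weights
num_output = 3

# Hidden layer: the first slim.fully_connected call (ReLU is TF-Slim's default activation)
W1 = np.asarray(rng.standard_normal((10, 6000))) * 0.01
b1 = np.zeros(6000)
hidden = np.maximum(0.0, inputs @ W1 + b1)

# Output layer: the second slim.fully_connected call
W2 = np.asarray(rng.standard_normal((6000, num_output))) * 0.01
b2 = np.zeros(num_output)
output = hidden @ W2 + b2

# Only two weight matrices exist; the input "layer" contributes none.
print(len([W1, W2]))  # 2 trainable layers
print(output.shape)   # (4, 3)
```

Counting trainable weight matrices is the usual convention: this network has two layers (one hidden, one output), even though diagrams draw three columns of neurons.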

Upvotes: 1
