Pickniclas

Reputation: 359

When should I use Tensorflow variables and when numpy or python variables

I am new to TensorFlow 2.0 and am trying to get familiar with the library. I have worked a lot with NumPy and noticed that NumPy and TensorFlow variables are 'compatible'. But if I use a NumPy variable or object within TensorFlow, does TF have to convert it to a tensor each time? When working with TensorFlow, should I just initialize everything as a TF variable, or should I decide based on whether it will be used by NumPy or TF? A lot of mathematical operations also seem to be implemented in tf.math; should I ditch the NumPy operations altogether? By the way, I won't use TensorFlow for machine learning but mainly TensorFlow Probability for sampling etc.
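To illustrate the kind of mixing I mean, here is a minimal sketch (the array names are just examples) of a NumPy array being passed straight into a TensorFlow op and converted back:

```python
import numpy as np
import tensorflow as tf

x = np.array([1.0, 2.0, 3.0])

# TensorFlow ops accept NumPy arrays directly; the array is
# implicitly converted to a tensor on each call.
y = tf.square(x)

# Converting back to NumPy is just as easy in eager mode.
back = y.numpy()

# An explicit one-time conversion avoids repeated implicit conversion
# when the same array is fed into many ops.
t = tf.convert_to_tensor(x)
```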

Upvotes: 3

Views: 330

Answers (1)

guorui

Reputation: 891

Generally speaking, there are three kinds of variables in TensorFlow.

  • var = tf.compat.v1.placeholder() defines a placeholder, which is used to receive and feed training data. (Note that placeholders belong to TensorFlow 1.x graph mode; with eager execution in TensorFlow 2.x they are no longer needed and are only available through the tf.compat.v1 API.)
  • var = tf.constant() creates a constant tensor.
  • var = tf.Variable() defines a variable that is trainable. TensorFlow will automatically differentiate with respect to this kind of variable. For example, the weights and biases of a neural network should be defined using tf.Variable().

A NumPy array is often used to initialize both var = tf.constant() and var = tf.Variable(). A placeholder doesn't need to be initialized.
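A minimal sketch of that initialization pattern (the array shape and values here are arbitrary examples):

```python
import numpy as np
import tensorflow as tf

# A NumPy array used as the initial value.
init = np.ones((2, 2), dtype=np.float32)

c = tf.constant(init)   # immutable tensor
v = tf.Variable(init)   # mutable, trainable by default

# Unlike constants, Variables can be updated in place.
v.assign_add(init)
```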

By the way, here is a simple tutorial which contains some hands-on examples and may help you get familiar with Tensorflow as quickly as possible.

Upvotes: 2

Related Questions