Reputation: 47
I am learning TensorFlow to code neural networks, and I have noticed that there are two general "approaches" to programming a NN in TensorFlow. I would like to know what the difference is.
There is this simple way, using predefined functions and options in TensorFlow (for instance):
import tensorflow as tf
import numpy as np
from tensorflow import keras
xs = np.array([1, 2, 3, 4, 5, 6], dtype=int)
ys = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5], dtype=float)
model = keras.Sequential([
    keras.layers.Dense(units=1, input_shape=[1])
])
model.compile(optimizer='sgd', loss='mean_squared_error')
model.fit(xs, ys, epochs=500)
The other approach is to define the computational graph explicitly, using placeholders, explicit loss functions, batches, etc. Something like this:
x = tf.placeholder(..., dtype='float')
y = tf.placeholder(..., dtype='float')
...
with tf.Session() as sess:
    sess.run(init)
    opt = sess.run(optimizer, ...)
    loss, acc = sess.run([cost, accuracy], ...)
(Similar to what can be found in this tutorial: https://www.datacamp.com/community/tutorials/cnn-tensorflow-python)
What is the difference between these two approaches? Almost all questions on Stack Overflow and most TensorFlow tutorials use the second approach, but the first one is much simpler (in fact, it is the one used in the TensorFlow course from deeplearning.ai on Coursera).
Upvotes: 1
Views: 58
Reputation: 2086
Back in the day, older versions of TensorFlow did not support dynamic computation, so you had to build a graph (just a buzzword for a structure of variables, constants, and operations) and then perform the computation inside a "Session". This was solved by TensorFlow 2.0, which introduced eager execution.
Keras is just a high-level API on top of TensorFlow, so you should really compare eager execution to Session-mode computation, not Keras vs. "Session".
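To see the contrast, here is a minimal sketch of eager execution in TF 2.x (assuming TensorFlow 2 is installed) — the same kind of computation that previously required building a graph and calling sess.run():

```python
import tensorflow as tf

# In TF 2.x eager execution is on by default: operations run
# immediately and return concrete values, no graph or Session.
a = tf.constant([1.0, 2.0])
b = tf.constant([3.0, 4.0])
c = a + b          # evaluated right away
print(c.numpy())   # inspect the result directly, no sess.run() needed
```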
Upvotes: 1
Reputation: 6220
The first approach is simpler, and you should use it, especially if you're new to this domain and/or these tools. It uses Keras, which is now the high-level API for TensorFlow, meant to simplify NN programming. You'll only need code of the second type for very specific tasks for which Keras is too high-level.
The second approach is the "historic" one, which corresponds to using only the core of TensorFlow. A lot of the documentation uses it because Keras did not exist at first, and even after it was created, it took some time to become the standard way to use TF and to be incorporated into TF, so a lot of people did not use it, and some of them still have not switched to Keras.
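For those specific low-level tasks, the modern replacement for Session-style code is a custom training loop with tf.GradientTape. As an illustration (a sketch, assuming TF 2.x; it fits the same toy data as the Keras snippet in the question):

```python
import numpy as np
import tensorflow as tf

# Same toy linear data as in the question, shaped (samples, features).
xs = np.array([[1], [2], [3], [4], [5], [6]], dtype=np.float32)
ys = np.array([[1.0], [1.5], [2.0], [2.5], [3.0], [3.5]], dtype=np.float32)

model = tf.keras.Sequential([tf.keras.layers.Dense(units=1)])
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

for epoch in range(500):
    with tf.GradientTape() as tape:
        preds = model(xs)                             # forward pass, recorded on the tape
        loss = tf.reduce_mean(tf.square(preds - ys))  # mean squared error, defined by hand
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
```

This gives you the fine-grained control of the old graph/Session style (explicit loss, explicit update step) while still running eagerly, so you can debug it line by line like ordinary Python.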
Upvotes: 0