Reputation: 21
My TensorFlow 2.x code is below:
import tensorflow as tf
import os
import tensorflow_datasets as tfds
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)
a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
with tf.device('/TPU:0'):
    c = tf.matmul(a, b)
print("c device: ", c.device)
print(c)
@tf.function
def matmul_fn(x, y):
    z = tf.matmul(x, y)
    return z
z = strategy.run(matmul_fn, args=(a, b))
print(z)
My 1.x code is below:
%tensorflow_version 1.x
import tensorflow as tf
import os
import tensorflow_datasets as tfds
tpu_address = 'grpc://' + os.environ['COLAB_TPU_ADDR']
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu_address)
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)
a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
with tf.device('/TPU:0'):
    c = tf.matmul(a, b)
print(c)
def matmul_fn(x, y):
    z = tf.matmul(x, y)
    return z
with tf.Session() as sess:
    with strategy.scope():
        z = strategy.experimental_run_v2(matmul_fn, args=(a, b))
        print(sess.run(z))
Finally, I am confused about how to use the TPU with TensorFlow 1.x on Colab.
Upvotes: 0
Views: 961
Reputation: 199
To create variables on the TPU, you can create them in a strategy.scope()
context manager. The corrected TensorFlow 2.x code is as follows:
import tensorflow as tf
import os
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)
with strategy.scope():
    a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
    b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
@tf.function
def matmul_fn(x, y):
    z = tf.matmul(x, y)
    return z
z = strategy.run(matmul_fn, args=(a, b))
print(z)
This runs the tf.function on all TPU replicas and gives this result:
PerReplica:{
0: tf.Tensor(
[[22. 28.]
[49. 64.]], shape=(2, 2), dtype=float32),
1: tf.Tensor(
[[22. 28.]
[49. 64.]], shape=(2, 2), dtype=float32),
2: tf.Tensor(
[[22. 28.]
[49. 64.]], shape=(2, 2), dtype=float32),
3: tf.Tensor(
[[22. 28.]
[49. 64.]], shape=(2, 2), dtype=float32),
4: tf.Tensor(
[[22. 28.]
[49. 64.]], shape=(2, 2), dtype=float32),
5: tf.Tensor(
[[22. 28.]
[49. 64.]], shape=(2, 2), dtype=float32),
6: tf.Tensor(
[[22. 28.]
[49. 64.]], shape=(2, 2), dtype=float32),
7: tf.Tensor(
[[22. 28.]
[49. 64.]], shape=(2, 2), dtype=float32)
}
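As a side note, if you want a single tensor rather than the per-replica results, strategy.reduce can combine them in TF 2.x. Here is a minimal sketch, assuming the strategy, matmul_fn, a, and b defined above:
# Sketch: combine the per-replica results into one tensor (TF 2.x).
# Assumes `strategy`, `matmul_fn`, `a`, and `b` are defined as above.
per_replica_z = strategy.run(matmul_fn, args=(a, b))

# Sum the 2x2 results across all replicas (use ReduceOp.MEAN to average instead).
reduced_z = strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_z, axis=None)
print(reduced_z)  # a single 2x2 tensor; on an 8-core TPU this is 8 * [[22., 28.], [49., 64.]]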
To evaluate the same function on the TPU in TF 1.x, you'll need to change strategy.run to strategy.experimental_run_v2, create a tf.Session on the TPU, and call sess.run() on the list of values returned by experimental_run_v2.
The initial setup is the same. Change the with block to the following:
with strategy.scope():
    a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
    b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])

@tf.function
def matmul_fn(x, y):
    z = tf.matmul(x, y)
    return z

with tf.Session(tpu_address) as sess:
    z = strategy.experimental_run_v2(matmul_fn, args=(a, b))
    print(sess.run(z.values))
This gives the following result:
(array([[22., 28.],
[49., 64.]], dtype=float32), array([[22., 28.],
[49., 64.]], dtype=float32), array([[22., 28.],
[49., 64.]], dtype=float32), array([[22., 28.],
[49., 64.]], dtype=float32), array([[22., 28.],
[49., 64.]], dtype=float32), array([[22., 28.],
[49., 64.]], dtype=float32), array([[22., 28.],
[49., 64.]], dtype=float32), array([[22., 28.],
[49., 64.]], dtype=float32))
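If you only need the result from a single replica, you can index into the per-replica values instead; a minimal sketch, assuming the z and sess from the snippet above:
# Fetch only replica 0's result instead of the whole tuple (TF 1.x).
# Assumes `z` and `sess` from the snippet above.
first = sess.run(z.values[0])  # the 2x2 result computed on replica 0
print(first)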
I hope this answers your question. For more information about running TensorFlow on TPUs, see the TensorFlow TPU guide. For more information on using distribution strategies without Keras, see the guide on distribution strategies with custom training loops.
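As a rough sketch of what that guide covers, a custom training step under TPUStrategy might look like the following; the model, optimizer, loss, and global batch size here are hypothetical placeholders, not part of the question:
# Sketch of a custom training step under TPUStrategy (TF 2.x).
# The model, optimizer, loss, and global batch size are placeholders.
with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(2)])
    optimizer = tf.keras.optimizers.SGD(0.01)
    loss_fn = tf.keras.losses.MeanSquaredError(
        reduction=tf.keras.losses.Reduction.NONE)

GLOBAL_BATCH_SIZE = 64  # placeholder

@tf.function
def train_step(x, y):
    def step_fn(x, y):
        with tf.GradientTape() as tape:
            preds = model(x, training=True)
            # Per-example losses are averaged over the *global* batch size,
            # because each replica only sees its own slice of the batch.
            loss = tf.nn.compute_average_loss(
                loss_fn(y, preds), global_batch_size=GLOBAL_BATCH_SIZE)
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        return loss

    per_replica_loss = strategy.run(step_fn, args=(x, y))
    # Sum the per-replica losses into one scalar for logging.
    return strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_loss, axis=None)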
Upvotes: 0