Reputation: 35
Disclaimer: I'm definitely a Tensorflow / Keras beginner, so apologies if this should be clear from the docs or is a nonsensical question.
Basically, I am looking to run multiple Tensorflow threads in parallel while using a single instance of the model in order to process jobs from a Redis queue.
So I had something like:
from models import SomeModel

model = SomeModel()

def run():
    while True:
        item = my_redis_connection.zpopmin('someKey')
        if len(item) > 0:
            pass  # process the job

if __name__ == '__main__':
    run()
I initially tried running multiple tmux sessions, but immediately ran into out-of-memory errors, so I then attempted to use Python threading like this:
if __name__ == '__main__':
    thread_one = threading.Thread(target=run)
    thread_two = threading.Thread(target=run)
    thread_one.start()
    thread_two.start()
but got the error: Tensor Tensor("concatenate_6/concat:0", shape=(?, ?, ?, 2), dtype=float32) is not an element of this graph.
Is there a way to do what I'm describing? Or do I just need more GPUs to be able to run in parallel tmux sessions?
Thanks!
Upvotes: 0
Views: 318
Reputation: 2440
Capture the default graph right after you build the model, then enter it inside the worker:
import tensorflow as tf

# Capture the graph the model was built in (call this right after creating the model).
graph = tf.get_default_graph()

def run():
    with graph.as_default():
        while True:
            item = my_redis_connection.zpopmin('someKey')
            if len(item) > 0:
                pass  # process the job
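Putting it together with the threading code from the question, the overall shape looks roughly like this (a sketch only; SomeModel and my_redis_connection are the question's placeholders):

import threading
import tensorflow as tf
from models import SomeModel

model = SomeModel()              # build/load the model once
graph = tf.get_default_graph()   # capture the graph it was built in

def run():
    # Re-enter the model's graph in each worker thread to avoid the
    # "is not an element of this graph" error.
    with graph.as_default():
        while True:
            item = my_redis_connection.zpopmin('someKey')
            if len(item) > 0:
                pass  # run model.predict(...) on the job here

if __name__ == '__main__':
    thread_one = threading.Thread(target=run)
    thread_two = threading.Thread(target=run)
    thread_one.start()
    thread_two.start()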
Also, you don't really need two GPUs for two sessions; you can cap how much GPU RAM TensorFlow uses per process:
import tensorflow as tf
from keras.backend.tensorflow_backend import set_session
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.5  # fraction of GPU memory this session may use
set_session(tf.Session(config=config))
The 0.5 means this session will use at most half of the GPU memory.
BTW, if your goal is to make it run faster, sharing a single graph won't really speed things up even with multiple threads; I think you should use separate sessions in that case.
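If you do go the separate-sessions route, one possibility (a sketch only, not tested against your setup) is to run one process per worker, each capping its own share of GPU memory and loading its own copy of the model:

import multiprocessing
import tensorflow as tf
from keras.backend.tensorflow_backend import set_session
from models import SomeModel

def worker():
    # Each process gets its own TF session with a capped GPU memory share.
    config = tf.ConfigProto()
    config.gpu_options.per_process_gpu_memory_fraction = 0.4
    set_session(tf.Session(config=config))

    model = SomeModel()   # each process builds its own graph and model
    redis_conn = ...      # open a fresh Redis connection per process
    while True:
        item = redis_conn.zpopmin('someKey')
        if len(item) > 0:
            pass  # process the job with `model`

if __name__ == '__main__':
    for _ in range(2):
        multiprocessing.Process(target=worker).start()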
Upvotes: 1